Words Of Truth "That I might make thee know the certainty of the words of truth..." (Proverbs 22:21). An Overview Of The Old Testament Part 205 – Wisdom And Inheritance Through Fair Judgment (Ecclesiastes 7:11-22) 1. What is good with an inheritance? 2. In what way does wisdom help a person that money cannot? 3. Can anyone amend something that God has set in a certain way? 4. Has God only given us days of prosperity? 5. Is it possible for a wicked person to have a better life in this world than a righteous person? 6. Must we be careful to avoid extremes in regard to righteousness, wisdom, wickedness, and foolishness? 7. Is wisdom stronger than physical strength? 8. Does the following Scripture apply to all times since humanity has existed: “For there is not a just man upon earth, that doeth good, and sinneth not” (Ecclesiastes 7:20)? 9. Should we take into account things we’ve done that are wrong when we consider what others are doing as well? © 2013 Feel free to use the material on this website, but nothing is to be used for sale! – Brian A. Yeager
So, what is the exposure triangle? Well, the exposure triangle is the relationship between aperture, shutter speed and ISO. Each one affects the others. Remember Ohm's law? Well, if you don't (I'm not even sure they teach it at school these days), it's basically an electrical equation: V volts (voltage) = R ohms (resistance) x I amps (current), and if you change one you change the others. If you know the voltage and the resistance then, by rearranging the equation to volts ÷ ohms = amps, you can work out the current. It's the same for the exposure triangle. The Three Points of the Exposure Triangle 1. Aperture The aperture is a hole that lets light in. The bigger the hole, the more light you let in. What does that mean? It means the smaller the hole, the more of the scene is in focus; the bigger the hole, the less is in focus. Let's say you took a picture with a setting of f2.4. Because the size of the hole is inversely proportional to its setting, that's a large hole. The subject will be in focus but the background will be blurry. But if you took a picture with a small hole, e.g. a setting of f22, then everything will be in focus. That's not strictly true, as the zone in focus starts at the point at which you are focusing the camera and extends out towards, potentially, infinity. 2. Shutter speed This is an easier one to grasp. As with the aperture, the longer the shutter is open, the more light you allow in. Shutter speeds are measured in fractions of a second up to whole seconds. So, a shutter speed setting of 1/100 is one hundredth of a second, or 0.01 seconds. A setting of 5 is 5 seconds. However, the effect is totally different from the aperture. A fast shutter speed will generally freeze motion, e.g. sports, whilst a slow shutter speed will blur motion, e.g. a fluid-looking waterfall. 3. ISO ISO stands for International Organization for Standardization and in the film days was an indicator of how sensitive to light a film was. An ISO 400 film was 4 times as sensitive to light as an ISO 100 film.
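The Ohm's law analogy above can be sketched in a few lines: knowing any two of the three quantities lets you solve for the third, just as fixing two corners of the exposure triangle determines the third. This is a minimal illustration, not from the original article; the function names are my own.

```python
# Ohm's law, as in the analogy above: V = R * I.
# Rearranged, any one quantity follows from the other two.

def current_amps(volts, ohms):
    # I = V / R
    return volts / ohms

def voltage_volts(ohms, amps):
    # V = R * I
    return ohms * amps

# e.g. a 12 V supply across a 4-ohm resistor draws 3 A.
```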
On digital cameras that sensitivity applies to the camera sensor. When the ISO is increased or decreased you are, in effect, making the camera sensor more or less sensitive to light, ISO 100 being the least sensitive whilst ISO 25600 plus is the most sensitive. So, there you have the three points of the exposure triangle. I've already mentioned how each one affects the others, but let's see how that plays out in, for example, product photography, and look at a couple of specific examples. OK, so we've got this new beaut product that we want to take to the marketplace and we want to create some really cool images to use for marketing. Let's say it's this toy soldier. We're taking the picture inside, which is fairly bright, even though we're using just natural light. To make sure we get a properly exposed image we are going to take a couple at different settings. The first image has an aperture of f5 and an ISO of 800, whilst the camera has calculated the shutter speed. You can see everything is in focus and the light looks fairly normal. But look closely and you'll see that it looks a bit soft, i.e. slightly blurred. That's because the shutter speed is too low at 1/20, which means I couldn't hold the camera still enough. The second image had an aperture of f1.2 and an ISO of 2000. Here you can see that the soldier is in focus whilst the background is blurred (bokeh). That's a fairly common way of ensuring that the spotlight is on the product and the viewer isn't distracted by the background. Because of the high ISO the shutter speed is 1/200, which means the image is much sharper. That's because the shutter speed is now 10 times as fast, which is a lot more forgiving. As an aside, if you think you can hold a camera still regardless, think again; our hands move without us knowing. However, the disadvantage of a high ISO is that the image may have noise, which may or may not be an issue.
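The trade-offs described above can be checked numerically with the standard ISO-adjusted exposure value (EV): two combinations of settings with the same EV record the same overall amount of light. This is a small sketch I've added for illustration; the function name is my own, and the small residual difference in the first comparison comes from f-numbers being rounded (f/5.6 is really f/4 times the square root of 2).

```python
import math

def exposure_value(f_number, shutter_seconds, iso):
    """ISO-adjusted exposure value: equal EV means equal overall exposure,
    however aperture, shutter speed and ISO are traded off against each other."""
    return math.log2(f_number**2 / shutter_seconds) - math.log2(iso / 100)

# Opening up one stop (f/5.6 -> f/4) while doubling the shutter speed
# (1/100 -> 1/200) leaves the exposure essentially unchanged:
a = exposure_value(5.6, 1/100, 100)
b = exposure_value(4.0, 1/200, 100)

# Doubling the shutter speed AND doubling the ISO also cancels out exactly:
c = exposure_value(5.6, 1/200, 200)
```

This is why, in the toy-soldier example, raising the ISO let the camera pick a much faster shutter speed without under-exposing the shot.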
As a rule of thumb, with newer cameras the higher the maximum ISO setting on the camera, the less likely noise will occur at lower settings. E.g. the OMD1 MKII mirrorless camera that I have has a maximum setting of 25600, and realistically I don't really see any noise in an image until around ISO 1000. Compare that with my Galaxy S8 smartphone camera, which has a maximum ISO of 800; with this one I notice noise at around ISO 400. Of course, there are ways of reducing or getting rid of noise in post-production with software like Adobe Lightroom, so it's not that big an issue. Below are two examples of noise from two different cameras. There you have it, that's the exposure triangle. Did you understand all that? I'm happy to clarify anything you're not sure of. Just post in the comments section below. Don't be shy. Thanks for reading this article. There must have been something that piqued your interest. Is it that you see yourself taking some great travel photos that you can share or display? Or is it that you can see yourself reliving your travel experience by bringing home some emotive travel photos? Maybe you aspire to getting your travel photos published. If one or all of these is YOUR goal, I can help. If you sign up in the box below you'll get my free eBook "9 ways to improve your Travel Photography". At the same time, you will also subscribe to my Travel Photo Tips Newsletter. Remember, if you want to take great travel photos • that you can share and display • that help you relive your travel experience • that give you a chance to have your travel photos published then sign up below and subscribe to my Travel Photo Tips Newsletter and, for your trouble, get my eBook "9 ways to improve your Travel Photography". Any questions? Then please write your comments below or contact me here, and please say hello at these places:
Spiritual enlightenment can be described as the "full comprehension of a situation." The term is often used interchangeably with spiritual awakening, but it is also used to refer to the Age of Enlightenment, which began with the Buddha's enlightenment. It translates a number of Buddhist ideas and terms. In Buddhist vocabulary, spiritual enlightenment does not simply mean "awakening." In Mahayana Buddhism, enlightenment is not a state of mind that can be achieved once and for all; rather, it is a series of related awakening experiences, shaped by how the person who undergoes them approaches the process. For Buddhists, there is no such thing as enlightenment conferred from outside. While Buddhists believe that enlightenment is possible, there is no religious organization that can produce it for other people. This is actually a useful belief: if enlightenment were a substance that could be bestowed, the materialistic world would be incomplete, since there would be no need to seek enlightenment in the first place. Enlightenment requires no special information or skill. However, it is possible to move toward enlightenment through personal understanding, study, practice, and so on. It is not a matter of being able to think outside oneself, nor of attaining particular levels of spiritual intelligence or some kind of esoteric knowledge. Most people who experience enlightenment never expect to become a Buddha. That does not mean enlightenment is impossible, however. Just as enlightenment can come to someone born into a religious tradition, it can come to anyone who chooses to take up the Buddha's teaching through the practice of meditation and contemplation.
The aim of anyone who seeks enlightenment is to reach the state of mind that allows him or her to fully understand his or her relationship with existence. Enlightenment requires a kind of discipline. People who seek spiritual enlightenment usually make some form of commitment to their beliefs before they begin their search. Whether they choose Buddhism or another religion is secondary; what matters is that they commit themselves to a new way of thinking and acting, especially with regard to the question of life's meaning. Spiritual enlightenment does not require a person to change his or her whole attitude toward life. It does not require going from being unaware of one's spiritual past to thinking in terms of the doctrines of Christianity or Judaism. To attain spiritual enlightenment, a person must first recognize the value of his or her religion and commit to understanding it. Emancipation from religion, and the freedom to seek enlightenment, are two crucial principles on the path leading to enlightenment. In fact, spiritual enlightenment is more than simply finding God or Buddha; it is about learning to make the most of life, without expectations about who one is and what one is meant to be. There are several paths that lead to enlightenment. Some people choose to travel the spiritual path within a religious community, while others turn to alternative spiritual traditions such as Tibetan Buddhism. Each of these paths has its own set of practices and ideas, and each has its benefits and risks. Enlightenment requires the individual to learn about the physical body. The mind can be reached through meditation, contemplation, and practice. While meditation helps the soul reach Nirvana, contemplation helps the body reach Nirvana.
The mind must be emptied of all thoughts and ideas related to existence and death. Enlightened beings take responsibility for their actions, because they have taken the first step toward enlightenment in order to reach Nirvana. They recognize that they must find answers to their questions and come to grips with what they already know: that they are here on this planet to live and experience, to work and love, and to live, suffer, and love. Enlightenment calls for patience and practice. The mind must become comfortable with its surroundings and with its responsibilities. After a period of time, the person will reach Nirvana, and at that point will be at ease.
Types of dialysis supply In terms of the Report on dialysis treatment and kidney transplants in Germany of the Project Office "Quality Control in Renal Replacement Therapy" (Quasi-Niere gGmbH), there are different procedures of dialysis: Centre dialysis: dialysis treatment of patients inside a dialysis facility who require the permanent presence of a practitioner. LC dialysis (limited care): the patient is inside a dialysis facility and performs the dialysis largely independently. Partly-inpatient dialysis: a dialysis procedure for patients in whom an additional health risk requires close monitoring and the opportunity to take the patient to an intensive care unit if necessary. Inpatient dialysis: a dialysis procedure for patients who are hospitalized because of a severe illness, even if the illness is not related to dialysis. Selected information about types of dialysis supply: - Kidney substitute therapy, dialysis patients (1997-2006) - Kidney substitute therapy, places for hemodialysis (1997-2006) - Renal replacement therapy, new cases (starting 2010) - Renal replacement therapy, treatment procedures (starting 2010) Further information can be found via the topic or keyword search.
Earth Day 2020 will be different from Earth Day activities in the past. With the coronavirus keeping us away from our students we've had to get creative with our activities. But we're determined to keep educating our students about the importance of being a good custodian of the Earth. Premier Homeschool Consultants is offering online education opportunities through Outschool so we can reach children all around the world. Check out some of our classes below: Let's celebrate Earth Day by learning about the environmental impact of our food choices. In this class students will watch a brief slideshow and listen to a lecture as I explain the global rise of protein consumption. After this I will guide students through the environmental impact of raising 1 pound each of beef, pork, poultry, and soy through a hands-on activity. Students will create a visual representation of the resources required for each of these foods. Additionally, we will work as a group to analyze line graphs that represent the past, present, and future rate of protein consumption. After these are complete we will discuss as a class the impact our food choices have on the environment and what we can do in the future to make environmentally friendly choices. earth pop art: parts of speech (beginner) earth pop art: parts of speech (intermediate) Videos are such a fun way to introduce or explore new topics. For visual learners, videos can be the key to understanding complex concepts. With the freedom that homeschooling gives us, we can explore any topic that our kids find interesting instead of having to stick to a rigid set of standards. That way our kids are happy to learn and we're happy to discuss complex topics with our soon-to-be-adults! Jen and I have discussed some of our favorite videos from Amazon Prime that we use to teach social history. Here they are in no particular order! Hope you find a new teaching tool your teen loves!
The Men Who Built America How We Got to Now Jonestown: Paradise Lost We hope you found a new must-watch video among our favorites! We have used these videos for homeschool kids in grades 8-12; it really depends on what the teen is interested in and their learning style. Follow our blog for more teaching tools and lessons that we use this year with our homeschoolers! Have an opinion about one of these videos? Teaching tips for other homeschool parents? Comment or share and we'll add them to the post! Choosing a school for your child can be a daunting task, especially when you consider the influence that choice will have on your child's future. Most parents think they have to choose either public, private, or online education. However, there is an alternative to these choices that could lead to your child growing by leaps and bounds this year. One teaching method that is extremely beneficial for many children is one-on-one in-person teaching. Why is one-on-one education so important to some children? Read on to find out: Reason 2: Adaptive to learning style and special needs One-on-one in-person education allows your child's teacher to tailor lessons to fit any learning style or special needs. In traditional public and private schools most teaching is geared toward students who learn either through listening, reading, or writing. This leaves many children behind, not because they can't do the work but because they can't learn the way they're being taught. With one-on-one education provided by experienced professionals, all student needs can be addressed so no child gets left behind. There are many theories of learning styles that teachers know about but are often unable to apply in a classroom setting. When a teacher only has one student, there are suddenly many choices for how a lesson can be taught, especially if that teacher has experience working with other children who have had the same experiences.
Additionally, when students have special needs, whether EBD or LD, there are limited interventions that traditional schools can offer. With one-on-one education the teacher has time to learn about special needs in general and your child's needs specifically, and to adapt lessons to best benefit your child. Reason 3: Personalized curriculum When your child is taught one-on-one and in-person, every piece of curriculum is chosen specifically for him or her. Every assignment can be tailored to your child's needs, interests, and abilities. No child can fall behind in a one-on-one environment because the teacher won't move on until their student fully understands a concept. If your child is having a difficult day in a one-on-one school environment, the teacher can easily improvise a new lesson, unlike at a traditional school where teachers have one lesson that must be followed for the day. With the ability to work with your child's interests, teachers are able to show more real-world application activities for their students and introduce critical thinking skills that turn struggling students into life-long learners. Because of the unique teaching techniques PHC's teachers can use, we rarely hear the question, "When am I ever going to use this?" Home-based private tutors for full-time education are within your grasp at Premier Homeschool Consultants (PHC). We will create a program specifically for your child that constantly adapts to meet his or her needs while also meeting the educational standards for the state of Minnesota. The main benefit of choosing PHC for your child's schooling this year is the consistent one-on-one in-person teaching. This is a need for many children, and unfortunately there are few school systems in Minnesota that can provide this for your family. PHC, however, is fully prepared to work with your family to design a program that is tailored to your child's learning style, special needs, personality, and even specific interests.
Call us today at 612.643.1132 or email us at [email protected] to learn more about how Premier Homeschool Consultants can help your child experience educational success!
I Have Heard Thy Speech And Was Afraid Outline By: Brian A. Yeager I. Introduction: A. People often think that God will not hold them accountable for their deeds (Zephaniah 1:12). 1. Some like to think that God will not remember their evil deeds (Psalms 10:11-13). 2. Some think they can hide their sins from God (Psalms 64:5, Psalms 73:11, and Ezekiel 8:12). 3. Some think the Judgment Day isn't ever coming (II Peter 3:3-4). B. The Lord cannot lie (Titus 1:2) and He has PROMISED... 1. Judgment upon the world wherein everyone will be accountable (John 5:28-29, Acts 17:31, and II Peter 3:9-10). 2. That He will not forget sins that aren't repented of (Jeremiah 14:10 and Romans 14:12). 3. That no one can hide his or her evil deeds from Him (Psalms 44:21, Psalms 94:7-11, and Revelation 2:20-23). II. Body: "…I have heard thy speech, and was afraid…" (Habakkuk 3:1-2). A. The prophet Habakkuk had seen visions of things he was to write about so that people would run after reading it (Habakkuk 2:1-4). 1. The judgment of our Lord is something that will shake you up (I Samuel 3:11, II Kings 21:12, and Jeremiah 19:3). 2. God's judgment is to be feared (Psalms 119:120 and Hebrews 10:26-31). 3. Think about this… God had asked His prophets to warn the people from Him (Ezekiel 3:17-21). B. Some people have the attitude of, "I will just let God do whatever He wants to me" as they mock the Lord (Isaiah 5:18-19 and Jeremiah 17:15). 1. Do you really want to find out what God's worst looks like (Matthew 10:28 and Mark 9:42-47)? 2. Those who perish, being judged by God, will have no second chances (Proverbs 29:1 and Matthew 25:41). 3. There is no escape (Romans 2:1-3). C. Hear what the Lord has said, fear it, and act upon it (Philippians 2:12). 1. Don't get "at ease", but serve Him with fear (Psalms 2:11). 2. Tremble at God's word (Isaiah 66:2). 3. Let fear move you (Hebrews 11:7). 4. Christ is the source of salvation to those who obey (Hebrews 5:8-9). III.
Conclusion: After fear moves you to our Lord, let love keep you in Him (John 14:23). © 2012 This material may not be used for sale or other means to have financial gain. Use this as a tool for your own studies if such is helpful! Preachers are welcome to this work, but please do not use my work so that you can be lazy and not do your own studies. – Brian A. Yeager
To what extent did the American Revolution fundamentally change American society? In your answer be sure to address the political, social, and economic effects of the Revolution in the period from 1775 to 1800. Notes from Mr. Williams: This essay was given to 2nd period APUSH on their first in-class essay. Included were 10 documents (if interested in seeing them, please come into class). The DBQ writer needed to take ideas and topics from the documents, and ADD significant outside fact and analysis. Notice that this writer does a complete job. He/she deals with the entire question (deals with extent) and answers the question from economic, political and social points of view. Additionally, he/she used a significant amount of documents and included outside information. After the American Revolution, Americans, who were free of British control, started to reevaluate politics, the economy and society. After breaking away from what they thought was a corrupt and evil government, Americans changed how they wanted to govern their society, even though they ultimately reverted to a more centralized government similar to Britain's. The uneducated masses, as viewed by the elite, didn't experience a lot of change, though the ideals from the revolution still guided some to seek better financial opportunities. Women, slaves, and loyalists experienced a considerable amount of change in society as women experienced more freedoms, some slaves were set free, and loyalists left America. Overall, America didn't experience a lot of economic change, but it did experience, to varying degrees, political and social change.
A standard food pyramid is a triangular diagram representing how much of each food group you should eat daily for optimal health. The United States Department of Agriculture made its first food pyramid in 1992 based on meta-analyses. But starting in 2011, MyPlate has replaced the standard Food Pyramid. The problem with these kinds of guidelines is that they're very flawed, confusing, and generic. Critics also claim that food lobbyists played a much too influential role in shaping the standard food pyramid as we know it 1. Still, food pyramids can be a useful tool for learning about nutrition, even if you're on a keto diet. To help you learn about keto nutrition, the keto food pyramid gives you a visual representation of what this diet looks like. You'll see that this food pyramid is different from standard food pyramids, with some even being turned upside down, literally. A Look at the Keto Food Pyramid Unlike standard food pyramids, with carbs at their base and fats at the very tip, the keto food pyramid has fats at the base and carbs at the tip. The keto food pyramid is based on the macro recommendations for this particular diet: - 5% of calories coming from carbs - 25% of calories coming from proteins - 70% of calories coming from fats Macros, or macronutrients, are nutrients you need in large amounts for energy, development, and functioning. Some argue that water and fiber are also macronutrients. In contrast, micronutrients are those you need in small amounts for health and functioning. The latter include vitamins and minerals. The 70% of calories in the form of fat should ideally come from healthy sources on a keto diet. That's why foods like avocados, olive oil, coconuts, fish, and eggs are shown as the backbone of keto eating. So, it's not just about macronutrient ratios – the quality of the food these macros come from is also vital.
More on Following the Keto Food Pyramid You probably also noticed that the standard food pyramid lists whole grains, legumes, pasta, rice, and other carb-rich foods at the bottom. Well, the keto food pyramid completely excludes these sources of carbs and replaces them with: - Leafy Greens - Low-Carb Vegetables - Nuts & Seeds - Low-Carb Fruit These foods are generally low in carbs, but high in fiber. The reason it is important to stick to low-carb food choices is to: A. help lower your carb intake so you can reach ketosis B. still meet your minimal needs for carbohydrates as well as vitamins, minerals, and fiber. Although carbs are reduced to a minuscule 30 grams a day on keto, that doesn't mean these macros are unimportant or that you might just as well not eat them at all. Carbs are, in fact, very important — it's just that most people are eating too much. And as already mentioned, lowering carb intake is non-negotiable if you want to reach ketosis. What About Protein? Many mistakenly believe that the keto diet is a high-protein diet just like Paleo or Atkins. In reality, keto is a high-fat, moderate-protein, and very low-carb diet. Moderate is defined as 25% of your daily calories consisting of protein foods. That's around 125 grams of protein for a 2,000-calorie diet. The reason you also need to limit protein on keto is that your body is able to turn protein into glucose when necessary. This is done via a metabolic pathway called gluconeogenesis 2. Gluconeogenesis is meant to keep your blood glucose levels up, but you don't want that on keto. And yet, you also don't want to eat too little protein. Protein is essential for muscle growth, tissue repair, and the production of hormones and enzymes 3. As long as you are eating the recommended amounts of protein every day and sticking to these limits, you'll be in ketosis and stay healthy.
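The 125-grams arithmetic above generalizes to all three macros: multiply your calorie target by each macro's share, then divide by the calories per gram (9 for fat, 4 for protein and carbs). Here's a small sketch of that calculation; the 70/25/5 split comes from the pyramid above, while the function name is my own.

```python
# Calories per gram of each macronutrient.
CAL_PER_GRAM = {"fat": 9, "protein": 4, "carbs": 4}

# The keto split described above: 70% fat, 25% protein, 5% carbs.
KETO_SPLIT = {"fat": 0.70, "protein": 0.25, "carbs": 0.05}

def keto_macros_grams(daily_calories):
    """Convert a daily calorie target into grams of each macro."""
    return {macro: round(daily_calories * share / CAL_PER_GRAM[macro])
            for macro, share in KETO_SPLIT.items()}

macros = keto_macros_grams(2000)
# A 2,000-calorie day works out to 156 g fat, 125 g protein, 25 g carbs.
```

Note the 25 g of carbs from the strict 5% split is even lower than the 30 g daily ceiling mentioned above; either way, carbs stay minimal.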
And as for protein sources, the keto food pyramid shows that it's best to opt for foods that contain high levels of both fats and protein, like fatty fish, fatty cuts of meat, pork, nuts, seeds, and full-fat dairy. This way, you'll make sure you are eating enough fat while taking in a moderate amount of protein. Why Food Quality Matters Looking at the keto food pyramid, you likely noticed that the foods listed are real, wholesome foods. The keto diet truly is based on eating wholesome, minimally processed ingredients. There are two reasons for this: 1. Easier to track your macros In order to keep track of your macros, you need to know what's in your food. That's much harder when you're eating pre-packaged, ready-made meals. 2. Essential for health The ultimate goal of keto is better health. A sure way to improve health is eating wholesome foods, as these contain healthful nutrients and are free of additives, preservatives, hidden carbs, pesticides, and other harmful chemicals. Generally speaking, you should ideally go for the organic, grass-fed, and pasture-raised ingredients you will find on the keto food pyramid. If that's not possible, simply go for the least processed ingredients you can find. Good examples of quality foods to fill your pantry with include: Studies show that milk from grass-fed cows contains more omega-3 fatty acids and antioxidants than milk from cows fed hay or grains 4, 5. Studies also show that our diets are deficient in these essential fatty acids 6. Leafy greens are good to eat on every diet, but especially on keto. These are low in carbs, high in fiber, and rich in vitamins, minerals, and flavonoids. Studies show that a diet rich in leafy, green vegetables protects the brain against age-related cognitive decline 9. Olive oil, coconut oil, and macadamia nut oil are all great keto options. These are at the base of the keto food pyramid because they're 100% fat. Adding them to every meal helps you meet your daily requirements for fat.
And besides that, you also boost your health, especially with MUFA-rich olive and macadamia nut oils. Pasture-raised & free-range Animals raised this way experience lower levels of stress, see more sunlight, and are generally healthier than conventionally farmed animals. As a result, the meat, eggs, and dairy from these animals are healthier, but also tastier. Since you'll be eating more of these foods on keto, it's best to pick the healthiest kind. The keto food pyramid often does not list keto staples like coconut and almond flour, almond milk, peanut butter, lard, stevia, MCT oil, or collagen peptides anywhere. That's because these ingredients are not necessary for the keto diet to work or for your health. They are, however, helpful, and you can see them as great additions to your basic keto plan. The keto food pyramid is almost like your standard food pyramid turned upside down. At its base, you'll find high-fat foods like olive oil, coconut, avocados, and bacon. At its tip, you'll see berries and other low-carb fruit. And in between, the keto food pyramid lists protein-rich foods and low-carb vegetables. This arrangement is entirely based on keto macros, which are at the center of the ketogenic diet. On keto, it's all about the macros, so you need to make sure you're eating the right kind of food to stay within your limits. The keto food pyramid helps you with that by giving you a visual representation of the ketogenic diet. Keep in mind, though, that the keto food pyramid isn't perfect. The keto diet is too complex to be explained in one triangular diagram. That's why you may want to take note of keto supplements and low-carb alternatives on your keto journey.
Wide-ranging survey considers Spain’s conquest of much of the Americas in the light of conditions and developments back home. Kamen (The Spanish Inquisition, 1998, etc.) amplifies here his previously stated view that the Spanish military adventure in the New World was Spanish in name only; it relied on legions of foreign mercenaries, Catholics displaced from Protestant lands in rebellion, and would-be Crusaders, all of whom served in far greater numbers than Spaniards themselves. It relied, too, on the cooperation of conquered peoples. The Spanish assumed control of local polities, Kamen notes, by “placing themselves at the top in the place previously occupied by the Aztecs and Incas” but otherwise leaving the pyramid of power largely intact. The process of conquest helped Spain forge itself as a nation; where formerly it had been a congeries of small kingdoms united only provisionally by the task of driving out the Moors, in the face of the common goal of subduing faraway lands “the Galician, the proud Asturian and the rude inhabitant of the Pyrenees,” in the words of a contemporary observer, joined with fighters from Castile, La Mancha, and Andalusia to create something new: Spain. This is a history of large forces moving sometimes of their own accord and by their own logic: the institutions, for example, that slowly replaced adventurers and conquistadors with bureaucrats, and the elaborate trade networks that developed to cart off and distribute all that New World loot to a waiting Europe. Kamen does a fine job of answering such thorny questions as: “Who gave the men, who supplied the credit, who arranged the transactions, who built the ships, who made the guns?” Well written and exactingly researched, of much appeal for professional historians and general readers with an interest in the world-systems view of things.
Fisher packs a lot—if not exactly everything, or perhaps not even some of the most important things—into this compendium of basic concepts for young children: letters, numbers up to 20, colors, shapes, opposites, seasons. The title indulges in a bit of hyperbole, perhaps as a lure to a certain kind of nervous but ambitious parent. Small toys, objects and plastic dolls are lined up, combined or used to create clever tableaus to photographically illustrate each concept. Mixing colors, for instance, employs plastic ducks in various shades to demonstrate the result of color combinations. The superb clarity and rich, saturated colors of these photos create page openings that are nearly startling in their brightness. While the people figures are nicely retro with their bland, naive faces, there’s little diversity demonstrated or implied. And the collection of concepts misses a bet in another important way: For all the charming silliness going on in many of these miniature scenes, others seem static. It’s funny to see tiny figures in aprons and hair buns cleaning up an enormous ladybug, but literal-minded young readers will search the image in vain to find any of those abstract essential concepts (being a friend, taking care of the earth, asking for help) one ought to know before age five. Cheerful, if not exactly essential, fun. (Picture book. 2-6)
There are more than 11 million people currently living with a disability, impairment or limiting long-term illness in the UK, but what support is available? Living with a disability can be an incredibly long journey, physically and mentally, both for the individual and the people who live with and/or care for them. But there is help, and many resources are available – you just need to know what to look for. The first step is knowing when an extra hand may be needed. If you are unable to work, there are many benefits available, as well as people to help you claim for them. The NHS also provides a range of support services to help you and your loved ones manage the disability. If you are a student at school, college or university, your student services team should be able to offer help and support throughout your studies. If you care for someone with a disability, there is also support available for you. Particularly in severe cases, it is important for you to rest and take care of yourself too. Find out more about carer support on our dedicated fact-sheet. Disability counselling can be helpful for many reasons: sometimes you just need to talk, while other times you may need professional guidance. If you're suddenly classed as disabled due to an accident or a serious health condition, such as cancer, counselling may help. Speaking to a counsellor can help you understand the changes going on in your life, as well as simply providing some of the support you need. They can also help you know your options. Living with a disability can often lead to financial hardship and low social support. These experiences can then lead to mental health problems, such as anxiety and depression. Counselling can help you address these issues, as well as teaching you coping techniques to help you adapt to the changes living with a disability can bring.
Finding a counsellor If you have a disability or care for an individual living with a disability and would like to speak to a professional, you can find a counsellor near you using our advanced search tool. We understand that reaching out and seeking help can be a daunting process, but we’ve tried to make it as easy as possible. You can learn more about disability and carer support on our fact-sheets and research our counsellors until you find someone you resonate with. We encourage our members to fill their profile with as much information as possible, so you can understand how they work and if they are right for you. Remember you are not alone and support is available, you just need to take the first step.
Near Eastern Archaeology: A Reader
Annotation: Filling a gap in classroom texts, more than 60 essays by major scholars in the field have been gathered to create the most up-to-date and complete book available on Levantine and Near Eastern archaeology. The book is divided into two sections: "Theory, Method, and Context," and "Cultural Phases and Topics," which together provide both methodological and areal coverage of the subject. The text is complemented by many line drawings and photographs. Includes a foreword by W.G. Dever.
At the doctor’s office, hospital, or clinic, patients rarely consider the medical equipment around them. Medical equipment is an integral part of diagnosis, monitoring, and therapy. Even the simplest physical exam can often require a variety of high-tech medical equipment. In 15th-century Europe, during and after the horrors of the bubonic plague, autopsies began to be performed at universities, and a primitive form of ‘scientific method’ began to take hold in the minds of the educated. Practical surgery and anatomy studies began. These curious medieval Europeans laid the foundation for modern science. They also laid the foundation for the well-known process of identifying a problem, creating a hypothesis, testing the hypothesis by observing and experimenting, interpreting the data and drawing a conclusion. Medical equipment prior to and even during the scientific revolution was based on classical Greek and Roman theories about science, which were not based on science at all, but on philosophy and superstition. Human health was viewed as a balance of 4 internal ‘humors’ in the body. The 4 humors – blood, yellow bile, black bile, and phlegm – were analogous, to the classical thinker, to the 4 elements of the universe: fire, air, water, and earth. Ailments, both physical and mental, were caused by an imbalance of humors. The ideal mind and body balanced all 4 humors gracefully. To heal, doctors prescribed foods or procedures which would balance the fluids in the body. Some of the prescriptions seem to make sense – fevers were treated with cold, dry temperatures to combat the hot, wet overstimulation in the body. But when that failed, often the next step was bloodletting. Unnecessary purging and enemas were also common cures, which might have helped some people, but also might have caused more problems than they solved.
George Washington’s death has recently been attributed, not to the strep throat he probably had as he died, but to the bloodletting and mercury enema given to him to cure it. Not-quite-scientific medical cures are still available and used by many, even today. Since the 15th century, Western science has focused on examining and observing the body, and has created tools to make this easier. X-ray imaging and today’s MRI devices are merely extensions of the first autopsies and anatomical studies, which strove to understand how the human body actually operates. Diagnostic instruments like ophthalmoscopes, blood pressure monitors, and stethoscopes are likewise extensions of the medieval examination. Exam tables, gloves, and other medical accessories are simply the newest versions of tools that have been used for centuries. Medical technology and medical knowledge feed off each other. Take, for instance, hypertension. Although devices for measuring blood pressure have existed for over 100 years, only in the last 20 years have the connections of blood pressure to disease, genetics, and lifestyle been fully explored. As the importance of measuring blood pressure increased, new technologies were explored to keep accurate measurements and records. It wasn’t until the prevalence of automatic blood pressure monitors that a correlation could be made between readings taken by a human and readings taken in a controlled, isolated environment. Medical equipment and medical knowledge thus form a constantly twisting Gordian Knot, one side tightening as the other loosens, back and forth. What does the future hold for this push and pull of technology and scientific inquiry? Recent developments in nanotechnology and genetics, along with more and more powerful supercomputers, might create a situation where what it means to be human actually changes, due to technology. For example, scientists have actually created simple life forms out of previously non-living DNA material.
While it doesn’t seem that dramatic at first glance, it’s an important development. Medical equipment acts as an extension for investigating the hows and whys of the human body, and as science catches up with and surpasses these investigations, completely new kinds of medical diagnosis, monitoring and therapy may result. Imagine the ability to grow new organs inside the body. Limb re-growth is possible in other organisms, so why not in humans? And if it is possible, would the developments be truly ‘human’? The future is unknowable; the only aspect of it we can understand is that it will look nothing like we could have previously imagined. In retrospect, we’ll see the signs, like we always do, but this is hindsight, not foresight. Presently, technology marches forward and it continues, as a process, to change human life.
42 CFR 460.114 - Restraints. (a) The PACE organization must limit use of restraints to the least restrictive and most effective method available. The term restraint includes either a physical restraint or a chemical restraint. (1) A physical restraint is any manual method or physical or mechanical device, materials, or equipment attached or adjacent to the participant's body that he or she cannot easily remove that restricts freedom of movement or normal access to one's body. (2) A chemical restraint is a medication used to control behavior or to restrict the participant's freedom of movement and is not a standard treatment for the participant's medical or psychiatric condition. (b) If the interdisciplinary team determines that a restraint is needed to ensure the participant's physical safety or the safety of others, the use must meet the following conditions: (1) Be imposed for a defined, limited period of time, based upon the assessed needs of the participant. (3) Be imposed only when other less restrictive measures have been found to be ineffective to protect the participant or others from harm. Title 42 published on 2014-10-01. No entries appear in the Federal Register after this date, for 42 CFR Part 460.
As the common name suggests, this fungus is most likely to be found under or near Cedar trees, although it can sometimes appear near Yew trees, suggesting that Cedar trees might formerly have been nearby. It is to be seen from late Winter to late Spring. It develops as an underground sphere and then slowly becomes visible as it pushes through the soil. Below is an image of a young sphere just becoming visible as it emerges from the soil. This fungus can easily be overlooked as it tends to blend in with the soil. It is also a challenge to photograph. Cedar Cup is uncommon. It has patchy distribution, tending to be found in the south of the UK, in fact south of a line from the Severn to the Humber. Below is a sequence of images showing the Cedar Cup in varying stages of development and showing its characteristics.
[Image: young cup starting to open up]
[Image: cup starting to split into eventual rays]
[Image: interior and hairy texture]
Characteristics: Cup up to 7-8 cm across. At first a sphere lying just below the soil. It breaks through in small groups, sometimes very close together and even overlapping. At maturity it splits into several rays. The exterior is light to medium brown and is covered in dark hairs. The interior is smooth and pale buff or cream. Not edible and is uncommon, with patchy distribution mostly in the south of the UK. To be found with Cedars. With grateful thanks to JP for allowing me to photograph this fungus in her garden and also to Howard Williams for undertaking the spore print analysis, adding the details to the CATE National Database, and sending me the three images below showing details of the spores.
[Image: the splitting into rays at maturity]
[Image: 8-spored uniseriate asci with smooth spores]
[Image: coarse septate surface hairs]
In this case study, students are asked to consider whether there is evidence to adequately support a series of scientific claims made in an advertisement for pheromones. The case teaches students about the scientific method and the process of science. Designed for use in advanced, average, and below average high school (grades 9-12) biology classes, it could also be used in AP Biology or in an introductory college biology course.
The Morris Jastrow Near Eastern Studies Collection (5 Vols.) is a journey into the fascinating era of Babylon and Assyria, guided by one of the 20th century’s preeminent scholars of that period. A professor of Semitic languages, Jastrow offers, through his astute research, extraordinary insight into the ancient Euphrates Valley and its people. Jastrow provides a comprehensive view of the culture and religion that existed in Mesopotamia, including an examination of their languages, deities, myths and legends, and worship practices, as well as their advances in astrology, astronomy, philosophy, and mathematics. With the Assyro-Babylonian epoch occupying a prominent place both in the historical and in the prophetical literature of the Old Testament, Jastrow’s groundbreaking work is indispensable for gaining a complete understanding of the Bible. Also included in this collection is Morris Jastrow’s controversial book Zionism and the Future of Palestine, a treatise on political Zionism. Written just after the First World War, this volume is historically important for its unique perspective and war-weary reflection. With Logos Bible Software, the Morris Jastrow Near Eastern Studies Collection (5 Vols.) is now completely searchable, with passages of Scripture appearing on mouse-over, as well as being linked to Greek and Hebrew texts and English translations in your library.
- Detailed maps and illustrations
- Discussion of Scriptural landmarks
- Thorough appendices
Praise for the Print Edition
. . . the most industrious of the writers on Babylonian subjects, and his long researches into the various phases of Babylonian religion have made him the foremost authority upon this subject.
—The American Journal of Theology
- Title: Morris Jastrow Near Eastern Studies Collection (5 vols.)
- Author: Morris Jastrow
- Volumes: 5
- Pages: 2,236
About Morris Jastrow
Morris Jastrow (1861–1921) graduated from the University of Pennsylvania, where he became a professor of Semitic languages and worked in the school’s library. He served as an editor for the Jewish Publication Society’s Jewish Encyclopedia from 1901 to 1906. A prolific researcher and writer, Jastrow published over a dozen books and became president of the American Oriental Society in 1915.
Update: According to New Brunswick Today’s Richard Rabinowitz, unmanned aerial vehicles have been in New Brunswick since at least 2009. NEW BRUNSWICK, NJ—Rutgers University will be testing unmanned aerial vehicles, commonly referred to as drones, for use by U.S. government agencies. The Federal Aviation Administration (FAA) announced plans on Monday to test unmanned aircraft at several colleges including Rutgers, Virginia Tech, and the University of Maryland. “These test sites will give us valuable information about how best to ensure the safe introduction of this advanced technology into our nation's skies," said Anthony Foxx, the head of the U.S. Department of Transportation. The FAA explained in a press release that officials “considered geography, climate, location of ground infrastructure, research needs, airspace use, safety, aviation experience and risk.” “Each test site operator will manage the test site in a way that will give access to parties interested in using the site,” reads the press release. “The FAA’s role is to ensure each operator sets up a safe testing environment and to provide oversight that guarantees each site operates under strict safety standards.” Earlier this year, Congress passed a bill urging the FAA to open the skies by September 2015 for unmanned drone use nationwide. Parties interested in flying drones would include law enforcement, government agencies, for-profit businesses like farming or photography, "hobbyists," and fire departments. "Today, UAS perform border and port surveillance, help with scientific research and environmental monitoring, support public safety by law enforcement agencies, help state universities conduct research, and support various other missions for government entities."
The FAA’s Destination 2025 “is a vision that captures the future we will strive to achieve – to transform the Nation’s aviation system by 2025.” “Manned and unmanned flights will each achieve safe flight, as will commercial launches to space.” The full New Jersey Assembly is scheduled to vote on a bill (A4073/S2702) proposing restrictions on what types of unmanned aircraft can fly or hover over NJ. The bill passed the Senate by a vote of 36-0 last year, and also recently passed the Assembly Homeland Security and State Preparedness Committee. The proposed law “prohibits drones from being equipped with an ‘antipersonnel device’…[such as] a firearm or any prohibited weapon or device or any other projectile designed to harm, incapacitate, or otherwise negatively impact a human being.” “Information or records of a verbal or video communication derived from the use of an unmanned aerial vehicle shall be strictly safeguarded and shall not be made available or disclosed to the public or any third party.” Assemblyman Daniel Benson (D-14) says that the bill ensures a “basic framework that protects privacy… It’s important that we have this ahead of expected use in the future.” At NewBrunswickToday.com, all of our articles are provided free of charge to improve the level of discourse in our community. If you want to support this type of journalism, please consider making an online donation.
The Portugal flag was officially adopted on June 30, 1911. Green is representative of King Henry the Navigator, a famed Portuguese explorer. The centered shield is representative of ocean exploration and the expansion of Portugal's influence during the reign of King Afonso Henriques. Red recalls the internal revolution of the early 1800s. Portugal Coat of Arms: The result of hundreds of years of modifications, Portugal's current coat of arms was officially adopted on June 30, 1911, and is based on the arms used by the kingdom since the Middle Ages. The shield resting in front bears seven golden castles, which represent the Moorish castles conquered during the Reconquista. Behind the shield is an armillary sphere, which was a navigational instrument, and symbolizes Portugal's importance during the Age of Discovery.
Flag of Portuguese Prime Minister
Portuguese President of the Republic flag
Portuguese Assembly of the Republic flag
Mug Shots: Bots Scour Google Maps to Find Faces in the Land - 6:29 am | We humans tend to see faces where they don’t actually exist. Clouds, the moon, grilled cheese; it’s all a canvas for our imaginations. The psychological tendency to see meaningful images in vague visuals actually has a name—pareidolia—and it’s the basis for a mesmerizing new project. Berlin-based design studio Onformative created Google Faces, an algorithm-based system that searches Google Maps’ satellite images for landscapes that resemble the human face. The design team, made up of Cedric Kiefer and Julia Laub, stumbled on the idea after previous facial recognition projects kept generating false positives (detecting facial images where there are none). “We asked ourselves, could a machine using an algorithm find the same faces in nature that a human would recognize?” Kiefer says. “We wanted to explore if this psychological phenomenon could be replicated in a machine.” To find out, the team created a two-part system consisting of one computer running Google Maps and the other running a bot programmed with a facial recognition algorithm that simulates pareidolia. Functioning like a human Google Maps user, the face-tracking bot autonomously clicks its way around the world, stopping to gather data whenever it comes across a landscape that resembles a face. Kiefer notes the computer most often tags locations when it spots dark images in a light environment; for example, a forest with trees casting shadows. “If you have two or three dark spots, it will always see that as two eyes and the shadow underneath your nose or mouth,” he explains. “That’s often enough for the algorithm to recognize a face.” The human facial recognition system is a little more discerning and complex.
We’re able to recognize profile views, the outline of hair and the contour of chins in simple landscapes, but an image with too much noise (cities, dense forests and topographically complex landscapes) often doesn’t register with us. The bot has already latitudinally circled the world a few times, but the goal is for it to traverse the entire planet at every Google Map zoom level (there are 17) in order to get the most comprehensive data set. And at a speed of one snapshot analyzed per second, a round-the-world trip can be quite a trek depending on how zoomed in the bot is. Kiefer estimates they’ve only covered 5 percent of the world, which means there are a lot more faces to come. “We have a long way to go,” he says. “There are probably a lot of faces out there that we just haven’t found yet.” All images: Courtesy of Google Maps
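The "two or three dark spots" heuristic Kiefer describes can be illustrated with a toy detector. The following is an illustrative sketch only, not Onformative's actual (unpublished) algorithm: it scans a small grayscale grid for two dark spots sitting side by side ("eyes") with a third dark spot below and between them ("mouth"), against a lighter background.

```python
# Toy pareidolia detector: flags a grid as "face-like" when it finds
# two dark cells side by side ("eyes") and a dark cell below and
# between them ("mouth"). Illustrative sketch, not Google Faces itself.

def is_dark(grid, r, c, threshold=80):
    """A cell counts as dark if it exists and its value is below threshold."""
    return 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] < threshold

def face_like(grid):
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            for dx in range(2, cols):  # candidate eye separations
                if not (is_dark(grid, r, c) and is_dark(grid, r, c + dx)):
                    continue
                mouth_c = c + dx // 2
                # look for a "mouth" one to three rows below, between the eyes
                if any(is_dark(grid, r + dy, mouth_c) for dy in range(1, 4)):
                    return True
    return False

# 0 = black, 255 = white; two "eyes" and a "mouth" in a light field
landscape = [
    [255, 255, 255, 255, 255],
    [255,  30, 255,  30, 255],   # eyes
    [255, 255, 255, 255, 255],
    [255, 255,  40, 255, 255],   # mouth
    [255, 255, 255, 255, 255],
]
print(face_like(landscape))  # True
```

As Kiefer notes, a detector this permissive fires constantly on shadowed terrain, which is exactly the false-positive behaviour the project turns into a feature.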
Whether you're a righty or a lefty, parrots may be able to tell us why we've come to prefer one hand over the other. In a series of experiments, researchers watched as 322 parrots from 16 different species attempted to grab an object—a toy or a piece of food—with their feet. Since the birds' eyes are on the sides of their heads, they can't look straight ahead like we do; instead, they have to cock their heads to one side. But like us, individual parrots show a strong preference for one limb or the other: The researchers noticed that a "left-handed" bird would cock its head to the right to give its left eye a better view. It would then grab the object with its left foot, probably because this was the easier foot for the left eye to track. If early animals had eyes on the sides of their heads like birds do, their need to use either one eye or another to grasp objects may have led to the evolution of handedness, the researchers suggest online today in Biology Letters. See more ScienceShots.
This week, researchers from the National Centre for Atmospheric Science (NCAS) at the University of Reading published a paper suggesting that summer seasonal weather forecasting in the UK could become more accurate thanks to new research. This result is the latest in a long history of work on the links between Atlantic Ocean sea-surface temperatures and the jet stream, dating back to early work by the American researcher Jerome Namias in the 1960s and Met Office research by Ratcliffe and Murray in the 1970s. The research also extends earlier results on summer predictability from the Atlantic Ocean state (Colman and Davey, 1999). Commenting on this new research, Professor Adam Scaife (Head of Monthly to Decadal Prediction at the Met Office) said: “Statistical empirical forecasts, like this, are an important tool in our goal of improved weather forecasting. Our computer models need to reproduce these important relationships so that they can integrate them with everything else going on in the climate system to give the best weather and climate predictions. This avoids over-reliance on a single effect and gives physically-based predictions in situations that we have not encountered in the historical record.” This new research will help the Met Office and its university collaborators define an important area of focus for testing computer models used for prediction, and we have already been examining its role in our long-range predictions. Professor Scaife concluded: “We’ve made great progress in long-range forecasting for winter, and this result highlights an exciting way forward to break into the long-range forecast problem for summer.” Citation: Osso, A., Sutton, R., Shaffrey, L., and Dong, B. Observational evidence of European summer weather patterns predictable from spring. Proceedings of the National Academy of Sciences (2017).
Nikola Tesla was the genius who invented the electronic 20th Century in the late 1800s. His greatest invention was Alternating Current (AC), the electricity we use today. Some of his other inventions include Hydro Power Generators, Radio, Neon and Fluorescent lights, Radar, the Tesla Coil (still used in cars today), Remote Control, and the Microwave. By 1886 Tesla realised the AC electricity that he had invented would prove to be detrimental to human health. Before his death he wrote papers detailing how to counteract the harmful effects of his inventions, then passed all his paperwork on to Ralph Bergstressor, a young physicist. Today, we have Tesla’s products that have been developed based on his work to free us from harmful electromagnetic fields (EMF) and electromagnetic radiation (EMR). Whenever an appliance is turned on, both an electric and a magnetic field emanate, together called electromagnetic fields (EMF). Electric fields stop just below the surface of our skin, but magnetic fields pass through the human body. Both types are detrimental to our health. Common sources of high EMFs in buildings are transformers, switchboards, all electrical cables, motors and appliances. Common sources of EMR are our mobile phones, including all their variations, computers and all wireless technologies. All of these technologies bombard our bodies with harmful fields. The earth itself has many geo-magnetic fields carried along what are known as Ley lines and many subterranean rivers or streams. We can be affected by these negative earth energies, causing illnesses. This is called Geopathic Stress, and it can come from ley lines, underground water or above-ground high-tension wires or phone towers. The most common indication of Geopathic stress is resistance to treatment from either conventional or alternative therapies.
Geopathic stress does not cause illness, but lowers your immune system and your ability to fight off viruses or bacteria. It is the continuous weakening of the immune system and disruption of metabolism on a molecular level that is detrimental to our DNA. Common symptoms of these effects include unexplained fatigue, aches/pain, emotional oversensitivity, hyperactivity, aggression, miscarriages, allergies, some arthritic conditions, and cancer or secondary cancers. The human body is almost wholly electrical, chemical and mechanical. Tesla’s products work on harmonising the electrical components of the body, thereby affecting the chemical and mechanical components of our body. The atmosphere also receives electromagnetic energies from the sun, planets or other heavenly bodies in the universe, some harmful while others healing. Light (called photon/tachyon energy) is an example of electromagnetic radiation that can be seen. Other examples are radio waves, microwaves, infrared, ultraviolet, X-rays or gamma rays. Tesla’s products are designed, like the pyramids of Egypt, to act like an antenna or transceiver of Light (photon/tachyon) directly from our Central sun (Alcyone). Tesla’s products are made of titanium as a carrier metal because it is a pure element, its crystalline structure is hexagonal, it is non-allergenic and its molecular structure is similar to our bone structure. For purchase of any Tesla's Products, please Contact me for details. 1. Personal Pendants The pendant is worn over the thymus gland to help the immune system; it strengthens your personal energy field, helps to stop the harmful effects of EMF, aids in lessening sunburn, and activates the lower brain frequencies for receptiveness and concentration. 2. Phone Tags Phone tags alter the chaotic pattern from EMF/EMRs in the transmission to and from the antenna of the phone to become biologically harmonious to the user and anyone nearby, up to 9 metres away.
A MUST for all mobile phones and variations such as iPhones, iPads, notebooks etc. 3. Computer Plates Computer plates help relieve eye strain and alter stress coming from any desktops, laptops, notebooks or wireless game machines. Another MUST for all computer/laptop users, especially wireless. 4. House Plates The House Plate produces a bubble of energy stretching up to 1.5 acres. It helps with geopathic stresses from above the ground. It can also be used with reflexology treatments, to treat drinking water, and it creates a calming influence as well as taking care of EMRs. 5. Electron Stabiliser The Electron Stabiliser produces a field of energy to alter the Alternating Current’s (AC) chaotic frequencies so that electrons flow in a coherent manner, thereby becoming harmonious to the human body. All man-made electrical EMF radiation is chaotic. The Stabiliser has unique titanium transceivers installed in a specific configuration inside a tamperproof container designed to receive and transmit photon/tachyon energy from the sun in order to do this. This is installed inside the house, preferably attached to any electrical apparatus that does not switch off, such as a fridge, TV or sound system. Placing a Stabiliser inside will have effects on all AC current used by any other electrical equipment. Water containing minerals or salts is a perfect conductor of energy fields or electricity. The minerals or salts in underground water and in household pipes can attract electricity from power line leakage into the atmosphere. There are also other additives in our water, such as chlorine, fluoride or iron oxide from corroded water pipes. Chlorine is positively ionised (molecules adhere to each other, causing it to be partially inactive), giving a strong odour. The Water Kit changes the ionisation and separation of these molecules from positive to a negative ion, making it easier for the liver to eliminate them from the body.
The water kit also removes memory of all other added chemicals, which does not happen with conventional filters. It puts life force back into the water, making it taste and feel like rain water or from a clear mountain stream. It also treats both magnetic and electrical fields in the water.
As already mentioned, in Burma the Zomi are known as Chin. It has since become a matter of great controversy how this terminology originated. In this respect many scholars have advanced different theories. B. S. Carey and H. N. Tuck asserted it to be a Burmese corruption of the Chin word “Jin” or “Jen”, which means man. Prof. F. K. Lehman was of the view that the term might be from the Burmese word “Khyan”, which means ‘basket’, saying, “The term ‘Chin’ is imprecise. It is a Burmese word (khyan), not a Chin word. It is homologous with the contemporary Burmese word meaning basket”. Implied thus is that the basket-carrying inhabitants of the Chin Hills bordering the plain Burmans are Chin. But according to Prof. G. H. Luce, an eminent scholar of early Burmese history, the term “Chin” (khyan in old Burmese) was derived from the Burmese word meaning “ally” or “comrade”, describing the peaceful relationship which existed between the Chins and the Pagan Burmans in their historical past. His interpretation was based on the thirteenth-century Pagan inscriptions. However, the same inscriptions also revealed the controversial slave trade along the Chindwin River. In the year 1950 the Burmese Encyclopaedia defined Chin as “ally”. This official publication was challenged by Pu Tanuang, an M.P. from Mindat (Chin State), in the Burmese Parliament. He criticized the Government for politicizing the name. The Reverend S. T. Hau Go, a former lecturer of Mandalay University, writes, “Whatever it meant or means, however it originated and why, the obvious fact is that the appellation “Chin” is altogether foreign to us. We respond to it out of necessity. But we never appropriate it and never accept it and never use it to refer to ourselves. It is not only foreign but derogatory, for it has become more or less synonymous with being uncivilized, uncultured, backward, even foolish and silly.
And when we consider such name-calling applied to our people as “Chinbok” (stinking Chin), we cannot but interpret it as a direct and flagrant insult and the fact that we have some rotten friends”. Whatever the case may be, from the above evidence it can be concluded that the word was coined by the Burmese and adopted by the British officials. Investigation and research, however, prove that no such word as “Chin” exists in the vocabulary of the Zomi. The people themselves do not use it in their folksongs, poetry or language. Even today the name remains strange to the illiterate people of the countryside in the very region called the Chin Hills in Burma.
Experimental Investigations of High Voltage Pulsed Pseudospark Discharge and Intense Electron Beams

A high-voltage pulsed discharge device known as a "pseudospark," capable of producing an intense high-energy electron beam, is presented in this work. The device can hold off tens of kV, carry kA-level current, and reach a current rise rate of 10¹⁰–10¹¹ A/s. The pseudospark device is also a simply constructed source of intense, high-energy electron beams. The experimental investigation presented here focuses on the discharge properties of the pseudospark and the characteristics of the plasma-produced electron beam for current and potential applications in aerospace problems. The discharge-property results show that the presented pseudospark device has hold-off voltages up to 26 kV and a discharge current of 2 kA with a current rise rate of 1 × 10¹¹ A/s. A comparative study of various discharge configurations shows that the device's ability to hold off voltage and generate high current in short pulses can be further improved through its geometric configuration, leading to higher pulsed-load drive capability. The intense electron beam obtained from the multi-gap pseudospark device has a current of up to 132.2 A, and the electron number varies from 4 × 10¹⁵ to 2 × 10¹⁶ over the presented operating voltage range, obtained from a charged-particle channel of tens of cm³. Energy analysis of this pseudospark-produced electron beam displays a "double-hump" non-Maxwellian energy distribution. The maximum energy peak value varies from 900 eV to 6.3 keV for discharge voltages of 4 kV to 12 kV. Specifically, comparison of the beam parameters obtained from the pseudospark device with the electron-beam requirements of an MHD channel indicates the pseudospark is a promising electron source. J. Hu and J. L.
Rovey, "Experimental Investigations of High Voltage Pulsed Pseudospark Discharge and Intense Electron Beams," Proceedings of the 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition (Jan. 9-12, 2012, Nashville, TN), American Institute of Aeronautics and Astronautics (AIAA), Jan 2012. The definitive version is available at https://doi.org/10.2514/6.2012-789. Mechanical and Aerospace Engineering. Article - Conference proceedings. © 2012 American Institute of Aeronautics and Astronautics (AIAA), All rights reserved.
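As a rough plausibility check on the beam figures above, the quoted electron number follows from the beam current and pulse length via N = I·Δt/e. This is only a sketch: the flat-topped pulse shape and the ~25 µs duration are illustrative assumptions, not figures from the paper.

```python
# Sketch: electrons per pulse from beam current, N = I * t / e.
# Assumptions (not from the paper): flat-topped pulse, ~25 us duration.
E_CHARGE = 1.602e-19  # elementary charge, C

def electrons_per_pulse(current_a, pulse_s):
    """Total delivered charge I*t divided by the elementary charge."""
    return current_a * pulse_s / E_CHARGE

# Quoted peak beam current of 132.2 A with an assumed 25 us pulse:
print(f"{electrons_per_pulse(132.2, 25e-6):.2e}")  # ~2e16 electrons
```

A pulse in the tens of microseconds thus lands in the 10¹⁵–10¹⁶ electron range the abstract reports; shorter pulses scale the count down linearly.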
By Saj Sri-Kumar

As you walk down the hallway of Chalmers on your way to your math class, you might hear a student cough many times in succession. At first it appears to be just another student with a cold, but when he coughs so much that he struggles to fill his lungs with air and takes deep inhalations, it becomes clear that it is something more serious: pertussis, better known as whooping cough. Pertussis, a highly contagious disease that is occasionally fatal, has infected seven times more people in California since June 1 than in the same period last year, resulting in the most cases in 52 years. As a result, the California Department of Public Health has declared a statewide epidemic. While often associated with infants, teenagers and adults are at greater risk than before for two main reasons. First, the vaccinations that many people received as young children have worn off, and the protection they afforded has weakened, pediatrician Cara Natterson '88 said. Second, many adults and teenagers go to work or school despite being sick to avoid having to make up work. Natterson said that this causes the disease to spread to peers, and she suggested that anyone who feels sick should see a doctor. "When people cough into their hand and then touch a doorknob or a desk, they leave bacteria on the surface; the next person to come along and touch the same surface can easily pick it up. We all touch our eyes, noses, and mouths throughout the day, often unknowingly. When we do this, we expose our bodies to bacteria and viruses that we have come into contact with in the environment," Natterson said. "This is why hand washing is so important. If you eat a sandwich without washing your hands, you are ingesting all of the things you have touched in your community. If you wash with soap and water, then the germs from your surroundings don't get into your body," she said.
Additionally, not every student has necessarily been vaccinated against the disease, even as a child. Harvard-Westlake's Director of Sports Medicine Sandee Teruya said that while the school strongly recommends students receive the vaccination, it is not required. The school has announced that, as a result of the epidemic, it will be offering the Tdap vaccine to all faculty at the annual Health and Wellness fairs as part of the school's health insurance for faculty. That vaccine includes protection against tetanus and diphtheria in addition to pertussis. There are three distinct phases of symptoms once one is infected. The first is similar to the common cold in many respects and consists of the same symptoms. This stage is also when the disease is most contagious. The second phase, also known as the "paroxysmal" phase, is characterized by periodic spurts of coughing, often followed by a deep inhalation. Natterson said that the deep inhalation often sounds like a "whoop," giving the disease its popular name. The final phase is a gradual recovery, during which the cough slowly subsides. If one is diagnosed with pertussis, the disease can be treated with antibiotics. However, a recurring problem is that many people do not think they have pertussis and spread it to others. Additionally, many doctors do not think to test for it initially, Natterson said. "My advice is to have the doctor check you by listening to your lungs and doing a quick physical exam—this is always better than running to a lab and just getting a lab test. When you feel sick, and certainly when you have significant cough, an exam by a doctor is always a good idea," Natterson said.
GSM is the technology your phone most likely uses to connect to your cellular service provider's network. In fact, as of 2018, 90 percent of all mobile phones in nearly 200 countries were based on this protocol. Created in 1982 and launched in Finland in 1991, GSM overtook the once-popular CDMA (Code Division Multiple Access) to become the de facto standard for mobile communication. It is considered a 2G (second-generation) protocol, replacing first-generation analog technology. GSM is an acronym that has come to stand for Global System for Mobile Communications, though it was first named for the group that created it: Groupe Spécial Mobile.

How GSM Works

Physically, a GSM network consists mainly of connected devices such as gateways, repeaters, and relays (commonly called antennas): the ubiquitous metal structures that stand as tall towers. A GSM network is a cellular network; that is, it links cells, the small areas covered by individual towers. Cell phones connect to the nearest cell, which in turn connects to others; these connections provide the communication and location services we live with today. The cellular network also underpins 3G, 4G, and emerging 5G technologies, all of which carry data and provide internet connectivity.

The SIM Card

Each cell phone is connected to and identified on a GSM network through a SIM (Subscriber Identity Module) card, a small chip inserted in the device. Each SIM card carries a unique identifier tied to the device's telephone number on the network. People call and text you using this number.

GSM and Voice over IP (VoIP)

GSM calls add a great deal to the average monthly mobile phone bill. Voice over IP (VoIP), however, can cut costs for many people. VoIP bypasses the cellular network and carries the voice call as data over the internet.
This makes VoIP calls free or very cheap compared with GSM calls, especially international calls. Some phones now let you set internet calling as the default method for voice calls, with regular GSM calling as the fallback, saving money for both the subscriber and the provider. In addition, apps such as Skype, WhatsApp, Viber, LINE, BB Messenger, WeChat, and many others now offer their users free calls worldwide, reducing the number of GSM calls being placed. VoIP has not been able to match GSM and traditional telephony on reliability and voice quality, however, so GSM still rules when it comes to cellular communications.
Five more states have recently joined the ranks of those requiring identification to vote at polling places: Arkansas, Iowa, Missouri, North Dakota, and Texas. Though federal law has always required first-time voters to show a photo ID, states have not always had the same requirement. In the past decade, however, states have increasingly passed legislation requiring some form of identification, whether a government-issued photo ID, a utility bill, or merely a signed affidavit. For the upcoming midterm elections, 34 states now require some form of ID. At first, one might think, "Of course you should have to show a photo ID to vote. We've all heard the voter fraud stories about dead people's votes being cast." But it really isn't that simple, and unfortunately, the debate goes down party lines.

Requiring Voter ID Is Helpful, but Will It Change Things?

Republicans have pushed for photo ID laws since the party took control of the House and Senate in 2015. They claim that voter fraud is rampant, and with elections being decided by so few votes these days, every vote should be accurate. The concept of slim-margin victories is one most people can agree on. However, there haven't been many voter fraud convictions, especially at polling sites. (Keep in mind, you don't have to show ID to drop your absentee ballot in the mail.) From 2002 to 2007, well before most voter ID laws went into effect, only 120 voter fraud cases were filed by the Justice Department. Many of these were found to be due to mistakenly completed registration forms or misunderstandings of voter eligibility. Of the 120 cases, there were 86 convictions. Though we can all agree that elections can turn on the slightest margin, 86 convictions over a five-year period is practically a rounding error.

How Many People Are Truly Being Disenfranchised?
Democrats claim that requiring an ID is akin to a poll tax, because it is harder for certain people to obtain one. In rural areas, some DMVs are open only once a month. IDs require birth certificates, which can be difficult and expensive to procure. The elderly may no longer have a valid driver's license, and it may be difficult for some to get to the DMV. Therefore, Democrats feel that voter ID requirements disenfranchise eligible voters. However, a 2012 Reuters study showed that fewer than 1 percent of the voters who could not vote because they lacked an ID would actually have voted had they had one. In other words, the groups least likely to get a photo ID are largely the same groups that tend not to vote anyway. Therefore, although the poll tax/disenfranchised-voter argument is legitimate, it too would hardly amount to more than a rounding error. Just as Tevye in Fiddler on the Roof said, "On the one hand ... But on the other hand ... But on the other hand ...," both parties put forth a sound yet inconsequential argument on voter ID. Perhaps the big winners are the states, which would make a lot of money issuing more ID cards. But the DMV doesn't really have excess capacity to serve a huge influx of customers, as noted by the bad publicity currently surrounding the California DMV. And, as noted, if someone really wants to avoid ID issues, they can just vote absentee.
Hi guys, I have seen on Wikipedia that power = force x velocity, and this is fine, but when they explain the same for a force moving in a circle they use power = force (x arm) x angular velocity (rad/s). This cannot be: the velocity cannot be represented by the angular velocity alone; it needs the radius as well. For example, distance = velocity x time = angular velocity x radius x time. So for a linear moving force the equation is power = force x velocity, and for a circular moving force the power would be = arm x force x angular velocity x radius, assuming the force is always perpendicular to the arm. If not, then power would be force x sin(angle between force and arm) x arm x angular velocity x radius.

Example, treating the force and velocity as a point on a circle or a line:
radius = 10 m
circle circumference = 62.83 m
81 degrees/sec = 1.4137 rad/s = 14.13675 m/s at the rim
powerL = force x vel = 100 N x 14.13675 m/s = 1413.675 W
powerC = force x angular velocity x radius = 100 N x 1.4137 rad/s x 10.0 m = 1413.7 W

So up to rounding error, powerC = powerL. But angular velocity != velocity, so even in rotational systems it cannot be used alone without the radius. This way power = torque x angular velocity is not true. To prove this, one could divide the circle into line segments where a value = force x arm x velocity x distance can be calculated for each segment, considering the torque, and these segments added together will be much different from the torque x angular speed (rad/s) x distance segments added together. Naturally there will be some difference based on the number of segments; the more segments, the closer it will be. Again, torque x velocity != torque x angular velocity (rad/s). The result is only the same if the other equation is power = force x arm x angular velocity x radius = torque x angular velocity x radius. Am I right? Thx.
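The worked numbers in the question can be checked in a few lines. Note that (force × arm) × angular velocity and force × (angular velocity × radius) are the same product grouped differently, so the radius enters exactly once in either form:

```python
import math

F = 100.0                 # tangential force, N
r = 10.0                  # radius (arm), m
omega = math.radians(81)  # 81 deg/s = 1.4137 rad/s

v = omega * r                       # rim speed, 14.137 m/s
power_linear = F * v                # force x velocity
power_rotational = (F * r) * omega  # torque x angular velocity

# Both groupings give the same number (~1413.7 W), since
# F * (omega * r) == (F * r) * omega.
print(power_linear, power_rotational)
```

The check reproduces the figures in the example above: both expressions evaluate to about 1413.7 W.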
Ruby Millhiser, 4, looks outside the window of her home in Green Bay on Wednesday, Oct. 24, 2012. The Millhisers had to replace their windows for fear of lead poisoning after they learned Ruby had high levels of lead in her system. An estimated 535,000 young children in the United States have harmful levels of lead in their bodies, putting them at risk of lost intelligence, attention disorders and other lifelong health problems, according to a new estimate released Thursday by federal health officials. The number shows lead poisoning affects 1 in 38 children ages 1 to 5, according to the report by the Centers for Disease Control and Prevention. "To the extent that Americans think this is a problem of the past, clearly this is evidence there is still a problem," said Rebecca Morley, executive director of the National ...
On 17 July 2010, Abba Seraphim, accompanied by Deacon Theodore de Quincey, attended an Ecumenical Service at Westminster Cathedral in celebration of the 400th Anniversary of the Martyrdom of St. John Roberts. John Roberts (1577-1610) was a Welsh gentleman, descended from the ancient British kings, who was educated at St. John's College, Oxford, and studied law in London. Although raised a Protestant, he converted to Catholicism and studied at the English College at Valladolid in Spain. He was professed as a Benedictine monk as Brother John of Merioneth. Ordained a Catholic priest, he became a frequent visitor to England, where he celebrated Mass and ministered to persecuted Catholics in London, especially during outbreaks of the Plague. He was arrested and imprisoned on several occasions and deported, but each time returned. For exercising his priesthood he was found guilty of high treason and executed at the age of 33 years. Commenting on the celebration, Abba Seraphim noted that as a Londoner he wanted to honour the humanitarian and pastoral ministry of the saint to Londoners, and that all those who are conscious of the problems of exercising Christian ministry in times of persecution would immediately value the saint's determination, as well as realising the extraordinary sacrifice he made to fulfil his priestly vocation. Leading this eirenic celebration were the Catholic Archbishop of Westminster (Mgr. Vincent Nichols), the Archbishop of Canterbury (Dr. Rowan Williams), the Anglican Archbishop of Wales (The Most Rev'd Barry Morgan) and the Catholic Bishop of Wrexham (Mgr. Edwin Regan), with many other Catholic and Anglican bishops from Wales.
Other Orthodox Churches were represented by His Grace Bishop Athanasios of Tropaeou (Oecumenical Patriarchate), Archbishop Elisey of Sourozh (Moscow Patriarchate), The Very Rev'd Archimandrite of the Oecumenical Throne Ephrem (Lash) and Archimandrite Deiniol, Administrator of the Wales Orthodox Mission (Ukrainian Orthodox Church within the Oecumenical Patriarchate). Large contingents from Wales were in enthusiastic attendance and the service was bilingual.
Insects are hard enough to identify because of their small size. But most insects are huge compared to mites. Larger mites may reach 1-2 mm in length; most are much smaller. The scabies mite, one of the only mite parasites to feed exclusively on humans, is among the smallest of mites (0.18-0.45 mm long) and visible only under magnification. Most mites are less than 1 mm long, though velvet mites (the largest of the mites) can reach lengths of 4 mm, as long as a termite worker.

Clover mites are distinguished by their long front legs. Photo by Rayanne Lehman.

For size reasons, I'll admit that I groan a little inwardly when I receive a mite specimen to identify. Mounting mites on glass slides is not one of my strong skills, and it takes extra time. So I was pleasantly surprised this week when a promised mite sample arrived and it was actually on the big side, at least for a mite. I was initially ready to identify my client's mite as a clover mite, one of the common springtime mite pests in Texas. The clover mite, Bryobia praetiosa, is a reddish-brown mite with a very long pair of front legs, about twice the length of the other legs. Though mostly harmless, the clover mite is an occasional nuisance pest indoors when it migrates from its normal feeding sites in grass and weeds outdoors through windows and under doors. But there was something about these mites that didn't quite fit the clover mite profile.

Balaustium (or concrete) mite. Photograph by Lyle J. Buss, University of Florida.

The one-millimeter-long, bright red mites were being found all over decorative stone and concrete in a Tyler, Texas backyard. The homeowner thought he was being bitten, perhaps by mutant chiggers from you-know-where. But although chigger mites are also bright red, they are tiny (0.3 mm) and barely visible to the naked eye. These were not chiggers. An online search led me to one of my favorite online resources, Bugguide.net. There I discovered a mite genus I had never heard of before.
Concrete mites, Balaustium species, were a dead ringer for the specimens I had under the microscope. So called because certain species in this genus are commonly found wandering on concrete sidewalks, foundations, walls, and stonework, Balaustium mites are apparently common throughout the U.S., but I had never heard of them before. The University of Florida Entomology Circular Series (also an excellent source of detailed information about urban insects, especially plant-feeding pests) had a 1995 publication by W. C. Welbourn on Balaustium mites in Florida. According to the circular, at least one species of Balaustium is "commonly found in urban areas where they appear in large numbers on sidewalks and walls for a brief period during the spring and early summer. It is during this time that they sometimes enter homes and buildings and become pests." One of the main, if not the only, food sources of these urban mites appears to be pollen. Balaustium mites have been found clustered in large numbers on the anthers of flowers. And based on the number of times I've had to wash my car and BBQ grill of oak pollen in the past month, I'm imagining that these mites have been having a good time of it lately. This would also explain why the mites can be found on nearly any outdoor surface, including sidewalks and roofs: pollen is everywhere. Some species of Balaustium are predators of other insects; others feed on plants. I was surprised, however, to read that there may be some association between our otherwise peaceful, pollen-feeding, house-invading concrete mites and bites on people. An entomologist named Irwin Newell reported in 1963 four cases of human "biting" involving Balaustium, three of which were associated with structures. The evidence for the bites was very strong, including a sample submitted by an entomologist who was bitten on the arm while working in an entomology museum (how dare they!). The curious thing about the story is that while Dr.
Newell stumbled across several cases in a relatively short period of time, prompting him to predict this mite was a growing problem, it's been crickets (silence) since then. None of the major texts or reference books I scanned so much as mention this family of mites, despite their being common in the landscape. Even Ed Riley, Texas A&M University's crack assistant museum curator, was not familiar with these mites, though he admits, like me, to not paying too close attention to red mites he sees outdoors. Newell admitted in his paper that he had no idea what species of Balaustium he was dealing with in his biting cases, and I don't believe anyone knows what species are common in Texas. But I will be more interested and paying closer attention the next time I see my brick mailbox covered with red mites. [Request: I have little doubt that some of you get questions from customers about little red mites crawling on the sides of homes and sidewalks this time of year. I would be interested to know if you or anyone you meet believes they have experienced bites from these little critters.]
Victory in the Battle against Brucella: From bench to battlefield

Let no one ever say that Allison Rice-Ficht, Ph.D., is anything short of tenacious. For years she has doggedly, tirelessly investigated some of the world's most dangerous diseases and how to fight them in new and innovative ways. Now, after six years of focused research, Rice-Ficht, director of the Center for Microencapsulation and Drug Delivery at the Texas A&M Health Science Center (TAMHSC), her co-principal investigators and team are nearing completion of the first human Brucella vaccine. Brucella bacteria cause brucellosis in both humans and animals. In humans, brucellosis is a chronic disease characterized by high fever and incapacitation of the infected individual for several days at a time, recurring periodically thereafter. If left untreated it can induce cardiovascular and osteoarticular disease, and it can cross the blood-brain barrier to cause neurological symptoms. Because Brucella is considered a select agent by the Centers for Disease Control and Prevention, meaning it could readily be weaponized, the vaccine would be used primarily as a biodefense inoculation for military personnel. To develop a vaccine with military applications for humans, Rice-Ficht received a $2.6 million grant in 2007 from the Department of Defense (DOD), specifically the Military Infectious Disease Research Program, part of the U.S. Army Medical Research and Materiel Command. Her co-principal investigators on the DOD grant include her husband, Thomas Ficht, Ph.D., professor in the Department of Veterinary Pathobiology at Texas A&M University's College of Veterinary Medicine and Biomedical Sciences, and James Samuel, Ph.D., chair of the TAMHSC College of Medicine Department of Microbial and Molecular Pathogenesis. Since 2007 the original grant has been renewed twice, once in 2009 and again in 2011, for a total of an additional $1.6 million. It will conclude this December.
As an international expert on Brucella and its infection of animals, Ficht has focused his research on genetically incapacitating the agent, supported by the NIH via the Western Regional Center of Excellence at UTMB, while Rice-Ficht and her team have concentrated on an improved delivery system to make the vaccine stable at room temperature and to promote safe oral ingestion. She calls it a “pocket vaccine,” and it will allow military personnel to carry capsules in their pockets for oral consumption in crisis situations – akin to a stick of gum. Before the pocket vaccine is ready for distribution, Rice-Ficht explained, an injectable form of the vaccine must precede the portable capsule version. “We are now addressing packaging issues so that the future ‘pocket vaccine’ may be distributed at room temperature and without the assistance of medical personnel,” she said. “We must first make sure the injectable form is usable and effective.” After more than 18 months, a patent application is now pending on the vaccine and vaccination studies in animal models are concluding. In recent animal studies, no side effects were detected, and the vaccine has even been successful in animal models that have been immunocompromised. Currently, under the DOD grant, the vaccine is being developed solely for human immunization, and Rice-Ficht hopes to have it in clinical human trials in the near future. It also shows promise for use in animals, primarily livestock like cattle and goats, which are among the most common carriers of brucellosis. Both Rice-Ficht and Ficht hope to maximize these animal-human parallels in support of Texas A&M’s One Health initiative. When the vaccine is ready for production, and through a joint effort between Texas A&M University, Texas A&M Health Science Center and other collaborators, INCELL Corporation in San Antonio will provide pilot-scale manufacturing. 
Large-scale manufacturing may be performed in the future by the National Center for Therapeutics Manufacturing (NCTM), the first multidisciplinary workforce education institution and biopharmaceutical manufacturing center, at Texas A&M University. “We see this large-scale production in the future with NCTM,” said Rice-Ficht. “Such collaboration maximizes the resources of the Texas A&M System.” And even though one part of Rice-Ficht’s work may be coming to a close, she keeps an eye trained to the future. Since that initial grant in 2007 she has worked to leverage the DOD funding for additional aid, including a prestigious Gates Foundation grant for $100,000 in 2009 for a related research project. Additionally, Rice-Ficht’s and Samuel’s work will continue thanks to a Defense Threat Reduction Agency (DTRA) grant to Samuel. The DTRA grant will support the continued research of Q fever, a bacterial infection that humans acquire after contact with infected animals or exposure to contaminated environments. Without missing a beat, Rice-Ficht adds that she will continue to contribute to ongoing projects with U.S. Army Medical Research Institute of Infectious Diseases for controlled release viral vaccines. Since 1984, Rice-Ficht has been a faculty member of the TAMHSC College of Medicine in the Department of Molecular and Cellular Medicine. In addition to being named a Texas A&M Regents Professor in 2005, Rice-Ficht currently serves as Associate Vice President of Research for the Texas A&M Health Science Center. Contributed by Lindsey Bertrand
BEIJING, Nov. 26 (Xinhua) -- China is scheduled to launch the Chang'e-3 lunar probe to the moon in early December, marking the first time a Chinese spacecraft will soft-land on the surface of an extraterrestrial body, an official said Tuesday. Chang'e-3 consists of a lander and a moon rover called "Yutu" (Jade Rabbit). The lunar probe will land on the moon in mid-December if everything goes well, said Wu Zhijian, spokesman for the State Administration of Science, Technology and Industry for National Defence. The Chang'e-3 mission is the second phase of China's lunar program, which includes orbiting, landing and returning to Earth. It follows the success of the Chang'e-1 and Chang'e-2 missions in 2007 and 2010. Chinese scientists have made technological breakthroughs for the Chang'e-3 mission, which will be the most complicated and difficult task in China's space exploration, Wu said. "More than 80 percent of the technologies adopted in the mission are new," he said. The mission will see China's first soft landing and exploration on an extraterrestrial object, remote control of the lunar probe and deep space communications, Wu said. Scientists must ensure a timely launch because there are only a few narrow launch windows, and different trajectory parameters have to be adopted quickly as the intervals between the windows are very short, he said. Many technologies will be used to ensure the probe makes a soft landing on the moon's surface under low-gravity conditions, Wu said. The rover will separate from the lander to explore areas surrounding the landing spot, he said. The lunar program will also see breakthroughs in remote control between the moon and Earth and in the rover's survival on the lunar surface. Technologies of high-precision observation and control as well as lunar positioning will be used in the mission, which also includes experiments that would be extremely difficult to conduct on Earth's surface, he said.
China names moon rover "Yutu"

BEIJING, Nov. 26 (Xinhua) -- China has chosen the name "Yutu" (Jade Rabbit) for its first moon rover, after a worldwide online poll challenged people to come up with names. Li Benzheng, deputy commander-in-chief of China's lunar program, announced the name at a press conference on Tuesday.

China's Chang'e-2 lunar probe travels 60 mln km

BEIJING, Nov. 26 (Xinhua) -- Lunar probe Chang'e-2 is more than 60 million kilometers away from Earth and has become China's first man-made asteroid, a spokesperson said Tuesday. Still in good condition, Chang'e-2 is heading for deep space and is expected to travel as far as 300 million km from Earth, the longest voyage of any Chinese spacecraft, Wu Zhijian of the State Administration for Science, Technology and Industry for National Defence (SASTIND) told reporters at a press conference.
a belt of ionized hydrogen surrounding the earth at the outer limit of the exosphere.
Dictionary.com Unabridged Based on the Random House Unabridged Dictionary, © Random House, Inc. 2019
The halo of far-ultraviolet solar light that reflects off the Earth's exosphere.
The American Heritage® Science Dictionary Copyright © 2011. Published by Houghton Mifflin Harcourt Publishing Company. All rights reserved.
Creatures found in DK's Ocean: The Definitive Visual Guide

Hello fellow Marine Biologists! It's Tuesday, and you know what that means! Yes, it's time to look at some more dynamic sea creatures. Now, let's go for a dive!

- Mandarinfish (Synchiropus splendidus)
Size: Up to 2 1/2 in
Depths Found: 3-60 ft in tropical waters of the southwestern Pacific
The Mandarinfish is one of the most colorful of all sea creatures found in coral reefs. Its skin is covered in a special slime that has a distinctively bitter taste, used as a defense against predators. The species is very popular in aquariums; however, it is quite difficult to maintain.

- Dugong (Dugong dugon)
Size: 8-13 ft
Depths Found: Coastal shallows
Dugongs are incredibly weird creatures of the sea. They are blimp-shaped animals with crescent-shaped tails and broad heads. They feed on sea grass, using their long snouts to root out food in the mud. Unfortunately, dugongs are extinct in the Mediterranean; however, the population is thriving in Australia.
The Dermatology Nurses’ Association would like to make sure you are SunAWARE. Learn the acronym that can help you be safe in the sun. Remember, anyone can develop skin cancer anywhere on their body. Report new or changing skin growths or spots to your health care or dermatology provider. The Surgeon General’s Call to Action to Prevent Skin Cancer calls on partners in prevention from various sectors across the nation to address skin cancer as a major public health problem. Federal, state, tribal, local, and territorial governments; members of the business, health care, and education sectors; community, nonprofit, and faith-based organizations; and individuals and families are all essential partners in this effort. July is UV Safety Month The skin is the body's largest organ. It protects against heat, sunlight, injury, and infection. Yet, some of us don't consider the necessity of protecting our skin. So the US Department of Health and Human Services has put together some Sun Safety Resources and a Sun Safety Quiz to test your UV IQ. Click here to see them and take the quiz. Patient Support Groups The Dermatology Nurses’ Association makes no recommendation or endorsement of the organizations or resources listed. They are provided solely as a public service. Individuals must determine for themselves how to use the information provided. When making medical decisions, a physician should be consulted. Addresses and telephone numbers may have changed since this list was first posted.
Are we running out of useful ways to spend our research resources? Scientists have developed a system which enables people to stroke a chicken over the internet. It's seen as the first step to virtual physical interaction, reports Wired News. The Touchy Internet system was created by researchers at the National University of Singapore. Users touch a chicken-shaped doll which duplicates the actions of a real chicken through a webcam link. Touch sensors on the doll send 'tactile information' over the internet to a second computer near the chicken. This computer triggers tiny vibration motors in a lightweight jacket worn by the chicken, meaning the chicken feels the user's touch in the exact same place as the doll was stroked. "This is the first human-poultry interaction system ever developed," said Professor Adrian David Cheok who has been developing the technology for nearly two years. "We understand the perceived eccentricity of developing a system for humans to interact with poultry remotely, but this work has a much wider significance." Remote interaction could allow people who are allergic to dogs and cats to caress their pets remotely. Used in zoos, it may allow visitors to pat a lion or scratch a bear.
Electronegativity is a chemical property that describes how strongly an atom attracts electrons to itself. Values for electronegativity run from roughly 0 to 4 on the Pauling scale. Electronegativity is used to predict whether a bond between atoms will be ionic or covalent. It can also be used to predict whether the resulting molecule will be polar or nonpolar. This information is available in periodic table form as well. This color periodic table shows the trends of electronegativity as you move around the table. Click the image for a full-size version, or download a PDF.
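As a rough illustration of how electronegativity differences predict bond character, the sketch below classifies a bond from the difference between two Pauling values. The 0.5 and 1.7 cutoffs are common textbook rules of thumb rather than hard physical boundaries, and the small table of sample values is included only for the demo.

```python
# Sketch: classify a bond by the electronegativity difference between
# two atoms. Cutoffs (0.5 and 1.7 on the Pauling scale) are common
# textbook rules of thumb, not exact physical boundaries.

PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44,
           "F": 3.98, "Na": 0.93, "Cl": 3.16}  # a few sample values

def bond_type(a, b):
    diff = abs(PAULING[a] - PAULING[b])
    if diff < 0.5:
        return "nonpolar covalent"
    elif diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("H", "H"))    # identical atoms -> nonpolar covalent
print(bond_type("H", "O"))    # diff = 1.24 -> polar covalent
print(bond_type("Na", "Cl"))  # diff = 2.23 -> ionic
```

The same differences can be read directly off the color periodic table: the farther apart two elements sit on the electronegativity trend, the more ionic their bond.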
Heat Exchanger Types

Heat exchanger types include kettles, plates, tubulars, spirals and scraped surface units.

Kettle heat exchangers are simply tanks with an outer jacket designed to contain heating or cooling media. The product is heated or cooled while being mixed, blended or agitated. However, kettles are neither thermally efficient nor continuous in operation, and product moisture content is often lost during processing.

Gasketed plate heat exchangers consist of a number of corrugated metal sheets or heat transfer plates clamped together in a frame. The adjoining plates are spaced by gaskets, which form a narrow, uninterrupted space through which liquid flows. The fluids are separated by the gaskets and flow through alternate channels (passes). By arranging these channels in groups, and by further including intermediate separating/connecting plates, several fluid streams can be accommodated at once.

Special tubular heat exchangers can be of several different designs. Double- or triple-pipe units consist of two or three concentrically mounted tubes. The heating or cooling medium flows through the inner tube; on a triple-tube arrangement, the medium can also pass through the annular space between the intermediate and outer tubes. Product travels in the opposite direction through the annulus between the two inner tubes or through the inside tube.

The cost per square foot of heat exchange area can range from a low of $25 for basic plate types to as high as $1,400 for some scraped surface units (table 1). However, as with any equipment purchase, the initial capital investment is often far less important than the system's ability to meet the plant's goals and requirements.

Evaluating the Application

Although price is often the first criterion evaluated by purchasing managers, other important selection criteria should be weighed as well. Table 2 outlines some basic application guidelines for the different heat exchanger types, and table 3 lists some design limitations.
However, keep in mind that new equipment is continuously being developed to handle increasingly complex applications. Also, some products might require a specific type of heat exchanger to avoid adverse reactions during thermal processing. Ideally, the product should be tested in the unit before the purchase is made. Questions to ask include: How does the product react and taste? Did its color change? What is the general condition of the product? If the product is starch-based, has it bloomed properly? These characteristics all can be quickly ascertained once thermal processing is completed. Although one type of exchanger might be quite capable of handling the product, it might not perform as well or produce the same level of quality as another type in the same application.

It is important to consider both near-term and longer-range goals when evaluating the different heat exchanger types. For example, will the plant eventually need to process a product with an acid-like pH? Will it ever handle higher-solids type products? Will any of the intended products exhibit chemical changes during processing? In many cases, a plant's future needs are unknown. However, any heat exchanger selected should run trouble-free at full production levels for the longest possible time. A heat exchanger also should work well in conjunction with other equipment in the plant, and it should be capable of being inspected and maintained with a minimum amount of downtime to the entire process (table 4).

Making the Right Selection

Many applications require combinations of heat exchangers to properly produce the end product. This might involve a kettle for preheating, a plate for handling the “carrier” liquid and a scraped surface heat exchanger for the final chilling/deep cooling.
By understanding the different types of heat exchangers and the requirements of the application, facilities at which process cooling is critical can select the right equipment for their application and optimize their investment.

Nonviscous-to-Nonviscous Liquids (e.g., wine coolers) -- For high-temperature liquids, a plate exchanger with special gaskets or a spiral exchanger can be used, but these types might not meet the sanitary requirements of the application. A special tubular heat exchanger is appropriate but expensive. For high volumetric flow rates, pressures or temperatures, a shell-and-tube type can be used, particularly if carbon steel is suitable as a material of construction.

Nonviscous Liquids to Steam (e.g., sugar solutions) -- A plate exchanger has a high heat transfer rate and is especially applicable with steam temperatures less than 270°F (132°C) with standard elastomers. A shell-and-tube type applies if it can be made in carbon steel or copper alloy. If high-pressure steam is used, a spiral or shell-and-tube heat exchanger is adequate.

Viscous Liquids to Water or Steam (e.g., a corn syrup heater) -- Depending on the viscosity limit, a scraped-surface heat exchanger, special tubular or plate heat exchanger is applicable. If the requirement is non-sanitary, a shell-and-tube or spiral can be used.

Viscous-to-Viscous Liquids (e.g., an oil/oil cooler) -- A high heat transfer coefficient and high turbulence due to even flow distribution are important criteria. A plate exchanger is the most efficient due to the turbulent flow it provides on both sides. However, plate heat exchanger regenerators are restricted to low viscosities. With high viscosities, a special tubular exchanger might be required. A spiral type can be used for liquids with low or medium viscosities because it provides good flow distribution for turbulent flows in the two single passages. However, the pressure drop must be sufficiently high to yield a velocity that creates turbulence.
(The Reynolds number should be greater than 1,000.)

Heat-Sensitive Liquids (e.g., a protein solution heater) -- The temperature and holdup time are the deciding factors; thus, small channel volumes, high heat-transfer coefficients and even flow distribution are important. Plate exchangers fulfill these requirements best. Spiral type and special tubular styles have the longest holdup times; however, strict temperature control is essential. Wall temperature and fouling considerations might be important with heat-sensitive or corrosive liquids. A scraped-surface heat exchanger with a large-diameter rotor shaft becomes the only solution when viscosities or solids prohibit the use of other types.

Vapor Condensation (e.g., a steam condenser) -- If a stainless steel or high-alloy material must be used, a spiral type exchanger is often the best solution. If extensive and frequent manual cleaning is necessary, a plate exchanger (possibly a box condenser) can be used. A shell-and-tube exchanger is applicable if carbon steel can be used throughout, or at least for the shell.

Cooling Water (e.g., using seawater as a cooling medium) -- Cooling water or seawater circulating in a heat exchanger as the cooling medium can best be handled by either plate or shell-and-tube heat exchangers if they are fabricated out of specialized alloys such as titanium, aluminum or bronze.

High-Temperature Applications (e.g., a vegetable oil heater) -- High-temperature applications usually require custom-made heat exchangers because allowance for high thermal stresses is extremely important. A plate heat exchanger with a special compressed gasketing material can be used for some limited applications. Shell-and-tube and spiral exchangers also are suitable.

High-Viscosity, Fouling or Crystallizing Applications (e.g., peanut butter, sauces, starches, gravies) -- Scraped-surface heat exchangers are the only solution because the heat transfer wall must be continually scraped clean.
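The turbulence criterion for viscous-to-viscous duty (Reynolds number above roughly 1,000) can be checked with a quick calculation. This is a sketch only: the fluid properties and channel dimension below are illustrative assumptions, not values from the article.

```python
# Quick check of the turbulence criterion: Re = rho * v * D / mu,
# where rho is density, v is channel velocity, D is a characteristic
# dimension (e.g., hydraulic diameter) and mu is dynamic viscosity.
# All numeric values below are assumed for illustration.

def reynolds(rho, velocity, diameter, viscosity):
    """Reynolds number: (density * velocity * characteristic length) / viscosity."""
    return rho * velocity * diameter / viscosity

rho = 1000.0  # kg/m^3  (water-like density, assumed)
v = 0.5       # m/s     (passage velocity, assumed)
d = 0.02      # m       (hydraulic diameter, assumed)
mu = 0.005    # Pa*s    (medium-viscosity liquid, assumed)

re = reynolds(rho, v, d, mu)  # = 2000 for these assumed values
print(f"Re = {re:.0f} -> {'turbulent enough' if re > 1000 else 'increase velocity'}")
```

With a more viscous product (larger mu), the same velocity yields a lower Re, which is why the text notes that sufficient pressure drop, and hence velocity, is needed to stay in the turbulent regime.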
Today I’ll show you how to draw an adorable / cute cartoon frog with basic geometric shapes and the number three shape. It is super easy to learn how to draw and we have broken it down into simple steps below. The step by step drawing tutorial is below … kids of all ages will love it. Happy Drawing! Learn How to Draw Cute Cartoon Baby Frog from Number 3 Shape Simple Steps Drawing Lesson for Beginners Written-Out Step by Step Drawing Instructions (Step 1) Draw an oval. (Steps 2 and 3) Draw #3 shapes. (Step 4) Draw curved lines. (Step 5) Erase some lines and then draw ovals for eyes. (Step 6) Draw more curved lines. (Step 7) Draw 2 lines for each arm. (Step 8) Draw curved lines for the ends of the arms.
Source: Oxford Dictionaries I had absolutely no idea what this word meant the first time I heard it. In fact, if anyone had asked me back then to guess the definition, I might have said that it probably had to do with slime or gooeyness of some sort. Of course, this seems like a rather silly guess when you consider that one of the earliest memories I can recall of hearing this word in context was watching Pain greet Hades as “Your Most Lugubriousness” in the 1997 Disney film Hercules. Eternally ablaze the god of the underworld may have been, but slimy he most certainly was not. But what else was he for sure? Dismal. When something is described as “lugubrious”, it has a mournful and gloomy air to it (e.g. lugubrious ballads). This definition comes from the word’s root in the Latin verb lugere, meaning “to mourn” or “grieve”. It’s worth noting, however, that of several different synonyms for the word “glum”, “lugubrious” is possibly the heaviest. While a “sullen” person is naturally ill-humored and “melancholy” is a somewhat chronic form of sadness, anything “lugubrious” is dismal to the point of exaggeration. This may explain why it’s one of my best friend’s favorite words; his sense of humor sometimes involves slightly melodramatic descriptions of his own melancholy observations! What are your thoughts on this word? Any suggestions for future “Word of the Week” featured words?
“The right kind of education while encouraging the learning of a technique, should accomplish something which is of far greater importance; it should help man to experience the integrated process of life.” J. Krishnamurti

The areas of our Education Programs include 1) Sanctuary Schools 2) Teacher Enrichment 3) Design of educational resources for rural communities 4) Environment Education

Sanctuary Schools – Education for children of the tribal communities

The Sanctuary Schools were set up in 2004 specifically for children from 5 remote tribal villages located on the fringes of the Wildlife Sanctuary. (History of Sanctuary Schools) The intent of the Sanctuary Schools is to nurture a creative, intelligent and integrated human being. The educational program values compassion, culture, traditional knowledge, the lives of the people and local ecology. The learning attempts to integrate ecological conservation and education, besides introducing traditional knowledge and livelihood skills. The Sanctuary Schools are Government recognized. We provide education, mid-day meals, clothes, textbooks, notebooks and other school supplies free of cost. (Features of Sanctuary School) The schools are run with community involvement. The education materials are created taking into consideration the local culture, traditions and practices. (How do we run the Sanctuary Schools?) We have set up a Resource Center that trains the teachers, creates training materials and also manages a small library to share resources across these various schools. Since 2004, this project has made a difference in the lives of children of the tribal community. This is truly inspiring and motivates us to do more. (Sanctuary Schools – Making a difference)

Teacher enrichment, a very important component in the education program, provides an opportunity for teachers from rural schools to interact with experts from different disciplines such as literary scholars, scientists, mathematicians and artists.
KEEP conducts environment education programmes for students of schools and colleges. This programme provides an opportunity for students to live close to nature, participate in forest conservation activities and interact with local communities. The Center provides guidance for field studies and project work to individual students. This is an ideal place to conduct field studies in Ecology, Biology, Geography, Forestry, Conservation, Natural Resource Economics, Sociology and Anthropology. All this with the aid of our experienced staff and a growing Nursery and Germ-Plasm Bank. Students visiting Kaigal work on the land, learn nursery techniques, help in land care activities such as constructing rain water trenches, afforestation, seed collection for the germ plasm bank and so on.
The African Renaissance Monument (French: Le Monument de la Renaissance africaine) is a 49m tall bronze statue located on top of one of the twin hills known as Collines des Mamelles, outside of Dakar, Senegal. Built overlooking the Atlantic Ocean in the Ouakam suburb, the statue was designed by the Senegalese architect Pierre Goudiaby after an idea presented by president Abdoulaye Wade and built by a company from North Korea. It is the tallest statue in Africa.

Declared the tallest statue in the world in 1967, the Motherland Calls, also called Mother Motherland, Mother Motherland Is Calling, simply The Motherland, or The Mamayev Monument, is a statue in Mamayev Kurgan in Volgograd, Russia, commemorating the Battle of Stalingrad. It was designed by sculptor Yevgeny Vuchetich and structural engineer Nikolai Nikitin.

Marcus Aurelius, AD 176. One of the great bronze sculptures of all time, it influenced royal representations down to our own time (Saint-Gaudens’ Sherman statue readily leaps to mind).
First Steps: Historic Church Restoration

The beginning of a historic church restoration or new design requires research of many kinds. This blog explores the first major steps that propel the project forward. The first two, Paint Investigation and Plaster Survey, are the combined efforts of on- and off-site research and analysis. Both of these studies survey the walls: their condition, history, and stability. The mockup sheds light on what the space can become by choosing an area in the church to introduce the original or new design: color schemes and patterns.

THE FIRST STEPS IN RESTORING A CHURCH’S DECORATIVE PAINT

A paint investigation, as the name suggests, analyzes the layers of paint. Many churches were painted solid colors, often grey or white, in the 1970s; a paint investigation allows us to uncover the decorative paint schemes and colors that lie beneath the whitewash. The lack of decoration combined with a complex structure can actually weaken and distract from the architecture. Design is meant to complement and support the architecture. Without design, unique features in the building are lost. To further understand the basics behind the relationship of architecture and decoration, read Owen Jones’s Grammar of Ornament.

THE FIRST STEPS IN RESTORING PLASTER IN A CHURCH

The plaster survey considers the condition of the plaster throughout the building. With access to the attic, we can determine through an analysis of the plaster its condition, age, material, and whether repairs need to be made. A plaster survey is an analysis that determines the cause of the failure in order to explain the effect. The survey also provides the means to establish an efficient plan of action to properly repair the damage.

The mockup provides a look into the future, foreshadowing the beauty that is to come. In either a side chapel or an aisle bay, we can make a full-scale, on-site model of the proposed design scheme.
This process allows the church to “try on” the design, encouraging curiosity and excitement among the parishioners. Mockups provide a roadmap for budgets, fundraising, and a window into the final product. Witnessing the mockup being painted on site demystifies the process of restoration, allowing the parish to understand the creative process in action.

WHERE DO YOU STAND IN THE PROCESS?

Asking yourself a few simple questions can be a great way to determine where you stand in the restoration process, and which services your church might benefit from.

If the church is solid grey or white and was built before 1950, chances are there are decorative paint schemes and designs beneath the whitewash. If you are interested in repainting the church, this is an excellent opportunity to better understand the church’s decorative history. With the discoveries from a historic paint analysis, the church could be restored to its original interior, or a new design can be developed based on the original.

If there are cracks in the ceiling plaster, whether the cause is water damage, time or unknown, the plaster may be failing. If the church has in the past updated, or is planning to update, HVAC, lighting or sound systems, the plaster may be compromised due to the stress of new equipment. If plaster falls into the space below, the plaster is most certainly failing and a risk to those who frequent the building. If you’re planning a decorative paint campaign, it is in your best interest to inspect the plaster before embarking on endeavors dependent on the stability of the plaster.

If there is uncertainty about how the colors and designs will look in the church, a mockup is a cost-efficient way to visualize the completed effect of the project. If there is need of fundraising, a mockup acts as an excellent way to engage parishioners in the project. The mockup gives the congregation a hint of what might become and encourages the project into production.
If unsure, most conservators would welcome an initial conversation to guide you in the right direction. Conducting a study at the beginning of a project, before other work has begun, can often save you money in the long run by helping you anticipate unseen issues and challenges, avoid costly mistakes, and understand your structure on a more holistic level.
This is a non-standard measurement activity. I use it to review with my students before going into our measurement activities! Enjoy!... Great and cute Math measurement activities if we have time after all the required stuff!! :) Measurement: Students place wiggly worms in order according to length. Pre-cut strips of green construction paper in various lengths and have the children pick a piece. They cut out and decorate their worms any way they want. Put them in order as a class. Class Messages. This one is for Measurement and planting seeds - Jack and the Beanstalk. You could write messages and sign them from different characters/animals to fit your weekly theme/story...Brown Bear, Pete the Cat, Little Quack, Goldilocks, etc.
By Robert R. Thomas

With an engaged group of 25 in the basement of Flint Public Library recently, Hubert Roberts led a conversation about “The New Jim Crow,” both Michelle Alexander’s eponymous book and the reality. The conversation was part of the Tendaji Talks series, sponsored by Neighborhoods Without Borders, whose focus is systemic racism.

Roberts, a Flint educator, mentor and minister, opened the conversation with the proposition that the American justice system is not broken, as many critics suggest; instead, he said, it works exactly as designed by our founding fathers. He backed his claim by reading from the Declaration of Independence. When he finished reading, the group confirmed through vocal feedback Roberts’ assertion that the focus of the founding fathers was on property and the white men who owned it. The few controlled the many, he said.

“America is a business,” said Roberts, “and the business is white and male.” White landowners took precedence over people without property. Women could not vote. Slaves were property, not people. So much for ‘all are created equal’ and equally protected by government.

“We live in a culture of lying,” said Roberts. “History is critical to understanding American culture.” He then laid out some American history. After the Emancipation Proclamation, Roberts explained, slaves may have been freed, but there was no equality, no compensation of any sort, nor any jobs. After Reconstruction, from 1877 until the mid-1960s, Jim Crow laws and customs prevailed to legitimize anti-black racism. In 1896 the U.S. Supreme Court helped undermine the Constitutional protections of blacks with its infamous Plessy v. Ferguson decision, which legitimized the Jim Crow laws and the Jim Crow way of American life. Its foundation rests on the premise that whites are superior and discrimination against blacks is acceptable.

Roberts continued his history lesson by noting the derivation of the term Jim Crow.
“Jump Jim Crow” is a song and dance from the early 19th century performed in blackface by a white comedian who toured the country as “Daddy Jim Crow”. The song may have been inspired by the song and dance of a physically disabled African slave named Jim Cuff or Jim Crow. However it all came to be, the fact is that by 1838 the term “Jim Crow” and the mockery of blackfaced minstrel shows presented African Americans in a demeaning light, setting the stage for the lie of “separate but equal.” Segregation reigned.

Another needed historical correction, according to Roberts, is that the New Jim Crow is the Old Jim Crow. “They just changed the names in the New Jim Crow,” he said. “No matter the Crow, the reality remains segregation de jure.”

Roberts then ran through a litany of systemic “new Jim Crow” operations that he suggested parallel the “old Jim Crow” caste system:

* law and order
* get tough
* war on drugs
* disposable people
* dividing the poor and the working class via fear and resentment
* mass incarcerations

He pursued the topic by playing a video of part of a conversation between Bill Moyers and Michelle Alexander about her book The New Jim Crow – Mass Incarceration in the Age of Colorblindness.

“To fully understand what’s happened in this country,” said Alexander, “look back at least 40 years to the law and order movement that was born in the midst of the civil rights movement.

“Civil rights activists were beginning to violate segregation laws, laws they felt were unjust….Segregationists said this was leading to the breakdown of respect for law. But then this law and order movement began to take on a life of its own….The Get Tough movement and the War on Drugs were a backlash against gains of black Americans in the Civil Rights movement.”

Alexander said that a major result of such policies has been mass incarceration on a scale unknown in human history.
She added that the majority of those incarcerated in America are impoverished people of color who, once they are swept into this justice system, lose whatever gains persons of color had made during the Civil Rights Movement. She noted that there are more people incarcerated today than the four million slaves emancipated after the Civil War.

“Today there are over seven million people in this country under some form of the justice system,” added Roberts. “Why are we in this caste system today?” he asked at the conclusion of the video.

“Mitch McConnell done said, ‘We gonna do all we can to make sure Barack Obama will be a one-term president.’

“In the interests of this country, even if you are from many different political parties, you should not want your president to fail. That’s insane,” Roberts asserted.

What is fueling the New Jim Crow caste system is what fueled the Old Jim Crow system, he contended. “Back to what I said earlier, America is a business,” he said. “This country was founded, and was taken from people that were already here, to develop business. It’s always a commodity. How can I exploit it? The concept of capitalism, guys, is I can exploit those that have no power.”

He emphasized that the few who have power control those other people in that environment, as evidenced currently by the mounting police shootings of unarmed black men and mass incarcerations of people of color.

Roberts concluded with a briefing on prison labor, the contemporary plantation producing product for private companies. “The prison systems today are on the Fortune 500. Michael Jordan has stock in prisons. And all you guys who have 401ks, many of your pension funds are in stock in prisons….Right now you have prisons across America that are making products for IBM, Motorola, Compaq, Honeywell, Microsoft, Boeing, Neiman Marcus, Victoria’s Secret, Whole Foods, Sears, Walmart and more.
“So what’s happening is you have people in prison that are working making less than twenty-five cents a day that are producing products that could be jobs for people that are out in the community. The answer to this prison industrial complex is to close the prisons.”

Roberts wrapped up his presentation by stating: “Basically Jim Crow law means white people have maintained their power by any means. And history, one thing about history, it does show us how, unless we are committed to work together to change some things, things will be repeated.”

The Tendaji Talks continue next month on the first Tuesday and third Thursday of the month at 6 p.m. at Flint Public Library.

Robert R. Thomas can be reached at [email protected].
Interview puzzles are used by some employers as a mechanism for screening job applicants. The puzzles usually consist of riddles or mathematical problems that are intended to test the logical and quantitative skills of applicants. In some cases, puzzles are used to determine how an applicant tackles difficult problems and functions under stress. Interview puzzles are more common in the technology industry and may be used to screen out applicants both before and during the interview process.

Employers use interview puzzles to determine if an applicant is suited to the specific tasks of a job. If a software company receives a large number of applications from software developers, for example, it might screen out applicants by asking them to solve a specific programming puzzle. Those who successfully complete the puzzle usually move on to solve a more difficult question, eventually leading to a smaller number of applicants who are then granted in-person interviews.

During an interview, an employer might use a puzzle to determine how an applicant goes about solving problems. How quickly the applicant solves the problems and the steps he takes to do so can be important signs of suitability for the particular position. Some employers and interviewers believe that interview puzzles are a more accurate indication of an applicant’s skills and abilities than information on a resume or answers about past accomplishments.

Interview puzzles are sometimes used to gauge how an applicant deals with situations in the workplace. Being asked to solve an ambiguous riddle, for example, may be less about arriving at the correct answer and more about how an individual deals with uncertainty. In other cases, asking an applicant to solve a highly complex puzzle under time constraints might be the employer’s way of assessing how well an applicant manages a high-pressure situation.
Typical interview puzzles vary depending on the specific industry, employer and interviewer. An interview for a research position might ask the applicant how many phlebotomists there are in the world and how he would go about determining the answer. An electronics manufacturer might ask the applicant how he would program the television remote control to also turn on another home appliance.

An effective response to an interview puzzle requires reliance on inherent strengths and acquired skills. People generally think more clearly and perform better when relaxed, so adequate rest prior to the interview is recommended. In cases where the interviewer’s intent is to gauge how an applicant deals with trick questions or stressful conditions, tuning into the interviewer’s intent is important for delivering a genuine and effective response.
Firstly, speak to the child using simple words, pictures or simple sentences (this depends on the child’s age as well). Gradually, let the child repeat these words or sentences. This will help the child to start having conversations in English.

Secondly, read short stories. For beginners, you may start with short and simple-to-understand stories. Keep reading these stories a few times. The more the child keeps hearing these words, the more words they begin to understand. There are no shortcuts in teaching children to read. The ability to understand the language is important. Parents and educators must focus on this aspect as well.

Have you ever wondered how babies learn? When a baby is born, the baby only listens. Even though babies can’t speak, as they grow older they understand, or rather sense, what we are saying. In other words, it is the natural instinct of babies to figure out the meaning of the words they hear. That is how babies or toddlers understand the meaning of simple words like “come here”, “don’t go there” and “eat this”, and so on.

When learning to read, listening skills are very important. Listening skills help to develop the child’s ability to understand the language. Once children understand the language, reading is then very easy. They then learn to make sounds, and gradually they try to repeat the words they have heard before. With this they learn to speak. Thereafter, they can recognize pictures, letters of the alphabet, or words. In other words, a child’s natural learning sequence is as follows: Listen (then) Speak (then) Read. In learning to read, this sequence is equally important.

The best approach in teaching children how to read is by combining both the Phonics and the Look & Read methods. In the ‘Mom, I Can Now Read’ program, both the Phonics and Look & Read methods are used to teach children how to read words. However, words alone do not make any sense.
Hence, to make reading interesting and easy, these words are used in Stories & Rhymes. As children gradually learn to read, the speed at which they read is important. Reading fast will help children to understand what they are reading. To help children read fast, Word Recognition and Rapid Read activities are gradually introduced. Phonics does play an important role in learning to read. However, to get better results, it is best to combine with other methods.
Jesus only gave us two commandments, and both of them were positive. The reference is, of course, to Jesus' reply to the 'lawyer' who was 'testing' him. The passage comes in all three Synoptic gospels, though in quite different places.

Matthew 22:34-40: Hearing that Jesus had silenced the Sadducees, the Pharisees got together. One of them, an expert in the law, tested him with this question: "Teacher, which is the greatest commandment in the Law?" Jesus replied: "'Love the Lord your God with all your heart and with all your soul and with all your mind.' This is the first and greatest commandment. And the second is like it: 'Love your neighbor as yourself.' All the Law and the Prophets hang on these two commandments."

Mark 12:28-31: One of the teachers of the law came and heard them debating. Noticing that Jesus had given them a good answer, he asked him, "Of all the commandments, which is the most important?" "The most important one," answered Jesus, "is this: 'Hear, O Israel: The Lord our God, the Lord is one. Love the Lord your God with all your heart and with all your soul and with all your mind and with all your strength.' The second is this: 'Love your neighbor as yourself.' There is no commandment greater than these."

Luke 10:25-28: On one occasion an expert in the law stood up to test Jesus. "Teacher," he asked, "what must I do to inherit eternal life?" "What is written in the Law?" he replied. "How do you read it?" He answered, "'Love the Lord your God with all your heart and with all your soul and with all your strength and with all your mind'; and, 'Love your neighbor as yourself.'" "You have answered correctly," Jesus replied. "Do this and you will live."

There are things to note about the differences here.
As is common, Mark's account of the opening dialogue is longer and more detailed than either Luke's or Matthew's; Mark includes the introduction to the Shema from Deut 6.4 that Jesus quotes, and Jesus goes on to commend the 'lawyer' and note that he is 'not far from the kingdom of God.' [We need to note the quite different sense of 'law' and 'lawyer' here; we are looking at a dispute about religious texts, and debates between the religious 'experts'; and the 'law' was the first five books of the Bible, much of which was narrative.] Luke has interpreted this, possibly for an audience less familiar with Jewish theological terms, into the promise that 'you will live', though he puts the answer on the lips of the questioner rather than of Jesus. Both Matthew and Luke present the question as somewhat hostile, whilst Mark's framing is more positive. The second thing to be aware of is that the request for a summary of the law has some very clear parallels. In Jesus' day, two of the main rabbinical schools were those of Hillel (first century BC) and the later Shammai (50 BC–AD 30). Hillel and his school were generally thought to be more relaxed and open in their thinking, whereas Shammai and his school were often more rigorist—and so Jesus is often compared with Hillel in his approach. One famous account in the Talmud (Shabbat 31a) tells of a gentile who wanted to convert to Judaism. Such conversions happened not infrequently, and this individual stated that he would accept Judaism only if a rabbi would teach him the entire Torah while he, the prospective convert, stood on one foot. First he went to Shammai, who, insulted by this ridiculous request, threw him out of the house. The man did not give up and went to Hillel. This gentle sage accepted the challenge, and said: "What is hateful to you, do not do to your neighbor.
That is the whole Torah; the rest is commentary—go and study it!" (It is worth noting that with regard to ethical teaching, Jesus is often more in agreement with the school of Shammai, the most striking example being that of divorce. John Ortberg summarises David Instone-Brewer's take on this on beliefnet.com.) It is important to spot what Hillel is doing here. He is not telling the would-be convert that there is only one commandment and that is all he needs to know. Instead, the man needs to go away and study Torah—but now knowing what it is fundamentally about, so that he does not fail to see the wood for the trees. There is, we might say, a mutual interpretive dynamic at work. If I want to make sense of the individual commandments, then I need to know the big picture that they are building into. But if I want to live out the big picture, I need to study the individual commandments and the detail. There seems to be something similar going on in the teaching of Jesus. It always strikes me as odd that so many read individual commandments of Jesus as if they were just features of an interesting text, and not the product of a mind that had a coherent and integrated outlook. Of course, Jesus offers us many commandments, not just two ('turn the other cheek', 'bless those who persecute you', 'do not worry', 'do not judge' and so on), so the question is: how does his summary of the law relate to his other teaching? Philip Jenson, tutor in OT at Ridley Hall in Cambridge, offers an interesting parallel in his assessment of OT law in an earlier Grove booklet, How to Interpret OT Law. Christians have often distinguished OT law under three headings—the moral, the civil, and the ceremonial—and we find exactly this division in the Thirty-Nine Articles of Religion. But such a division doesn't work very well. One difficulty is that this threefold classification is not found in either the Old or the New Testament.
On the contrary, the Old Testament often juxtaposes very different kinds of law. Within the one chapter of Leviticus 19 we find an interweaving of laws about sacrifice (ceremonial law), idolatry (religious law), false dealing (civil law) and love for neighbour (moral law). The Sabbath can be classified as civil, ceremonial and moral, and recent discussion about the special nature of Sunday shows that these aspects cannot be easily distinguished. Moreover, the Sabbath is not even Sunday, for the Jewish Sabbath runs from Friday evening to Saturday evening. Another difficulty is that applying this threefold distinction makes it difficult to learn what the Old Testament law has to teach us about politics or ecology or worship—how leaders are to behave, how we are to treat the earth, and how we are to draw near to the presence of God. (p 5) Instead, Jenson proposes a different kind of threefold classification, according to the 'level' of commandment: At the highest level the Shema seeks to address the underlying attitude of those who are being called to confirm the covenant that is being renewed. At the lowest level there are the multitude of commandments in Deuteronomy 12–26 which deal with more specific cases and circumstances. There is, however, another set of commandments that sits between the one and the many—the Ten Commandments. These are distinctive in form, being mostly negatively stated, terse, inclusive, foundational and in list form. The Decalogue is here given an abiding authority and scope, while the statutes and ordinances are more tied to the immediate context. We can compare the notion of 'middle axioms' in Christian social ethics, which were intended to sit between norms and situations and provide a bridge between the two. The specific laws may be understood as exploring how the Ten Commandments can be applied to Israel's behaviour in the land that they are about to possess.
I find it helpful to imagine the one, the ten and the many as comprising three levels in a triangle of graded number, generality and importance. The relationship between the levels is analogous. As one scholar has suggested, ‘The Shema is to the Decalogue what the Decalogue is to the full corpus of covenant stipulations.’ What this means is that we need all levels of law—and we need to read each level in the light of the other. We must remember what the overall goal of the law is—the love of God—to make sense of the details. But we also need the details in order to know what the love of God actually looks like in practice. This is exactly the dynamic we find in Jesus’ teaching. In Luke’s account of Jesus’ summary, his interrogator famously goes on to ask ‘Who is my neighbour?’, and in response Jesus tells the story of the ‘good’ Samaritan. (The shock of the story, by the way, is not so much that my neighbour belongs to another tribe or religious group, so that I have to cross boundaries to show love [though that is true], but more that it is one who belongs to another tribe who actually understands what neighbour love looks like.) In the conversation, we find Jesus moving up and down the levels in Jenson’s triangle—up to summarise the law, and down again to particularise it. We need both—when looking at the particular we need to know the overall goal, but we need to be told how that goal works out in the particular. Why? Because, as Augustine explored long ago, all our attempts at love are disordered: But living a just and holy life requires one to be capable of an objective and impartial evaluation of things: to love things, that is to say, in the right order, so that you do not love what is not to be loved, or fail to love what is to be loved, or have a greater love for what should be loved less, or an equal love for things that should be loved less or more, or a lesser or greater love for things that should be loved equally. 
(On Christian Doctrine, I.27-28) And of course 'love' always depends on what the object of love is, how it is loved, and how it sits in relation to other things that are loved. Because we are fallen, it is simply impossible to say 'Love is all' and leave it without qualification. Wolfhart Pannenberg wrote about this in his comment on the current sexuality debates (translated by Markus Bockmuehl for the Church Times): Can love ever be sinful? The entire tradition of Christian doctrine teaches that there is such a thing as inverted, perverted love. Human beings are created for love, as creatures of the God who is Love. And yet that divine appointment is corrupted whenever people turn away from God or love other things more than God. We must take seriously, in understanding what love means, Jesus' specific commandments, including his adoption of and continuity with Old Testament relational ethics. Jesus did give two positive commandments—but a whole lot more besides. In our reading and ethics, let us not divide that which God has joined together.
Amy Lin of Saffronart explores the whimsical world of Joan Miro's lithographs. New York: Refusing to be pigeonholed into any one movement, Joan Miro (1893-1983) is remembered as one of the most influential Spanish artists of the 20th century. With an artistic career spanning almost a century, Miro witnessed the rise of Surrealism and Fauvism, and influenced generations of Dadaists and Abstract Expressionists. Miro's own works evolved over the decades, becoming more abstract and imaginative with time. Miro experimented with lithographs at the most mature stage of his career. By the 1940s, he had developed a distinctive style of dark outlines, organic shapes and bold colours that evoked a sense of both sophistication and innocence. This style also drew on automatism (a Surrealist technique of drawing images from the subconscious) in rendering subjects. Miro would often lie awake in bed at night, sometimes not having eaten all day, and let images come to him in this dark and dreamlike state. In the morning, he would quickly jot them down. In its collection Dali to Damien Hirst, Saffronart features three signed, limited-edition prints by Miro that beautifully illustrate his style and imagination. Joan Miro by James Johnson Sweeney is an example of Miro's combination of childlike drawings with dramatic black outlines that dominate and dwarf the colours. Many of Miro's influences came from his beloved Catalonia, where geometric forms meet nature, flora and fauna. The Enchantment of Variations in Miro's Garden is the artist's homage to his love for nature and its mysteries. Miro's attraction to printmaking came partly from his enthusiasm for collaborating with other artists. He rejected the solitary nature of painting and embraced opportunities to work with other artists and artisans to enhance his creative repertoire. Besides artists, Miro collaborated with poets and curators to transform his prints into posters and book collections.
Exhibition Miro at the Galerie Maeght 1978 to 1979 is a lithograph created to promote a show of his work at Galerie Maeght in Paris. The print was also made into a much larger poster (160 x 120 cm) and featured on the cover of Miro’s Catalogue Raisonne of Graphic Works Volume VI. Miro himself saw endless possibilities in his lithographs. He stated, “A painting is a unique example for a single collector. But if I pull seventy-five examples, I increase by seventy-five times the number of people who can own a work of mine. I increase the reach of my message seventy-five times.” Like many great artists, Miro wanted to share his beautifully mysterious universe with as many others as he could.
New Delhi: India saw the output of its workforce decline seven per cent — equivalent to the loss of 75 billion man-hours — last year due to heatwave conditions, the 2018 Lancet report on health and climate change said on Thursday. The figure is almost four times that of China and a little less than half of the 153 billion man-hours lost globally in 2017. The report said the Indian government and related public health agencies must identify "heat hot-spots" through appropriate tracking and modelling of meteorological data, and promote the timely development and implementation of local heat action plans, with strategic inter-agency coordination and a response that targets the most vulnerable groups. Urging a review of existing occupational health standards, labour laws and regulations for worker safety in relation to climatic conditions, sector by sector, the report also asked India to reduce carbon emissions and air pollution, particularly from coal, for the sake of public health. Globally, 157 million more vulnerable people were subjected to heatwaves last year than in 2000, and 18 million more than in 2016. China alone lost 21 billion hours, the equivalent of a year's work for 1.4 per cent of its working population. Rising ambient temperatures are placing vulnerable populations at increased risk across all regions of the world. Heat greatly exacerbates urban air pollution, with 97 per cent of cities in low- and middle-income countries not meeting WHO air quality guidelines. Heat stress, an early and severe effect of climate change, is already commonplace, and health systems are ill-equipped to cope. (IANS)
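The comparisons quoted above can be checked directly from the report's headline numbers. This is just a back-of-the-envelope sketch using the figures as given in the article (the underlying Lancet values are rounded):

```python
# Sanity-check the heatwave labour-loss comparisons quoted in the article.
# All inputs are 2017 man-hour figures taken from the text above.
india_hours = 75e9    # India
china_hours = 21e9    # China
global_hours = 153e9  # worldwide total

# "almost four times that of China"
india_vs_china = india_hours / china_hours
# "a little less than half of the 153 billion man-hours lost globally"
india_share = india_hours / global_hours

print(f"India vs China: {india_vs_china:.2f}x")              # ~3.57x
print(f"India's share of global losses: {india_share:.0%}")  # ~49%
```

Both ratios are consistent with the article's wording: roughly 3.6 times China's losses, and just under half of the global total.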
by Dan Slayback, NASA Research Scientist aboard the SSV Robert C. Seamans / KINGDOM OF TONGA / What a week! Having just finished an expedition to Earth's newest landmass, Hunga Tonga-Hunga Ha'apai (HTHH) in the Kingdom of Tonga, a few days ago, I thought I'd write down a few thoughts on the trip. Shortly after the volcanic eruption that constructed this new island began in December 2014, we were alerted at NASA's Goddard Space Flight Center in Greenbelt, Maryland, and began collecting relevant satellite imagery. Closely following this over the next several months, we observed rapid erosion of the southern coast due to oceanic wave action, which at one point breached the crater wall and opened the crater lake to the sea. Based on observations to that point, we expected a relatively rapid and possibly complete disappearance of the new island, perhaps within months or at most a few years. But instead, the island has held on! In mid-2018, with the island just over three and a half years old, I was extremely fortunate to be invited to join a leg of the Sea Education Association's SEA Semester/SPICE (Sustainability in Polynesian Island Cultures and Ecosystems) program cruise through the southwest Pacific, which passes conveniently close to HTHH. That exploratory visit, one year ago, was extremely valuable in letting us get our feet wet (figuratively and literally) in understanding the island system from the ground, instead of solely from a satellite vantage point hundreds of miles up. We made many useful observations, collected some good data, and gained a more practical, human-scale understanding of the topography of the place (for instance, that the adjacent pre-existing islands, with their rocky shorelines, are almost fortress-like in their inaccessibility). We also saw things not visible from space, such as the hundreds of nesting sooty terns, and details of the emergent vegetation.
My return this year was to extend and improve the observations we made last year, and to lay the groundwork for continued and new observations into the future. A significant advantage of traveling with SEA is the small army of 26 energetic undergraduates on board the ship, willing and able to help accomplish the wide variety of tasks we set for ourselves; without their help, much of what we accomplished would simply not have been feasible. The core goal of these field expeditions is to improve our understanding of the island's brief evolutionary history and likely future. The island was formed by a surtseyan eruption, a relatively modest explosive eruption (compared to, say, Mt St Helens or Mt Pinatubo) occurring in shallow water. Such eruptions are relatively common along the active Tonga trench (just over the past few days here, an eruption has been reported further north in Tonga, sending plumes over 15,000 feet into the sky, and a magnitude 5.2 earthquake was reported to the east). But it is less common for such eruptions to construct stable landmasses that survive for more than a few months. Over the past century, only two other surtseyan events have resulted in lasting edifices: Surtsey island in Iceland (erupted in the mid-1960s; the type event), and Capelinhos on Faial in the Azores (mid/late 1950s). In Tonga, there are several examples of such eruptions forming short-lived islands over the past century, the most recent erupting from the same submarine caldera as HTHH in 2009, only 1-2 km from the current cone; it washed away within half a year or so. The current cone may be persisting because a larger volume of material was ejected (giving it more time to stabilize before oceanic wave action and pluvial (rain-caused) erosion wear it away), or perhaps because its position between the pre-existing islands has provided a level of protection against oceanic wave erosion.
In any case, the appearance of a new landmass (approximately 190 hectares, or 475 acres, in size) has presented a unique and rare opportunity to study a rapidly evolving landscape from space, while observable change is occurring over relatively short periods of time (months to years). One key to understanding its erosional past and future is better estimating where erosion is occurring and at what rates, and isolating pluvial gully erosion of the flanks from oceanic wave abrasion of the sea-cliffs. Back home at Goddard Space Flight Center in Maryland, we have been using high-resolution stereo satellite imagery to provide one estimate of this, but the extreme relief (gullies and canyons with sheer walls up to 30 meters high) is difficult to resolve accurately with standard space-based stereo-pair imagery. Thus, we deployed small commercial drones during our field visit to map the entire island at more than ten times the resolution of even the best commercial satellites. With more time and cooperative weather, we could have flown lower and achieved even finer resolution, but we did not want to risk flying the drone to a watery grave in questionable weather. The drone imagery will be processed using structure-from-motion (SfM) techniques, which are better able to resolve the high-relief topography than simple stereo pairs. We collected such imagery last year as well, so when processing and analysis are complete, we will have useful estimates of the quantity of erosion occurring in different regions, and from different processes (rainfall vs oceanic wave abrasion). As we also installed a precipitation gauge on this expedition, in the future we will be able to quantitatively model observed erosion as a function of rainfall amounts and rates.
The other key question about the island's future is whether a hydrochemical process termed palagonitization is, in the presence of heat and water, cementing the layers of ash into a much more durable substance, termed palagonite. If the core of the cone is slowly cooking into palagonite, it will be much more likely to resist erosional forces for many decades or longer. If this process is not occurring, the observed erosional forces may reduce the island to little more than a shoal within a couple of decades. During our visits, we have collected small fragments of palagonite-looking minerals (lab analysis is needed to confirm), and areas exposed along the southern cliffs (where the rate of oceanic wave erosion is significant) visibly resemble exposed palagonitized zones on Capelinhos and Surtsey. A key finding from this expedition was areas of substantial subsurface heat, detected along the shore of the crater lake at depths from the surface to less than a meter. We had hoped to find cracks venting hot gases in places, and brought along an infrared camera to help detect them, but in the end a student literally stumbled across this subsurface heat. While handling a small raft (deployed for sonar analysis of the crater lake bottom), her legs plunged through the soft sediment at the edge of the lake to a depth of a few feet, finding unusually warm pockets. We confirmed temperatures of 100-130°F (38-54°C) in several zones around the lake, simply by pulling up the sediment by hand. This suggests residual heat is circulating near the surface, and therefore may well be doing so within the core of the cone, establishing a critical condition for the formation of palagonite. Along with helping to answer these key questions, our visit facilitated exploration of other important facets of the island's evolution, including bathymetric surveys of the crater lake and coastal shallows, surveys of the flora and fauna, and surveys and collection of accumulated garbage.
One change that I found particularly striking from one year ago was the development of the vegetation establishing itself on the new land. In 2018, there were three primary patches of vegetation: two in depositional areas to the northwest and southwest of the cone (near the pre-existing Hunga Ha'apai island), and one on the western flank of the cone itself. Last year, the northwest patch was dominated by a near-monoculture of beach morning glory, but this year it hosted a much more diverse assemblage of plants (including morning glory, but no longer dominated by it). Conversely, the patch on the western flank of the cone appeared less diverse than last year. However, last year it hosted a boisterous colony of nesting sooty terns, while this year there were no birds present. This highlights another major change: the distribution of bird life on the island system. Last year we found nesting sooty terns in two large aggregations, likely numbering 1,000 birds or more in total, in the center and west of the system. This year, however, those areas were entirely unused by the terns, while a smaller nesting colony was found far to the east, abutting the pre-existing Hunga Tonga edifice. We also observed rats and owls, and thus suspect there may be more complex ecological interactions at play here (the terns nest on the ground). However, other bird species were more prevalent, including species not seen last year such as red-footed boobies, tropicbirds, and a good number of petrels and shearwaters. We also saw a much larger number of frigatebirds (both 'lesser' and 'greater' species) than the few observed last year. Although we had a dedicated observer to survey birds (and plants), which helped substantially with bird identification (and relieved the self-imposed pressure, as an amateur birder, to do this myself), it was still obvious to me that more species, and more individuals of each species (except the sooty terns), were present this year.
The bird life was particularly active around Hunga Tonga, which looks like something cut from an exotic island-adventure film: mostly sheer cliffs rise to over 400 feet, facing the black volcanic cone (which you could readily imagine emitting a column of smoke), and are draped in thick tropical greenery. At Hunga Tonga's flat top, which appears entirely inaccessible without climbing gear, tropical trees and palms sway in the wind, while scores of brown boobies, noddies, frigatebirds, and tropicbirds soar and call. With the overhead avian cacophony providing the soundtrack, the scene of a lost tropical paradise juxtaposed against the new, somber and foreboding cone of the crater suggests a primeval landscape from a different age. – Dan Slayback, Research Scientist with Science Systems and Applications, Inc., at NASA's Goddard Space Flight Center
What is the difference between deflation and a depression, and where is the United States economy going? Professor Richard Wolff joins Thom Hartmann to discuss the difference. Deflation is the opposite of inflation: a general decline in prices, usually with wages and salaries going down as well. A depression does not necessarily go together with deflation, but the two can occur together: if people in town don't have enough money to go to the store, the storekeepers become desperate that they're never going to move stock off the shelves, so, to keep business going, they begin to drop prices. Then you have a back-and-forth between falling prices and falling wages, each feeding the other.
Posted Saturday, March 1, 2014 at 8:14 AM Because the healthiest foods on the planet are mostly carbs - fruits, vegetables, whole grains, and legumes. Fiber is a carbohydrate! This is not a low-carb diet, quite the opposite. We're shifting people away from animal products to plants. The result usually is more carbohydrate intake and less fat intake. This is how the healthiest people eat and live disease-free lives. Susan Levin, MS, RD PCRM Director of Nutrition Education
Believe it or not, the term United Nations was actually coined by Franklin Roosevelt and Winston Churchill while the British prime minister was sitting in a bathtub. (Churchill had the habit of thinking and writing while in the tub.) Churchill was in Washington over the New Year's holiday of 1941-42, and the two men were struggling with what to officially call the group of nations that was about to sign the Atlantic Charter. Churchill would write in Volume III of his The Second World War:

The title of "United Nations" was substituted by the President for that of "Associated Powers." I thought this a great improvement. I showed my friend the lines from Byron's Childe Harold:

Here, where the sword United Nations drew,
Our countrymen were warring on that day!
And this is much—and all—which will not pass away.

The President was wheeled in to me on the morning of January 1. I got out of my bath, and agreed to the draft.

Copyright 1997-2015, by David Wilton
"Happiness lies in the joy of achievement and the thrill of creative effort." Franklin D. Roosevelt By: Proserpina, 8:29 PM GMT on June 16, 2012 Papyrus was useful and certainly easier to use than the earlier writing surfaces, but the plant was grown only in the Nile region, and its availability was up to the papyrus producers of that area. When Alexandria temporarily cut off the export of papyrus, a new, easily available writing surface had to be found. In the 2nd century BC a great library was set up in Pergamon (Bergama in modern Turkey) which rivaled the famous Library in Alexandria. Not wanting competition from another library, Alexandria stopped the export of papyrus. As papyrus became scarce, Pergamon developed a new writing surface made from the skins of animals. Pergamon became a production center for this new writing surface, called parchment. Animal pelts were easily available for making parchment; moreover, parchment had other advantages over papyrus. It was much more durable and withstood hard wear and usage. In addition, parchment was not easily destroyed by fire. Parchment largely replaced papyrus in the 4th century and remained popular until the late Middle Ages, when paper replaced parchment. Of course, leather had been used as a writing material for a long time before parchment. The first mention of Egyptian documents written on leather goes back to the Fourth Dynasty, but the oldest known animal-skin scroll is probably from the Egyptian 12th Dynasty. But even though both materials are derived from animal skins, leather is not truly parchment: the treatment processes are different, and the resulting surfaces are very different. Parchment is not the only writing surface made from animal skins; vellum is also made from animal skins. Although the two words are often used interchangeably, there is a difference. Parchment is generally made from the split skins of calves, sheep, and goats.
Vellum is fine parchment made from calf skin. I will use parchment and vellum interchangeably. Etymology of the words parchment and vellum: Parchment: 'From Middle English parchement, from Old French parchemin, via Latin pergamīna, from Ancient Greek Περγαμηνός (Pergamēnos, "of Pergamon"), named for the ancient city of Pergamon (modern Bergama) in Asia Minor, where it was invented as an expensive alternative to papyrus.' (Wiktionary) Vellum: Vellum is derived from the Latin word 'vitulinum', meaning "made from calf", leading to Old French 'Vélin' ("calfskin"). (Wikipedia) Below is a brief description of the making of parchment: As stated above, parchment was made from animal skin. After the pelt was flayed, the skin was soaked in water to remove grime and blood. Then it was placed in a wooden or stone vat containing a dehairing solution that included lime. While in the vat it was stirred several times a day. The dehairing took eight or more days. Once dehaired, the skin was soaked in water to make it workable, then placed on a stretching frame. Both sides of the skin were scraped with a special curved knife to remove the last of the hair and to bring the skin to the right thickness. To smooth the surface and to make the ink penetrate more deeply, the parchment was rubbed with pumice powder while still wet and on the frame. Once dry and taken off the frame, the skins kept their form. How was parchment used? The following are a few examples: "Parchment, like leather, was used to make scrolls; however, parchment lent itself best to the codex form of book. The Romans used parchment tablets and possibly small "notebooks" for writing drafts and notes. To protect fragile papyrus scrolls while being handled, the Romans made covers out of parchment. These covers were called paenula and were often brightly colored. In addition, a small parchment strip, called a titulus or index, was attached to each scroll.
These strips carried the title of the work and were also brightly colored." (http://papyri.tripod.com/vellum/vellum.html) Another early use of parchment was to make portolan charts. Portolan charts are navigational maps based on compass directions and distances estimated by pilots at sea. They were first made in the 13th century in Italy, and later in Spain. Below is an example of a portolan chart made in 1533 by Jacobo Russo of Messina. Most of the finer medieval manuscripts, illuminated or not, were written on parchment, as were some Buddhist texts. All Sifrei Torah were (and still are) written on kosher klaf, or parchment.
Page from an early 15th-century French Book of Hours depicting St. George slaying a dragon.
A page from a Torah.
About a quarter of the 180-copy edition of Gutenberg's Bible, printed in 1455, was printed on parchment. Parchment was used for paintings, especially if they needed to be sent long distances. Parchment was also used for drawings and watercolors.
Music on parchment.
Pen drawing on parchment, Middle Ages.
Watercolor on parchment.
"Parchment has traditionally been used instead of paper for important documents such as religious texts, public laws, indentures, and land records as it has always been considered a strong and stable material. The five pages of the U.S. Constitution as well as the Declaration of Independence, the Bill of Rights, and the Articles of Confederation are written on parchment." (Quote from: http://www.archives.gov/preservation/formats/paper-vellum.html) John Hancock, the President of the Congress, was the first to sign the sheet of parchment measuring 24¼ by 29¾ inches. For interesting information about the history of the Declaration of Independence and its engrossing on parchment, please go to: http://www.archives.gov/exhibits/charters/declaration_history.html Modern usage of parchment/vellum: British Acts of Parliament are still printed on parchment, as are those of the Republic of Ireland.
It is also still used for Jewish scrolls of the Torah. Luxury bookbinding, memorial books, and documents in calligraphy are a few more examples of the modern use of parchment. In some universities the word parchment is used to refer to the certificate presented at graduation ceremonies (even though the modern document is printed on paper). Some universities, for doctoral graduations, give the option of having the certificates written by a calligrapher on parchment. The University of Notre Dame still uses animal parchment for its diplomas, as does the University of Glasgow.
Tempera and gold on parchment by Niccolo da Bologna
Quote from William Shakespeare’s “Hamlet”, Act 5, Scene 1:
Hamlet: Is not parchment made of sheepskin?
Horatio: Aye, my lord, and calves’ skins too.
30 November 2012
New evidence for water and organics on Mercury
by Will Parker
Scientists say data transmitted by the Messenger spacecraft provide compelling support for the notion that Mercury harbors abundant water ice and other frozen volatile materials in its permanently shadowed polar craters. Given its proximity to the Sun, the existence of frozen water is counter-intuitive, but the tilt of Mercury's rotational axis is almost zero, meaning there are areas at the planet's poles that never see sunlight. The new evidence is provided by three independent papers published in Science Express. The papers looked at:
- Measurements of excess hydrogen at Mercury's north pole obtained with Messenger's Neutron Spectrometer,
- Measurements of the reflectance of Mercury's polar deposits at near-infrared wavelengths, and
- The first detailed models of the surface and near-surface temperatures of Mercury's north polar regions that utilize the actual topography of Mercury's surface.
The new data strongly indicate that water ice is the major constituent of Mercury's north polar deposits. While that ice is exposed at the surface in the coldest of those deposits, the scientists say the ice appears to be buried beneath an unusually dark material across most of the deposits (areas where temperatures are a bit too warm for ice to be stable at the surface itself). "The neutron data indicate that Mercury's radar-bright polar deposits contain, on average, a hydrogen-rich layer more than tens of centimeters thick beneath a surficial layer 10 to 20 centimeters thick that is less rich in hydrogen," writes David Lawrence, based at The Johns Hopkins University Applied Physics Laboratory and the lead author of one of the papers. "The buried layer has a hydrogen content consistent with nearly pure water ice." Data from Messenger's Mercury Laser Altimeter (MLA) corroborate the radar results, says Gregory Neumann of the NASA Goddard Space Flight Center.
In a second paper, Neumann and his colleagues report that the first MLA measurements of the shadowed north polar regions reveal irregular dark and bright deposits at near-infrared wavelengths near Mercury's north pole. "These reflectance anomalies are concentrated on poleward-facing slopes and are spatially collocated with areas of high radar backscatter postulated to be the result of near-surface water ice," Neumann explains. "Correlation of observed reflectance with modeled temperatures indicates that the optically bright regions are consistent with surface water ice." The MLA also recorded dark patches with diminished reflectance, consistent with the theory that the ice in those areas is covered by a thermally insulating layer. Neumann suggests that impacts of comets or volatile-rich asteroids could have provided both the dark and bright deposits, a finding corroborated in a third paper led by David Paige of the University of California, Los Angeles. Paige and his colleagues provided the first detailed models of the surface and near-surface temperatures of Mercury's north polar regions that utilize the actual topography of Mercury's surface measured by the MLA. The measurements "show that the spatial distribution of regions of high radar backscatter is well matched by the predicted distribution of thermally stable water ice," he reports. According to Paige, the dark material is likely a mix of complex organic compounds delivered to Mercury by the impacts of comets and volatile-rich asteroids, the same objects that likely delivered water to Mercury. The organic material may have been darkened further by exposure to the harsh radiation at Mercury's surface, even in permanently shadowed areas. While the new findings provide compelling evidence for water on Mercury, the dark insulating material raises new questions. "Do the dark materials in the polar deposits consist mostly of organic compounds?" ponders Sean Solomon, principal investigator of the Messenger mission.
"What kind of chemical reactions has that material experienced? Are there any regions on or within Mercury that might have both liquid water and organic compounds? Only with the continued exploration of Mercury can we hope to make progress on these new questions."
What Causes Back Pain?
Back pain is the second most common cause of missing work (only after the common cold) and contributes to about 93 million lost workdays and $5 billion in health care costs every year! An astounding eight out of ten people will have back pain at some point in their lives, and one in four Americans currently experience back pain. Back pain that lasts more than three months is considered chronic, a type of pain Harvard, Stanford and McGill neuroscientists, who study brain function, say impairs more than your physical body. Chronic pain actually alters brain function! This leads to surprising effects, such as impaired attention, short-term memory, judgement and social skills! Additionally, Harvard Medical Center reports that chronic pain contributes to mood disorders, including depression and anxiety. Other problems resulting from chronic pain include sleeping difficulties, loss of coping skills, and damaged relationships with friends, family and significant others. Chronic pain is becoming more and more common in people with office jobs. In fact, people who work in offices are specifically more likely to suffer from chronic back pain than people who have a physically demanding job! How your body is positioned throughout the day is a major contributor to back and neck pain. The three most common causes of back pain are:
1. Leaning forward in your chair
2. Holding your telephone between your ear and your shoulder
3. Lack of movement during the work day
Here are a variety of tips anyone can use to optimize their workstation to reduce back pain!
Customize Your Chair and Desk!
Dr. Scott Donkin, founder of Occupational Health and Wellness Solutions, consults with workplaces on safety, ergonomic and health issues, and states that the act of leaning forward in your chair crushes the disks in your lower back and puts strain on your neck and shoulders. San Francisco State University’s Dr. Erik Peper recommends these tips to help yourself naturally lean back as you work.
Optimize Your Phone Calls
Many people tuck their phone between their head and shoulder to free up their hands while talking, causing intense strain on their neck and shoulders. Try the following alternatives to avoid tucking your phone during your conversations.
Get Up and Move!
People are made to move! Sitting (or even standing) in one position for an 8-hour workday can wreak havoc on your body! To learn more about sit-stand workstations, check out Ergotron’s website. For more resources on creating a healthy workstation (including links to programs reminding you to take micro breaks), check out OSHA’s recommendations on comfortable sitting at work! Dr. Peper also has some great resources on his website, including clocks reminding you when to take micro breaks at work.
Scientists have long known that veins of gold are formed by mineral deposition from hot fluids flowing through cracks deep in Earth’s crust. But a study published today in Nature Geoscience has found that the process can occur almost instantaneously — possibly within a few tenths of a second. The process takes place along 'fault jogs' — sideways zigzag cracks that connect the main fault lines in rock, says first author Dion Weatherley, a seismologist at the University of Queensland in Brisbane, Australia. When an earthquake hits, the sides of the main fault lines slip along the direction of the fault, rubbing against each other. But the fault jogs simply open up. Weatherley and his co-author, geochemist Richard Henley at the Australian National University in Canberra, wondered what happens to fluids circulating through these fault jogs at the time of the earthquake. What their calculations revealed was stunning: a rapid depressurization that sees the normal high-pressure conditions deep within Earth drop to pressures close to those we experience at the surface. For example, a magnitude-4 earthquake at a depth of 11 kilometers would cause the pressure in a suddenly opening fault jog to drop from 290 megapascals (MPa) to 0.2 MPa. (By comparison, air pressure at sea level is 0.1 MPa.) “So you’re looking at a 1,000-fold reduction in pressure,” Weatherley says. Flash in the pan When mineral-laden water at around 390 °C is subjected to that kind of pressure drop, Weatherley says, the liquid rapidly vaporizes and the minerals in the now-supersaturated water crystallize almost instantly — a process that engineers call flash vaporization or flash deposition. The effect, he says, “is sufficiently large that quartz and any of its associated minerals and metals will fall out of solution”. Eventually, more fluid percolates out of the surrounding rocks into the gap, restoring the initial pressure. 
But that doesn’t occur immediately, and so in the interim a single earthquake can produce an instant (albeit tiny) gold vein. Big earthquakes will produce bigger pressure drops, but for gold-vein formation, that seems to be overkill. More interesting, Weatherley and Henley found, is that even small earthquakes produce surprisingly big pressure drops along fault jogs. “We went all the way to magnitude –2,” Weatherley says — an earthquake so small, he adds, that it involves a slip of only about 130 micrometers along a mere 90 centimeters of the fault zone. “You still get a pressure drop of 50%,” he notes. That, Weatherley adds, might be one of the reasons that the rocks in gold-bearing quartz deposits are often marbled with a spider web of tiny gold veins. “You [can] have thousands to hundreds of thousands of small earthquakes per year in a single fault system,” he says. “Over the course of hundreds of thousands of years, you have the potential to precipitate very large quantities of gold. Small bits add up.” Weatherley says that prospectors might be able to use remote sensing techniques to find new gold deposits in deeply buried rocks in which fault jogs are common. “Fault systems with lots of jogs can be places where gold can be distributed,” he explains. But Taka’aki Taira, a seismologist at the University of California, Berkeley, thinks that the finding might have even more scientific value. That’s because, in addition to showing how quartz deposits might form in fault jogs, the study reveals how fluid pressure in the jogs rebounds to its original level — something that could affect how much the ground moves after the initial earthquake. “As far as I know, we do not yet incorporate fluid-pressure variations into estimates of aftershock probabilities,” Taira says. “Integrating this could improve earthquake forecasting.”
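The pressure figures quoted above are easy to sanity-check. The short sketch below (values taken from the article; the variable names are mine) reproduces the roughly thousand-fold drop Weatherley describes:

```python
# Sanity check of the pressure-drop figures from the article:
# a fault jog at 11 km depth goes from 290 MPa before the quake
# to 0.2 MPa when it suddenly opens, versus 0.1 MPa air pressure
# at sea level.

P_DEEP_MPA = 290.0     # pressure in the fault jog before the quake, MPa
P_OPEN_MPA = 0.2       # pressure just after the jog opens, MPa
P_SEA_LEVEL_MPA = 0.1  # atmospheric pressure at sea level, MPa

ratio = P_DEEP_MPA / P_OPEN_MPA
print(f"Pressure drops by a factor of about {ratio:.0f}")
print(f"That leaves roughly {P_OPEN_MPA / P_SEA_LEVEL_MPA:.0f}x sea-level air pressure")
```

Strictly, 290 ÷ 0.2 is about 1,450; the article's "1,000-fold" is an order-of-magnitude figure.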
This year’s theme for National Diabetes Month is “Stay one step ahead of diabetic eye disease.” Health professionals and community leaders are asked to encourage people to take steps to protect their vision. “As a vision care provider, it is my duty to educate patients and their families on how to protect and improve their vision,” says Dr. Stewart Shofner.
How Does Diabetes Affect Vision?
It’s a common question, as some may not relate blood sugar levels to vision problems. Diabetes can cause changes in the blood vessels of the retina, such as swelling and leakage or the creation of new blood vessels. It’s important to know that diabetic eye disease often has no early warning signs. Finding retinal disorders as early as possible is critical to potentially preventing serious disease progression and even vision loss. That is why it’s imperative that those who are pre-diabetic or have diabetes get a comprehensive, dilated eye exam at least once a year.
How is Diabetic Eye Disease Detected?
- Digital retinal imaging uses high-resolution imaging systems to take pictures of the inside of your eye. This helps vision care professionals assess the health of your retina and helps them to detect and manage such eye and health conditions as glaucoma, diabetes, and macular degeneration.
In partnership with the National Eye Health Education Program (NEHEP), Shofner Vision Center will continue to spread awareness through our newsletter and social media outlets about eye health and help promote the following message among people with diabetes: People with diabetes need to get a comprehensive eye exam at least once a year and keep their health on TRACK to prevent vision loss.
Scheduling a comprehensive dilated eye exam at least once a year is a vital part of that care, considering potential eye complications such as cataracts, macular swelling, and optic nerve damage. Shofner Vision Center will provide consistent and mindful care to help diabetic patients keep their vision and treat impairment. Contact your local vision care center professional or if you are in the Nashville area, contact Dr. Shofner at (615) 340-4733. Share and use these hash tags below to help spread this important awareness. #visionontrack, #DiabetesAwarenessMonth, #diabeticeyediseaseawarenessmonth, #diabetes.
Key Expressions When Writing Key Selection Criteria
Usually, all the pointers are there, but you need to be able to quickly interpret language. For example, you must be able to quickly discern the differences between ‘demonstrated’, ‘awareness of’, ‘understanding of’, ‘ability to’ and ‘proven record’.
Experience in
Often used in reference to areas of specialisation within a range of different industry types (for example accounting, human resources, or administration). For this descriptor, you must have actually done the work as opposed to having observed it. For example, ‘experience in analysing data’ means you must demonstrably show that you have analysed data in another role or position.
Proven record of / Demonstrated
Here, you must be able to substantiate any claims to the experience or skill, with positive outcomes that have been documented. For example, ‘a proven record of planning, implementing and managing projects’ or ‘demonstrated management experience in a multi-disciplinary environment’ means that you have to document what you have specifically done and achieved in these areas.
Knowledge of, understanding of, awareness of
These expressions are often used in reference to policies, practices or the specific responsibilities of a work area. Subtle differences distinguish these terms. ‘Awareness’ involves the least amount of familiarity with a subject. For example, you are aware that a concept or policy exists but are not necessarily familiar with the details or understand the significance of the subject. ‘Knowledge of’ refers to familiarity gained from actual experience or from learning/study. For example, ‘knowledge of recent legislative changes affecting the higher education sector’. ‘Understanding’ is more than knowledge.
In this instance, you may have knowledge of a policy in so far as you have read it, but understanding requires that you know why the policy was developed, who it is relevant to, why it is important and what the implications are for related policies.
Ability to, aptitude for, the capacity to
These words suggest degrees of ability. ‘Aptitude’ suggests suitability or fitness for a task, or a talent or flair for a particular skill or quality. ‘Capacity’ generally means that you will be qualified to perform a particular task; however, you are not expected to have actual experience. For example, ‘capacity to seek and attract research funding’. You would need to demonstrate that you have the necessary skills or qualities and that these could be transferred to the position. ‘Ability’ means having the skills, knowledge or competency to do the task required.
Posted on behalf of Sanjay Kumar. With the successful liftoff of a Geo-Synchronous Launch Vehicle (GSLV) D5 yesterday, India became the sixth nation to possess cryogenic propulsion rocket technology. The 415-tonne rocket successfully injected a 2-tonne communications satellite into the intended geosynchronous orbit, the Indian Space Research Organization (ISRO) has announced. Cryogenic engines burn liquid oxygen and hydrogen, which liquefy at ‒183 °C and ‒253 °C respectively, and provide more thrust per kilogram of propellant, compared to room-temperature liquid fuels such as hydrazine or to solid fuels. The only countries that had the technology so far were the US, Russia, France, Japan and China. Cryogenic technology is required for putting heavy payloads into orbit, and its lack had been ISRO’s proverbial Achilles’ heel for more than two decades, denting its capabilities. In January 1991 India signed an agreement with the erstwhile Soviet Union to acquire cryogenic engines and also transfer technology. With the break-up of the Soviet Union later that year, and under pressure from the US, which alleged the sale would violate the Missile Technology Control Regime, the Russian government reneged on its promise of technology transfer while agreeing to provide seven cryogenic engines. India was then forced to develop its own cryogenic technology, but its journey has been quite turbulent. Since its first experimental launch in 2001, the GSLV has faced four failures in seven launches. In April 2010, a GSLV fitted with an indigenously built cryogenic upper stage and carrying an experimental communications satellite went off course and fell into the Indian Ocean. In August 2013, a scheduled launch was abruptly cancelled just a few hours before lift-off, when a leak was detected in the hydrazine fuel system of the rocket’s second stage. India’s cryogenic technology programme also took center stage in a high-profile scandal when leading scientists S. Nambi Narayan and D.
Sasikumaran were arrested in 1994 on espionage charges. Nambi Narayan was later exonerated. With GSLV D5’s success, India will now be able to launch its heavy satellites at a fraction of the price other space agencies charge for launches. The technology will also come in handy for the country’s lunar mission Chandrayaan-2, as well as for future manned space flights.
Based on extensive research, and grounded in everyday classroom practice, the authors of this book explore important issues surrounding play in the early years curriculum. The book presents children’s views on, and responses to, their role-play environment, alongside examples of good classroom practice, and addresses vital questions about play in schools. Critically, the authors present the child’s perspective on play in schools throughout, and argue firmly against a formal, inflexible learning environment for young children. This book will be fascinating to all students on primary education undergraduate courses and early childhood studies courses. Researchers and course leaders will also find this book a ground-breaking read.
'As well as being useful to students on early childhood studies courses, the book provides an essential reference source for all teachers keen to promote the value of role-play for young children by helping them to create a coherent and compelling case for play within the early childhood curriculum.' - Early Years Update
'The book … offers a useful set of references for anyone wanting to investigate the issues further… and is very topical in its discussion of child-initiated/adult intensive activities.' - Early Years
'They integrate the lessons of their research well and offer sound advice… the questions they've raised over the pages of this wonderful book linger in the reader's mind.' - American Journal of Play
Contents:
Introduction
1. Four-year-olds in school: play, policy and pedagogy
2. Perspectives on role play in early childhood
3. Researching children’s perspectives: a multi-method approach
4. Teachers’ perspectives on role play
5. Exploring role play from the children’s perspective
6. Playing with space, place and gender
7. Rethinking role play in reception classes
Coyote -- An Adaptable Pioneer By Clifford Brown Now that coyotes have taken up residence in the Mountain State, discover the facts about this wily mammal and what can be done to control their population. The Twilight Zone By Jeff Hajenga West Virginia caves are home to an intriguing variety of animals, some of which never see the light of day. Holly -- Brightening Winter’s Days By Nanci Bross-Fregonara Five species of native holly trees and shrubs provide beauty to our state as well as food and cover for wildlife. Insects That Chill in Winter By Emily Grafton The multitude of six-legged critters you see in summer spend the winter in various places and life stages. Natural Heritage Update Discover what the Wildlife Diversity staff is discovering about the state’s wild animals and plants. Brookies, Browns, Rainbows -- Why Don’t They Interbreed? By Don Phares Although they may live in the same streams, each species of trout has its unique spawning time and nest location. Field Trip: Coopers Rock State Forest Wildlife Diversity Notebook: Osprey
What can you do with a History Major?
What can you do with a history major? As a history major at UW Oshkosh, you will not only learn a great deal about the histories of particular places and times, but you will also develop some very broad skills, including clear, concise writing, research skills, analytical reading and critical thinking. So, to answer the question "What can you do with a history major?" think of what you can do with highly developed writing, reading, research and thinking skills. There are a lot of possibilities. Career Services offers a one-credit class in skills for getting a job called "Professional Career Skills in Social Science" (IDS 209) and one called "Professional Career Skills" (Prof. Counseling 202) in which history majors can hone skills that help them to find jobs. These are offered every semester. Check Titan Web. Also, the History Department offers a 300-level course entitled "Public History" (HIST 339) which focuses on the work of presenting history to public audiences. This could help students think about careers in museums, national parks, the entertainment industry and other venues for public history. Join the History Club to hear about and attend History Club events. One or two events a year are focused on careers for history majors and what alumni of the UWO History Department have been doing. Also, join LinkedIn and our LinkedIn group "UW Oshkosh History Department Students, Alumni, and Faculty." This is a nice way to see what alumni are doing and to occasionally hear about jobs and news items to help with your job search. Another way to see what alumni are doing is to check out our alumni page, where some alumni are featured. They got jobs with a history major in a wide array of fields. For careers and master's programs in the field of public history, check out the National Council on Public History's site. Their page on careers in public history is here and on master's programs is here.
Another helpful website for thinking about what job you might pursue is a blog authored by a history professor from Messiah College in Pennsylvania. Here is his series of posts entitled "So What CAN you do with a History Major?" The history department has a small library of books on the subject, including: Lambert's & DeGalan's Great Jobs For History Majors (2008), Facts on File's Top Careers for History Graduates (2004), Princeton Review's What to Do With Your History or Political Science Degree (2007), and the American Historical Association's Careers for Students of History. These are located in the History Department Conference Room (Sage 3631). See a member of the department if the room is closed.
A domain is a user-friendly, distinctive website address that you can acquire for your website. It corresponds to a numeric IP address, which is what actually identifies sites and devices on the Internet, but it is far easier to remember and share. Every domain consists of two parts - the name that you choose plus the extension. For example, in domain.com, “domain” is known as the Second-Level Domain (SLD) and is the part you have the option to pick, whereas “.com” is the extension, which is referred to as the Top-Level Domain (TLD). You can buy a new domain through any accredited registrar or transfer an existing one between registrars if the extension allows this option. Such a transfer does not change the ownership of the domain name; the only thing that changes is the place where you manage it. Most domain extensions are open for registration by any entity, yet numerous country-code extensions have particular prerequisites, such as local presence or an active company registration.
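The two-part structure described above can be illustrated with a minimal sketch. The function name is mine, and it assumes the simple name.tld form used in the example; real-world domains can carry extra labels, such as subdomains or country-code second levels like .co.uk:

```python
# Minimal sketch: split a simple "name.tld" domain into its
# second-level domain (SLD) and top-level domain (TLD).
# Assumes exactly one dot; multi-label domains need a public
# suffix list to split correctly.

def split_domain(domain: str) -> tuple[str, str]:
    """Return (second_level_domain, top_level_domain)."""
    sld, _, tld = domain.rpartition(".")
    return sld, tld

print(split_domain("domain.com"))  # → ('domain', 'com')
```

Splitting on the last dot works for the plain two-part case; anything more (e.g. www.domain.co.uk) falls outside this sketch.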
Staying warm is one of those essentials for human life — like food, water, clean air. In the northern latitudes, staying warm requires good shelter. To rely only minimally or not at all on outside utilities (e.g., for electric heat) requires smartly designed shelter. In Sooke, B.C. on Vancouver Island, we met a family who are building a cob house with geothermal space heating that stores summer heat to warm the house all through the winter. Mary Coll, Steve Unger and their delightful children Chloe and Finn live at InishOge Farm (Gaelic for “wee island”). They’re breathing life back into an early settler’s farm — complete with chickens, turkeys, pigs, orchards, and with visions of further permaculture-inspired projects. They’re building their house where the older farmhouse stood. They lovingly dismantled it and are re-using its materials, both to build the tiny house they now live in (12×14 feet plus sleeping loft), and a cob house under construction. One focus of our video tour was heat and cold in their cob home. On the south side, the sun’s heat is stored in thick cob outer walls. In winter the low sun slants through dual-glazed windows to warm the cob floors and back wall. Under the house is an earth “battery” made of crushed rock and sand. Rooftop solar thermal panels send hot water through pipes interwoven in the rocks. The rocks are warmed over the summer months, holding more and more heat. Come cooler weather, the rocks slowly release their stored heat, keeping the house warm through the winter (a design called annualized geo-solar, or AGS). On the cold north side, super-insulated walls keep the house from losing that warmth during winter, and keep out heat during summer. There’s also a cooler room whose natural refrigeration works like an indoor root cellar. Enjoy watching the video not only for the house tour, but for Steve and Mary’s perspectives on storing their wealth not in banks or the power company, but in their learning and the land.
Plus, Chloe and Finn give us a personal tour of the Harry Potter room under the stairs. Magic! [Inishoge.ca]
- brightness; shininess; luster
- bright or shining attire
Origin of sheen: from sheen (the adjective)
Archaic: of shining beauty; bright
Origin of sheen: ME schene < OE sciene, beautiful, splendid, akin to Ger schön (< IE base *(s)keu-, to observe, heed > hear); sense influenced by association with shine
Dialectal: to shine; gleam
- Glistening brightness; luster: the sheen of old satin in candlelight.
- Splendid attire.
- A glossy surface given to textiles.
Origin of sheen: From Middle English shene, beautiful, from Old English scīene.
- A surname.
Land Stewardship Program The Bruce Trail Conservancy manages thousands of hectares of Escarpment land. The BTC Land Stewardship Program was implemented in order to effectively care for this significant land, and is the largest program of its kind run by a non-government organization in Ontario's history. Two full-time staff are dedicated to the Land Stewardship Program. Under this program, staff develop stewardship plans for each property owned and/or managed by the BTC. Prepared from detailed ecological inventories and background research, these plans identify key stewardship issues for the improvement, maintenance and/or protection of the property's features and are used to guide the management of the property in a manner that is consistent with the mission and values of the BTC. Volunteer Land Stewards Volunteers are a vital component of the Land Stewardship Program. The primary volunteers in this program are the Land Stewards who are the caretakers and eyes of the land. Land Stewards visit their assigned properties at least twice a year, complete annual reports on the conditions of the property, provide input into the stewardship plans, and help to organize and carry out stewardship activities such as tree planting in abandoned fields, garbage removal, and installing signs and fences. The activities of the Land Stewards are overseen by nine volunteer Land Steward Directors.
No country has solved the problem of how to ensure that all of its people have enough safe, nutritious food to eat year round, and the variety of approaches is both bewildering and informative. Australia, for example, has a welfare system that doesn’t make any specific provision for food. But it does exempt certain healthier foods – such as fruit and veg, bread, fresh meat, milk and eggs – from the Goods and Services Tax. That makes them cheaper than they might otherwise be, a sort of thin subsidy. And yet, Australians prefer to spend more to eat an unhealthy diet. They devote almost 60 cents of every dollar they spend on food to unhealthy stuff. What’s going on? Professor Amanda Lee looked at the cost of what Australians actually eat, based on a large survey, compared to the cost of a diet based on the country’s national guide to healthy eating. The results were pretty surprising, so surprising that for a while journals refused to publish. Less of a surprise, perhaps, was that people give the answers they think researchers want to hear: among the poorest communities, fully a quarter of the calories actually consumed are missing from reports, and people say they eat eight times more fruit and veg than they actually do.
- The research paper that prompted our conversation was Testing the price and affordability of healthy and current (unhealthy) diets and the potential impacts of policy change in Australia.
- Another important paper is Are Healthy Foods Really More Expensive? It Depends on How You Measure the Price, from the USDA.
- I gave up trying to find a picture of Australian junk food; it looks just the same as more or less all junk food, except for the Cherry Ripes. The banner photograph is a detail from Lizard Dreaming 2 by Sue Atkins, a descendant of the Boandik People from Adelaide in South Australia. The image took some tracking down, because the original site had been hacked in various horrible ways, and I have not asked for permission.
Marking a burial place reflects a human need for a sacred site of remembrance. Although a grave memorializes the physical remains or spiritual passing of an individual, cemeteries are shared expressions of remembrance. Cemeteries are the outgrowths of the communities that create them. They are focal points for family, religious, or ethnic observances that honor the dead and foster a sense of group identity among the living. As settlement patterns and communities’ attitudes toward death have changed, so have the sacred spaces associated with remembering the dead. Consequently, cemeteries provide an insight into the beliefs and values of past generations. There are almost 250 cemeteries recorded in the Prince George’s County Historic Sites and Districts Plan. These cemeteries reflect a wide range of burial customs and illustrate the development and cultural heritage of the county. Notwithstanding laws that afford them special protection, cemeteries are threatened by abandonment, neglect, and development. Fortunately, there is great interest among local communities, cultural heritage organizations, and residents in protecting the county’s historic cemeteries. This cemetery preservation manual was developed by the Maryland-National Capital Park and Planning Commission (M-NCPPC) in response to requests from local groups and individuals for information on proper cemetery preservation and maintenance procedures. It is hoped that this manual will strengthen local cemetery preservation efforts and generate additional support for the protection of historic cemeteries. The purpose of the Prince George’s County Cemetery Preservation Manual is to help cemetery owners, government agencies, community organizations, descendants, and interested residents preserve and maintain historic cemeteries.
This manual provides an overview of the history of local burial customs and describes common cemetery types found in Prince George’s County. As many cemeteries in the county are threatened by abandonment or neglect, this manual presents steps to aid in the preservation and maintenance of historic cemeteries. The Prince George’s County Cemetery Preservation Manual addresses many basic cemetery preservation questions, including:
- Why are cemeteries important to preserve?
- How can I learn more about the history of a particular cemetery?
- How do I begin a cemetery preservation project?
- What are the recommended techniques for basic cemetery maintenance?
- What are the recommended treatments for sunken, tilted, or broken headstones?
- When should I seek professional assistance?
- Where can I find more information about cemetery preservation?
Cemetery preservation efforts can be enjoyable and rewarding activities that engage communities in local culture and history. The information presented in the Prince George’s County Cemetery Preservation Manual can help groups and individuals preserve these significant parts of the county’s cultural heritage. The manual is not intended to be a comprehensive technical guide to cemetery preservation techniques. Rather, it proposes an approach that organizations and individuals can use to undertake a cemetery preservation effort. It provides information on common cemetery problems and recommendations for basic maintenance and preservation procedures. The handbook also outlines more advanced preservation techniques that should be undertaken by preservation specialists or professional conservators; however, the recommended treatments are briefly described so individuals and organizations can evaluate a conservator’s proposed approach to cemetery preservation.
from The American Heritage® Dictionary of the English Language, 4th Edition
- A peninsula of northern Europe comprising mainland Denmark and northern Germany. The name is usually applied only to the Danish section of the peninsula. The largest naval battle of World War I was fought by British and German fleets off the western coast of Jutland on May 31-June 1, 1916.
from Wiktionary, Creative Commons Attribution/Share-Alike License
- proper n. a peninsula in northern Europe which belongs to Denmark
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. an indecisive naval battle in World War I (1916); fought between the British and German fleets off the northwestern coast of Denmark
- n. peninsula in northern Europe that forms the continental part of Denmark and a northern part of Germany
A few months ago there was the pin-point attack on the Gestapo Headquarters in Jutland, where they had taken over some of the colleges of the University of the town of Aarhus. In 1944 we had 328 cases of railway sabotage in Jutland, on the mainland from north to south, and it has been intensified during the two first months of 1945 so that in January and February alone we had 247 cases of railway sabotage. Of course up in Jutland you find some large estates, but Danish farms generally run from 25 to 50 acres, and in the intensively developed portions you find thrift and energy, and the ground well cultivated, fertilizers being used in great quantities, with the result that that little country which produces per capita its full share of the world's production gets 87 percent of that production from agriculture. Guthini in Jutland, the Usipeti in Westphalia, the Sigambri in the duchy of Berg, were German Cimbrians. My grandparents have swum in the North Sea all their lives, mainly from the coast of Jutland, which is even further towards the North Pole than Norfolk.
Retired vice admiral Sir Reginald Bacon, a protégé of Fisher, the first captain of Dreadnought, and a staunch Jellicoe admirer, wrote a book titled The Jutland Scandal. It consists of a peninsular portion called Jutland, and an extensive archipelago lying east of it. The fishing village of Skagen -- the Skaw -- lies where the northern-most tip of the Danish peninsula known as Jutland bends East and breaks the surging waters of the Kattegat and Skagerak seas. In my "Authoress of the Odyssey" I thought "Jutland" would be a suitable translation, but it has been pointed out to me that Designed by Eva Harlou the Z-House is a 700 sqm house located on the eastern coastline of Jutland, Denmark.
Burke Mountain Preschool is inspired by Maria Montessori and the Reggio Emilia philosophy, and it is based on Tribes principles. By incorporating these inspiring principles into our program, our goal is to offer a program that nurtures the love of learning, fosters a sense of curiosity, and encourages respect for self and others. We believe that children are independent learners who want to learn different things at different times and in different ways. We try to support their learning styles by offering a broad variety of activities. It is each childʼs choice to participate in activities, or, if they choose to, they can pass. We foster childrenʼs learning through play. We believe that without play, optimal learning, normal social functioning, self-control, and other executive functions may not mature properly. In play, we learn how to deal with lifeʼs wins and losses with grace. The best thing we can give our children is self-control. Preschool childrenʼs ability to resist temptation is a much better predictor of eventual academic success than their IQ scores. We promote self-control not by making children sit still, but by encouraging them to play. One of the major ways that children adapt to their circumstances is through play. We promote learning through simple science experiments where children can explore their own ideas. By incorporating childrenʼs ideas into our curriculum, we show them that their words and ideas are important. By answering open-ended questions, children become independent thinkers, problem solvers, and risk takers. More importantly, they will have a love for learning, an excitement about life, and a self-confidence that will be a foundation for success and growth for the rest of their lives. A very important part of our day is reflections, when understanding usually happens. Reflection is a wise bird who can describe just what she saw or heard while children worked together.
We would welcome all of you to participate in our morning circle and in reflections at the end of the day, to watch your children grow. Our focus is on childrenʼs art, not craft. We believe that creative art is a language for children, and they have to be encouraged to freely express their ideas. We never make models for children, and we never judge what they make. “I paint things as I think them, not as I see them.” -Pablo Picasso
What is Tribes? Tribes is a democratic group process. The outcome of the Tribes process is to develop a positive environment that promotes human growth and learning. In our preschool, we follow the Tribes Agreements:
Appreciation/no put-downs
The right to PASS
The right to participate
The mission of Tribes is to assure the healthy and whole development of every child, so that each has the knowledge, skills, and resiliency to be successful in a rapidly changing world. Resiliency is the capacity to survive, to progress through difficulty, to bounce back, and to move on positively, again and again in life. In a Tribes classroom, everyone is included and everyone is respected. As a preschool in a newly developing area, we have a goal to build more than a preschool: a place where new families moving to the community can meet each other, and maybe become lifelong friends.
What is Montessori? Montessori education is a formula, an aid to life promoting growth to the fullest potential. Following the vision of this extraordinary mathematician, doctor, and educator, we understand that education is a process – it is not a goal. It is a search for truth, not the truth itself. Montessori said: “Education is not what a teacher gives; it is a natural process spontaneously carried out by the individual, and is acquired not by listening to words, but by experiences upon the environment.”
What is the Reggio Emilia Approach?
The Reggio approach suggests new possibilities for developing skills, knowledge, and attitudes in children that could help them become more competent adults and lifelong learners. This approach demonstrates a powerful and unconditional respect for children and their ideas, and encourages all involved to be thinkers, creators, communicators, and collaborators as they become more thoughtful and reflective. The teacher is a facilitator of learning. The role of the teacher as a facilitator is to imagine future possible experiences or activities while trying to stay with the spirit of the childʼs interests, feelings, and areas of pursuit. Ateliers in Reggio Emilia schools promote creative art as a language for children.
What programs dominate in print shops
There are more and more graphics-processing programs, from the most popular, Photoshop, to lesser-known and free ones like Inkscape. Professionals usually use one or two proven programs, which translates into high productivity. In the work of a graphic designer or DTP operator you usually need a program for processing vector and raster graphics. You can also add word processors and the programs provided by printing-equipment manufacturers to the pool of programs needed for such work.
Wikipedia definition of printer
In computing, a printer is a peripheral device which makes a persistent human-readable representation of graphics or text on paper. The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000. The first electronic printer was the EP-101, invented by Japanese company Epson and released in 1968. The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. The 1980s saw daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints. Source: https://en.wikipedia.org/wiki/Printer_(computing)
Printers all-in-one store
The DTP operator is the person responsible, in print shops, publishing houses, and wherever materials are printed in large quantities, for the correct preparation of the files that will be printed. The task is easy and simple in theory; however, one small error by the DTP operator and a run of several thousand copies of a newspaper can be thrown away.
In this work, many things can go wrong: a change of paper stock, a new press with different inks, an incorrect conversion of colors. There are also typos, the bad placement of some element, or just a few wrong details. All of this can end very badly and bring big losses - so it is work under stress and tension, because if something goes wrong, the blame will usually fall on the DTP operator.
The Liberty-nickel series can be considered the ugly duckling of the 5-cent denomination. It replaced the Shield nickel, which was the first 5-cent coin that did not contain silver (5-cent pieces, first struck in the 1790’s in the form of half dimes, were originally silver coins), and is the predecessor to the immensely popular Buffalo-nickel series. The final date of the series, 1913, is considered a clandestine issue, with only five pieces known to exist, and is essentially non-collectable. Still, this series is worth taking a closer look at, as it comes with an interesting history, and some pieces are surprisingly scarce. In this article we will discuss five different issues that can be purchased for less than $100 each yet provide truly collectible coins. Perhaps, after taking a look at some of these coins, you might be tempted to assemble a complete 33-piece set, which includes only one rare date and only a few very scarce pieces. 1883, No “Cents” Introduced in 1883, the Liberty nickel caused quite a stir when it first entered circulation. The original design by Charles E. Barber featured simply a roman numeral V for the denomination, making it unclear whether the piece was valued at 5 cents or 5 dollars. Unscrupulous individuals took this as an opportunity to gold-plate the 5-cent pieces and pass them off as 5 dollars. Some even went so far as to add a reeded edge, further giving the coins the appearance of being gold pieces (nickels traditionally feature a plain edge). This caused the Mint to change the design, and the word CENTS was soon added to the reverse. A total of 5,474,300 pieces of the No “Cents” variety were struck, and many were saved in higher grades, making this an affordable option for a collector on a budget. For $100 you should be able to find a decent piece graded MS-64. Spend a little more: Collecting type coins in Gem Uncirculated (MS-65) condition is very popular, and this is a popular issue at that grade level. 
For an 1883, No “Cents” nickel (as they are often called), expect to pay about $150–$180 in Gem Uncirculated condition, which should not be difficult to find. In fact, this is the most affordable Liberty nickel in Gem Uncirculated condition.
1883 With “Cents”
After the Mint realized that people were passing off the new nickels as coins worth 100 times their actual value, quick action was taken to add the word CENTS to the reverse. To make room, the motto E PLURIBUS UNUM was moved from the lower portion of the reverse to the top. Of the new type, a total of 16,026,000 pieces were struck, but despite the fact that the Mint produced almost three times as many With “Cents” pieces as No “Cents” pieces, the 1883 With “Cents” nickel is the scarcer type in higher grades. $100 will buy you a piece in Choice About Uncirculated condition, but even at that level they are quite scarce. Most entered circulation and wore down to lower grades. Spend a little more: If you want a nice comparison pair of both 1883 varieties in Uncirculated condition, expect to spend about $200 for a piece graded MS-63. If you’d like a pair in Gem Uncirculated condition, you should be able to find an 1883 With “Cents” nickel in MS-65, but it will cost you about $500.
In 1894 the Philadelphia Mint struck a considerably smaller number of nickels than in the years before, creating a scarce date. The issue had a mintage of 5,410,500 pieces, and even though Uncirculated examples are generally available, they tend to be quite expensive. For $100 a collector can only reasonably expect to purchase a Very Fine example of this date, but even at that level I think this is an excellent buy. Most survivors grade Good or Very Good at best; finding a fully original and appealing VF is quite a challenge, yet it won’t break the bank. Spend a little more: Replace the grades in the paragraph above with Extremely Fine or About Uncirculated and it will read the same, except that the values change a bit.
This is one of those coins that is easier to find in Uncirculated condition than it is in original EF or AU. Expect to spend about $200 for an EF and $300 for a decent AU—but don’t think that they are easy to find, as they’re not.
In 1912 nickels were first struck at facilities other than the Philadelphia Mint (half dimes had traditionally been struck at other mints, but nickels had only been produced in Philadelphia up to that point). Both Denver and San Francisco struck Liberty nickels that year, and while both are scarce, the Denver issue is a bit easier to find. With a mintage of 8,474,000, you would expect it to be common, but the truth is that it is very scarce in higher grades. $100 will buy you a decent EF/AU, but they are not easy to find at that level and in my opinion make for a very good buy. Spend a little more: The 1912-D is very popular in Uncirculated condition, and prices reflect this. At minimum, expect to pay about $250 to $300 for a piece graded MS-62 or MS-63. Personally, I would try to find a high-end AU-58 with a strong strike and minimal marks (often with better eye-appeal than pieces graded MS-62 or MS-63) and spend about $200.
Sharply Struck, Uncirculated Type Coin
Nickel is a very hard material, and the Mint has traditionally had a very difficult time fully striking all details of the design. Many pieces, even in Gem Uncirculated condition, come weakly struck. Finding a fully struck Uncirculated Liberty nickel is extremely difficult, and I believe they represent tremendous value. Concentrate your search on coins struck at the Philadelphia Mint between 1904 and 1912, and be prepared to look at many weakly struck coins to find that one fully struck one. The stars should show full detail, especially those on the left side, and all other detail should be sharp. $100 buys an MS-63, but if it’s fully struck it is rarer than a weakly struck MS-65.
Spend a little more: A fully struck Gem Uncirculated Liberty nickel with original luster and minimal marks is a very good type coin. Such examples are few and far between, and I believe that only a fraction of all Liberty nickels certified MS-65 display a strike that’s either full or nearly full. With MS-65 type coins selling for about $300 they make for a good buy—but again, don’t expect to find a coin with a full strike very easily. ❑
Maurizio Di Paolo Emilio, who has a PhD in Physics, is an Italian telecommunications engineer who works mainly as a software developer with a focus on data acquisition systems. Emilio has authored articles about electronic designs, data acquisition systems, power supplies, and photovoltaic systems. In this article, he provides an overview of what is generally available in low-noise amplifiers (LNAs) and some of their applications.
By Maurizio Di Paolo Emilio
An LNA, or preamplifier, is an electronic amplifier used to amplify sometimes very weak signals. To minimize signal power loss, it is usually located close to the signal source (antenna or sensor). An LNA is ideal for many applications including low-temperature measurements, optical detection, and audio engineering. This article presents LNA systems and ICs.
Signal amplifiers are electronic devices that can amplify a relatively small signal from a sensor (e.g., temperature sensors and magnetic-field sensors). The parameters that describe an amplifier’s quality are:
- Gain: The ratio between output and input power or amplitude, usually measured in decibels
- Bandwidth: The range of frequencies in which the amplifier works correctly
- Noise: The noise level introduced in the amplification process
- Slew rate: The maximum rate of voltage change per unit of time
- Overshoot: The tendency of the output to swing beyond its final value before settling down
Feedback amplifiers combine the output and input so that the fed-back signal opposes the original signal (negative feedback, see Figure 1). Feedback in amplifiers provides better performance. In particular, it increases amplification stability, reduces distortion, and increases the amplifier’s bandwidth. A preamplifier amplifies an analog signal, generally in the stage that precedes a higher-power amplifier.
IC LOW-NOISE PREAMPLIFIERS
Op-amps are widely used as AC amplifiers.
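Since the gain figures quoted throughout this article are in decibels, here is a quick, self-contained Python sketch (my own illustration, not from the original article) of the two standard conversions: 20·log10 for amplitude ratios and 10·log10 for power ratios.

```python
import math

def amplitude_gain_db(v_out, v_in):
    """Voltage (amplitude) gain in decibels: 20 * log10(Vout / Vin)."""
    return 20 * math.log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    """Power gain in decibels: 10 * log10(Pout / Pin)."""
    return 10 * math.log10(p_out / p_in)

# A 20 dB preamplifier multiplies voltage amplitude by 10...
print(amplitude_gain_db(10.0, 1.0))   # 20.0
# ...which corresponds to a 100x increase in power (into the same load)
print(power_gain_db(100.0, 1.0))      # 20.0
```

The factor of 20 versus 10 reflects the fact that power is proportional to the square of voltage, so both conversions agree when the input and output impedances are equal.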
Linear Technology’s LT1028 or LT1128 and Analog Devices’ ADA4898 or AD8597 are especially suitable ultra-low-noise amplifiers. The LT1128 is an ultra-low-noise, high-speed op-amp. Its main characteristics are:
- Noise voltage: 0.85 nV/√Hz at 1 kHz
- Bandwidth: 13 MHz
- Slew rate: 5 V/µs
- Offset voltage: 40 µV
Both the Linear Technology and Analog Devices amplifiers have a voltage noise density at 1 kHz of around 1 nV/√Hz and also offer excellent DC precision. Texas Instruments (TI) offers some very low-noise amplifiers. They include the OPA211, which has a 1.1 nV/√Hz noise density at a 3.6 mA supply current from 5 V, and the LME49990, which has very low distortion. Maxim Integrated offers the MAX9632, with noise below 1 nV/√Hz.
The op-amp can be realized with a bipolar junction transistor (BJT), as in the case of the LT1128, or a MOSFET, which works at higher frequencies and with a higher input impedance and a lower energy consumption. The differential structure is used in applications where it is necessary to eliminate the undesired components common to the two inputs. Because of this, low-frequency and DC common-mode signals (e.g., thermal drift) are eliminated at the output. A differential gain can be defined as (Ad = A2 – A1) and a common-mode gain can be defined as (Ac = (A1 + A2)/2). An important parameter is the common-mode rejection ratio (CMRR), which is the ratio of the differential-mode gain to the common-mode gain. This parameter is used to measure the differential amplifier’s performance.
Figure 2 shows a simple preamplifier’s design with 0.8 nV/√Hz at 1 kHz background noise. Its main components are the LT1128 and the Interfet IF3602 junction field-effect transistor (JFET). The IF3602 is a dual N-channel JFET used as the input stage for the op-amp. Figure 3 shows the gain and Figure 4 shows the noise response.
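To give a feel for the LT1128 figures above, here is a small Python sketch (my own illustration, not from the article). It integrates a quoted noise density over a bandwidth, assuming a flat (white) noise spectrum and ignoring the 1/f rise at low frequency, and expresses a CMRR, the ratio of differential-mode to common-mode gain, in dB.

```python
import math

def total_rms_noise(noise_density_v_per_rt_hz, bandwidth_hz):
    """Input-referred RMS noise over a flat band: e_n * sqrt(BW).

    Assumes white noise; real amplifiers also have a 1/f region
    at low frequency that this simple estimate ignores.
    """
    return noise_density_v_per_rt_hz * math.sqrt(bandwidth_hz)

def cmrr_db(a_d, a_c):
    """Common-mode rejection ratio in dB: 20 * log10(Ad / Ac)."""
    return 20 * math.log10(abs(a_d / a_c))

# LT1128 figures from the text: 0.85 nV/sqrt(Hz) over its 13 MHz bandwidth
print(total_rms_noise(0.85e-9, 13e6))  # about 3.1e-6 V, i.e. ~3.1 uV RMS
# Hypothetical gains for illustration: Ad = 1e5, Ac = 1
print(cmrr_db(1e5, 1.0))               # 100 dB
```

The sqrt(BW) scaling is why narrowing the measurement bandwidth (for example with the band-pass filtering offered by the preamplifier systems described below) directly reduces the total noise seen at the output.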
LOW-NOISE PREAMPLIFIER SYSTEMS
The Stanford Research Systems SR560 low-noise voltage preamplifier has a differential front end with 4 nV/√Hz input noise and a 100-MΩ input impedance (see Photo 1a). Input offset nulling is accomplished by a front-panel potentiometer, which is accessible with a small screwdriver. In addition to the signal inputs, a rear-panel TTL blanking input enables you to quickly turn the instrument’s gain on and off (see Photo 1b).
The Picotest J2180A low-noise preamplifier provides a fixed 20-dB gain while converting a 1-MΩ input impedance to a 50-Ω output impedance, with a 0.1-Hz to 100-MHz bandwidth (see Photo 2). The preamplifier is used to improve the sensitivity of oscilloscopes, network analyzers, and spectrum analyzers while reducing the effective noise floor and spurious response.
Signal Recovery’s Model 5113 is among the best low-noise preamplifier systems. Its principal characteristics are:
- Single-ended or differential input modes
- DC to 1-MHz frequency response
- Optional low-pass, band-pass, or high-pass signal-channel filtering
- Sleep mode to eliminate digital noise
- Optically isolated RS-232 control interface
- Battery or line power
The 5113 (see Photo 3 and Figure 5) is used in applications as diverse as radio astronomy, audiometry, test and measurement, process control, and general-purpose signal amplification. It’s also ideally suited to work with a range of lock-in amplifiers.
This article briefly introduced low-noise amplifiers, in particular IC system designs utilized in simple or more complex systems such as the Signal Recovery Model 5113, which is a classic amplifier able to cover different frequency bands with selectable gain. A similar device is the SR560, a high-performance, low-noise preamplifier that is ideal for a wide variety of applications, including low-temperature measurements, optical detection, and audio engineering.
Moreover, the Krohn-Hite custom Models 7000 and 7008 low-noise differential preamplifiers provide high-gain amplification up to 1 MHz, with an AC output derived from a very-low-noise FET instrumentation amplifier.
One common LNA application is the satellite communications system. The ground station’s receiving antenna connects to an LNA, which is needed because the received signal is weak, usually only a little above the background noise. Satellites have limited power, so they use low-power transmitters.
Telecommunications engineer Maurizio Di Paolo Emilio was born in Pescara, Italy. Working mainly as a software developer with a focus on data acquisition systems, he helped design the thermal compensation system (TCS) for the optical system used in the Virgo Experiment (an experiment for detecting gravitational waves). Maurizio currently collaborates with researchers at the University of L’Aquila on X-ray technology. He also develops data acquisition hardware and software for industrial applications and manages technical training courses. To learn more about Maurizio and his expertise, read his essay on “The Future of Data Acquisition Technology.”
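As a rough illustration of why a satellite ground station needs an LNA, the free-space path loss on a downlink is enormous. The Python sketch below uses the textbook formula 20·log10(4πd/λ); the geostationary distance and Ku-band frequency are assumed example values, not figures from the article.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m, frequency_hz):
    """Free-space path loss in dB: 20 * log10(4 * pi * d / wavelength)."""
    wavelength = C / frequency_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

# Assumed example: ~36,000 km geostationary slant range, 12 GHz downlink
print(free_space_path_loss_db(36_000_000, 12e9))  # about 205 dB
```

A path loss around 205 dB means the transmitted power is attenuated by a factor of more than 10^20 before it reaches the antenna, which is why the received signal sits barely above the noise floor and must be boosted by a low-noise amplifier before any further processing.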
World Cities Day (31st of October)
On World Cities Day, COMSA Corporación wants to highlight different projects on which its R&D&I department is currently working. Its commitment to research and applied development allows it to offer its clients technological solutions that anticipate their needs and improve the efficiency of their operations, including those for the construction of the buildings that are to shape the cities of the future, also called smart cities, which not only make efficient use of resources but also consider the responsible use of energy.
In this sense, COMSA Corporación is developing the GEOTECH project, a hybrid air-conditioning system based on geothermal energy, with the objective of installing it in the foundations of buildings. Geothermal energy is a renewable energy that uses the thermal stability of the soil to transfer heat to the building by means of a heat pump, with the differential characteristic of providing both cooling and heating without substantial variations between day and night, environmental conditions, or the time of year. This method will reduce overall energy consumption in the building.
Likewise, to combat air pollution, COMSA Corporación is developing self-cleaning photocatalytic coatings that do not require maintenance. The project is based on the incorporation of particles of titanium dioxide (TiO2) on the glass panels of buildings, which in contact with sunlight favour the degradation of airborne contaminants such as carbon dioxide or hydrocarbons.
In recent years, photovoltaic energy has positioned itself as a clean and competitive energy whose use allows buildings to be more sustainable, so COMSA Corporación is also focusing its research in this area. Currently, the MIDNATTSSOL project is being carried out, which aims to create a system to detect surpluses of photovoltaic energy in real time and thus maximize the use of solar energy in the air conditioning of the installation.
On the other hand, the SOLPROCEL project focuses on the development of semi-transparent organic photovoltaic cells with photonic crystals. These cells, integrated into the windows and glass facades of buildings and facilities, improve energy efficiency thanks to their high level of transparency and energy collection. This technology can be very useful for the construction of sustainable buildings by introducing energy self-consumption, as well as for reducing the bills of existing facilities with an intensive use of energy. This innovation was recently presented at the ‘OPV Workshop: A new technology to market’ held in Barcelona.
from The American Heritage® Dictionary of the English Language, 4th Edition
- n. An abrupt change or step, especially in method, information, or knowledge: “War was going to take a quantum leap; it would never be the same” (Garry Wills).
from Wiktionary, Creative Commons Attribution/Share-Alike License
- n. The discontinuous change of the state of an electron in an atom or molecule from one energy level to another.
- n. An abrupt change.
from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
- n. a sudden large increase or advance
Last week heralded the long-awaited arrival of a package I had ordered, the content of which seems rather unimpressive at first glance. It consists of a small metal cylinder, with an adjustable lens on one end and a screw on the other: If you look into the lens of this device, called a “spinthariscope”, under most circumstances, you’ll almost certainly see nothing at all. With that in mind, you might be surprised to learn that such humble devices were in fact hugely popular in the early 1900s, being carried both as toys by children and as status symbols by the learned elite! The secret of the spinthariscope’s success comes from the fact that it allows the seemingly impossible — the ability to watch individual radioactive decays happen with the naked eye! So what is a spinthariscope? In essence, it is a self-contained radiation source and detector, and has elements as shown below: A small radioactive source (the details of which we will discuss later) emits alpha particles that collide with a zinc sulfide (ZnS) screen. This screen gives off flashes of light (called scintillations) at the places the alpha particles hit. These minute flashes are magnified by a simple lens and can be viewed through the eyepiece. Every flash the viewer sees is the trace of a single atomic nuclear decay. By adjusting the bottom screw, one can effectively increase or decrease the rate at which alpha particles hit the screen, transforming a flood of particles into a trickle, or vice versa. This is a pretty neat effect, and is worth blogging about in itself, but the spinthariscope also has historical significance: it was the first device invented that is able to detect individual radioactive particles, a precursor to the Geiger counter! I’ve talked about the history and physics of radioactivity a number of times on this blog; see, for instance, here and here. A short history of the discovery of radioactivity will be helpful here.
In 1896, radioactivity was serendipitously discovered by Paris researcher Henri Becquerel in the course of his investigations into fluorescence and phosphorescence. Inspired by the recent and sensational discovery of X-rays in 1895 by Wilhelm Röntgen, Becquerel wondered if “glow in the dark” materials might also give off X-rays. He wrapped photographic plates in black paper to protect them from sunlight, then placed a sample of uranium on the plate in the light. Becquerel expected the uranium to fluoresce X-rays, which would then darken the photographic plate, and he seemingly found this to be true. However, when he put the plates and the uranium sample in a dark drawer for a few days, the plates turned out to be developed even darker than before! Evidently some new mysterious emissions from the uranium were darkening the plate, and these emissions were quickly referred to as Becquerel rays. This monumental discovery was overshadowed by the craze over X-rays for a few years, but this changed when Marie Curie began to discover radioactivity in other elements. She discovered radioactivity in thorium, but was scooped by several weeks in publication. However, in 1898 she and her husband Pierre isolated the new radioactive element polonium, and after several years of arduous labor the pair isolated a new and very powerful element, radium. Radium ignited real excitement in the scientific community. It is more than a million times more radioactive than uranium, and seemed at the time to be a limitless energy source. The mystery captured the attention of a number of researchers, among them London chemist and physicist William Crookes (1832-1919). Crookes was already a well-established and successful scientist by the early 1900s. Around 1870 he invented the Crookes tube, an early electrical discharge tube that was instrumental in the discovery of both the electron and X-rays.
Crookes’ specialty, however, was spectroscopy, and by measuring the light emission of atoms he discovered the element thallium in 1861, and helped identify the first isolated sample of helium on Earth in 1895. On a less distinguished note, Crookes was an avid investigator of spiritual phenomena, and his credulous nature so irked the scientific establishment that there was talk of depriving him of his Fellow status in the Royal Society. In 1903 Crookes joined in the enthusiastic investigation of the properties of radium. He published many of his results in the journal The Chemical News, perhaps not so difficult considering he was the editor! The discovery that would lead to the spinthariscope was curiously accidental, just like the discovery of X-rays and radioactivity before. Crookes gave an account of it in an April 3, 1903 article entitled, “The emanations of radium”.* His description is somewhat drier than the later popularized accounts. He begins by explaining his experiments involving radium and its interactions with radiation sensitive materials: A solution of almost pure radium nitrate which had been used for spectrographic work, was evaporated to dryness in a dish, and the crystalline residue examined in a dark room. It was feebly luminous. A screen of platinocyanide of barium brought near the residue glowed with a green light, the intensity varying with the distance separating them. The phosphorescence disappeared as soon as the screen was removed from the influence of the radium. A screen of Sidot’s hexagonal blende (zinc sulphide), said to be useful for detecting polonium reactions, was almost as luminous as the platinocyanide screen in presence of radium, but there was more residual phosphorescence, lasting from a few minutes to half an hour or more according to the strength and duration of the initial excitement. It is to be noted that the only thing observable to Crookes in his initial experiments was a continuous glow.
He continues by discussing some of the physical effects of the radioactivity: The persistence of radio-activity on glass vessels which have contained radium is remarkable. Filters, beakers, and dishes used in the laboratory for operations with radium, after having been washed in the usual way, remain radio-active; a piece of blende screen held inside the beaker or other vessel immediately glowing with the presence of radium. The blende screen is sensitive to mechanical shocks. A tap with the tip of a pen-knife will produce a sudden spark of light, and a scratch with the blade will show itself as an evanescent luminous line. A diamond crystal brought near the radium nitrate glowed with a pale bluish-green light, as it would in a “Radiant Matter” tube under the influence of cathodic bombardment. On removing the diamond from the radium it ceased to glow, but, when laid on the sensitive screen, it produced phosphorescence beneath which lasted some minutes. A “radiant matter tube” is simply another name for a Crookes-type tube, which produces a glowing bluish-green beam of electrons. Crookes notes that the diamond glows with a similar light, but draws no immediate conclusions from this. It was with these diamond manipulations that a slip-up led to the remarkable discovery: During these manipulations the diamond accidentally touched the radium nitrate in the dish, and thus a few imperceptible grains of the radium salt got on to the zinc sulphide screen. The surface was immediately dotted about with brilliant specks of green light, some being a millimetre or more across, although the inducing particles were too small to be detected on the white screen when examined by daylight. In popular descriptions, it is usually related that Crookes, having spilled the radium on the surface, used a magnifying glass to seek out the precious specks of radium. This might seem stingy to modern readers, but radium was one of the most precious substances in the world at that point.
A paper by W.J. Hammer in the January 1903 issue of The Chemical News relates: An extensive dealer in chemicals informed me recently that the treatment of 5000 tons of uranium residues would probably not result in the production of a kilo. of radium. The present market price in this country is 4.50 dollars per grm., or approximately 2000 dollars a pound. And that’s in 1903 dollars! Crookes’ paper continues with a description of the phenomenon under magnification: In a dark room, under a microscope with a 2/3-inch objective, each luminous spot is seen to have a dull centre surrounded by a luminous halo extending for some distance around. The dark centre itself appears to shoot out light at intervals in different directions. Outside the halo the dark surface of the screen scintillates with sparks of light. No two flashes succeed one another on the same spot, but are scattered over the surface, coming and going instantaneously, no movement of translation being seen. Crookes establishes all of the basic properties that would form the spinthariscope in this first paper, even waxing somewhat poetic in his description: A solid piece of radium nitrate is slowly brought near the screen. The general phosphorescence of the screen as visible to the naked eye varies according to the distance of the radium from it. On now examining the surface with the pocket lens, the radium being far off and the screen faintly luminous, the scintillating spots are sparsely scattered over the surface. On bringing the radium nearer the screen the scintillations become more numerous and brighter, until when close together the flashes follow each other so quickly that the surface looks like a turbulent luminous sea. When the scintillating points are few there is no residual phosphorescence to be seen, and the sparks succeeding each other appear like stars on a black sky.
It is worth noting that the significance of the discovery was the visualization of the particles, not the realization that radiation consists of particles. At the end of his paper, Crookes himself speculates on the nature of the radiation: It seems probable that in these phenomena we are actually witnessing the bombardment of the screen by the electrons hurled off by radium with a velocity of the order of that of light; each scintillation rendering visible the impact of an electron on the screen. As we will note, Crookes was not quite correct! He wasted little time in making further observations. Some of these were described only a month later, in the May issue of The Chemical News** in an article titled “Certain properties of the emanations of radium”. Even in this short time, researchers had realized that the radioactivity of radium is much more complicated than initially assumed: The emanations from radium are of three kinds. One set is the same as the cathode stream, now identified with free electrons — atoms of electricity projected into space apart from gross matter — identical with “matter in the fourth or ultra-gaseous state,” Kelvin’s “satellites,” Thomson’s “corpuscles” or “particles”; disembodied ionic charges, retaining individuality and identity. Electrons are deviable in a magnetic field. They are shot from radium with a velocity of about two-thirds that of light, but are gradually obstructed by collisions with air atoms. Another set of emanations from radium are not affected by an ordinarily powerful magnetic field, and are incapable of passing through very thin material obstructions. They have about one thousand times the energy of that radiated by the deflectable emanations. They render air a conductor and act strongly on a photographic plate. These are the positively electrified atoms. Their mass is enormous in comparison with that of the electrons. A third kind of emanation is also produced by radium.
Besides the highly penetrating rays which are deflected by a magnet, there are other very penetrating rays which are not at all affected by magnetism. These always accompany the other emanations, and are Rontgen rays — ether vibrations — produced as secondary phenomena by the sudden arrest of velocity of the electrons by solid matter, producing a series of Stokesian “pulses” or explosive ether waves shot into space. This is an excellent time to talk about what we now know about radioactivity and the nature of radium! The nuclei of all atoms consist of collections of positively-charged protons and electrically-neutral neutrons. Though the lighter elements, such as helium (2 protons, 2 neutrons) and oxygen (8 protons, 8 neutrons), are stable, the heaviest elements are unstable and tend to fracture into pieces. The instability can be attributed to an excess of energy: the total energy of the nucleus is greater than the combined energy of the fragments it could break into, making the nucleus “want” to break up. There are three processes by which a heavy atom can shed this excess energy; these are known as “alpha”, “beta” and “gamma” decay, the mysterious names a reflection of the historical fact that nobody knew what they were! Alpha decay is the release of an alpha particle (aka a helium nucleus: 2 protons, 2 neutrons) from the radioactive nucleus, as is the case in the decay of radium: Radium decays into radon, which in turn decays by alpha decay into polonium: Polonium in turn decays into a radioactive form of lead, and that lead decays via beta decay — the release of a high-velocity electron, turning a neutron into a proton: The third type of radioactive decay, gamma decay, is the release of a high-energy photon; the gamma ray was given its name before it was identified as electromagnetic radiation akin to an X-ray. Because radium follows a complicated decay chain, all three types of radioactive emissions can be seen in a sample of radium to some degree.
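The decay reactions referred to above appear to have been lost from this copy (they were likely images in the original post). Reconstructed from the standard radium decay series, they read:

```latex
% Alpha decays: each step ejects a helium nucleus (2 protons, 2 neutrons)
{}^{226}_{88}\mathrm{Ra} \;\to\; {}^{222}_{86}\mathrm{Rn} + {}^{4}_{2}\mathrm{He}
{}^{222}_{86}\mathrm{Rn} \;\to\; {}^{218}_{84}\mathrm{Po} + {}^{4}_{2}\mathrm{He}
{}^{218}_{84}\mathrm{Po} \;\to\; {}^{214}_{82}\mathrm{Pb} + {}^{4}_{2}\mathrm{He}
% Beta decay: a neutron in lead-214 becomes a proton, emitting an electron
{}^{214}_{82}\mathrm{Pb} \;\to\; {}^{214}_{83}\mathrm{Bi} + e^{-} + \bar{\nu}_e
```

Each alpha step lowers the atomic number by two and the mass number by four; the beta step raises the atomic number by one while leaving the mass number unchanged.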
This deep an understanding of the nucleus and radioactivity would come much later, however; the nucleus itself would not be discovered until 1909, in the famous Geiger-Marsden experiment. Though Crookes could not understand the full details of what he was seeing, he still saw an opportunity in the “turbulent sea” of decays. The end of his May paper describes a new invention — the spinthariscope: A convenient way to show these scintillations is to fit the blende screen at the end of a brass tube with a speck of radium salt in front of it and about a millimetre off, and to have a lens at the other end. Focussing, which must be accurately effected to see the best effects, is done by drawing the lens tube in or out. I propose to call this little instrument the “Spinthariscope,” from the Greek word σπινθήρ, a scintillation. The spinthariscope, and all of Crookes’ radium experiments, debuted in person at a fancy party of the Royal Society on May 15th of 1903. This party, a “conversazione”, was held at the historic Burlington House in London and included hands-on exhibits of all sorts of intriguing phenomena from all of the natural sciences. A detailed description of the party was provided in a 1903 issue of The Pharmaceutical Journal***, and lists, among other wonders, “five specimens of Hydrophidæ, the poisonous sea snakes which swarm round the coasts of India”, “the new coherer, as applied to wireless telegraphy, shown by Sir Oliver Lodge and Dr. Alexander Muirhead”, and “Rowland’s apparatus for the disintegration of micro-organisms by mechanical crushing when frozen by means of liquid air”. The real attention-grabber, however, was Crookes’ work: Another exhibit which attracted more attention than anything else was that by Sir William Crookes, illustrative of the properties of the emanations of radium.
There were autoradiographs, photographs of radium emanations, luminous effects of radium emanations, and an ingenious little instrument which Sir William Crookes calls a spinthariscope, intended as a convenient contrivance to show the scintillations of a piece of radium nitrate. According to other accounts, the spinthariscope was quite a sensation, being sold both as a toy and as a serious scientific instrument! It didn’t take very long for the ’scope to be commercialized, as this 1903 ad that appeared in Nature indicates: In hindsight, radium was a very bad choice of material to be disseminated amongst the general public, much less children. Radium itself is chemically similar to calcium, and can replace the calcium in bones if ingested, leading to cancer. Also, its decay product radon is a gas, which poses significant risks if inhaled. The reality of radium’s danger finally came to a head in the 1920s, when a number of women who worked painting luminous radium watch dials fell seriously ill and died from radium poisoning. The story of the “radium girls” was recently told in a three-part post by Deborah Blum. Nowadays, spinthariscopes contain thorium or americium, elements which present negligible risk. With all this in mind, I wasn’t sure what to expect from my own spinthariscope. The scintillations from it are so faint that one must sit in the dark for 20 minutes to allow one’s eyes to become sensitive enough to see them. The flyer that came with the device suggested that some people are born without enough light sensitivity and can never see the flashes. This statement made me wonder whether I’d been the victim of an elaborate con: “Oh, you don’t see anything? Well, you’re one of the unlucky ones.” As I sat alone in the powder room, musing over whether there was actually radioactive material in my spinthariscope, I began to notice a dim glow emanating from the eyepiece. I gave it a few more minutes, and looked inside.
There it was, Crookes’ “luminous turbulent sea”. I spent quite a few minutes sitting there, looking at the flashes of light wash over the screen like rain on a window, somewhat in awe of the depth of the history and physics that the simple spinthariscope represents. * W. Crookes, “The emanations of radium,” Chemical News 87 (1903), 157-158. ** W. Crookes, “Certain properties of the emanations of radium,” Chemical News 87 (1903), 221. *** “The Royal Society’s Conversazione,” The Pharmaceutical Journal 70 (1903), 714. **** You can get your own spinthariscope from United Nuclear. I got the “Super Spinthariscope”, which has an adjustable eyepiece, though it likely isn’t essential. It’s really hard to focus on an ultra-dim light source that is rapidly flickering anyway!
Griot: Fran Kaplan, EdD

What is a “Griot”?

“Griot” (pronounced GREE-oh) is the French name given to the oral historians of West Africa. Traditionally, griots travel from city to city and village to village as living newspapers, carrying in their heads an incredible store of local history and current events. They pass on their knowledge of history by singing traditional songs, which they must recite accurately, without errors or deviations. Like rappers, they also make up songs as they go to share current events, gossip, political commentary and satire. Being a griot is often an inherited position, and griots generally marry other griots. There are still many practicing griots in West Africa today. Most often they accompany themselves on the kora, a 21-string harp made from half of a large gourd covered with animal skin. The strings, made of gut or fishing line, are plucked with the fingers. Griots may also play other traditional and modern instruments and are often very accomplished musicians.

Griots at the Museum

At ABHM we call the curators of our exhibits “griots,” because they tell our history. In the former bricks-and-mortar museum, they were the docents who showed groups around the exhibits and helped them discuss and make sense of what they saw and felt. In our virtual museum, griots are scholars who research and write the exhibits and dialogue with visitors through the comments section. Griots in the virtual museum commit to responding to visitors’ comments and questions in the Comments section at the foot of each new exhibit for three weeks. Be sure to visit new exhibits as they are posted, so you can dialogue with scholar-griots!

Griots in Africa Today

Read about how griots are applying their conflict-resolution skills in Africa right now, and why this traditional peacemaker’s role is threatened, here.

See a Video by a Young Person about the Role of the Griot

Dr.
Fran Kaplan, independent scholar, filmmaker, and social activist, is Coordinator of the America’s Black Holocaust Virtual Museum. She co-authored an award-winning screenplay, Fruit of the Tree, based on the life of James Cameron, and is currently working on a scholarly edition of Cameron’s memoir, A Time of Terror.
Music is very fascinating for many people, so it's no surprise that scientists have developed a field of study that investigates how music really works. The field studies things like rhythm, harmony, structure, form, and even the texture of different kinds of music. Music theorists not only study how music works but also how people perceive it. Music theory can be defined as the study of how music works and how it affects the world.

Elements of Music

Music is made up of several elements. Melody is a series of notes played in succession. Pitch is how high or low a tone is perceived to be. Scales and modes are arrangements of musical notes (Western music uses 12 pitch classes). Rhythm is the arrangement of sounds in time, and harmony is how notes are arranged around a melody. Consonance is a harmony whose tones flow well together, while dissonance is a harmony whose tones create complex interactions. Dynamics refers to the loudness or softness of a note, and texture is the overall sound of a piece of music. The form of music represents the syntax of a song.

Musical Cognition and Perception

Musical cognition is an interdisciplinary approach to understanding music and musical behavior. People who study musical cognition look into things like rhythm, how music affects emotions, and even a listener's reaction to music. What happens when someone is listening to an acoustic guitar being played? Musical perception, meanwhile, deals with how people perceive music. How do they react to it? Do they understand a melody that is being played?

Theories of Harmonization

Theories of harmonization include what's known as "part-writing," the process of writing songs in parts instead of as a whole. Musical set theory lets people categorize certain musical objects and then describe the relationships between them. There are many different theories, and some are unique to individual theorists.
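Since the article mentions the twelve notes of Western music, here is a minimal sketch of how equal temperament arranges them. The constant names and helper function are illustrative choices, not anything from the article; A4 = 440 Hz is the common modern tuning reference.

```python
# The twelve pitch classes of Western equal temperament: each semitone is a
# frequency ratio of 2 ** (1 / 12), so twelve semitones double the frequency.
A4 = 440.0
SEMITONE_RATIO = 2 ** (1 / 12)

PITCH_CLASSES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def frequency(semitones_above_a4: int) -> float:
    """Frequency in Hz of the note a given number of semitones above A4."""
    return A4 * SEMITONE_RATIO ** semitones_above_a4

# The twelve pitch classes starting from A4, rounded to two decimals.
chromatic = {name: round(frequency(i), 2) for i, name in enumerate(PITCH_CLASSES)}
```

One octave up (12 semitones) lands at exactly 880 Hz, which is why notes an octave apart sound consonant: their frequencies are in a simple 2:1 ratio.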
Semiotics is the study of sign processes, and music semiotics is the study of signs as they apply to music. That means people study the connotations of sounds and the meanings that typically attach to those types of sounds. For instance, louder notes are often associated with heightened emotion. Music semiotics also deals with the written signs for notes, the symbols set down on paper known as notation. Notation lets musicians know which notes to play. Music theory also extends to the relationship between music and math, exploring how both follow similar principles and patterns. Many people listen to music without knowing about the components of a piece of music. Understanding a little more about music theory will enable a person to appreciate music a little more fully.
Remember 1981? Yes, it’s a bit fuzzy at this point, but that was the year that manned spaceflight became normal. On the 12th of April, the Space Shuttle Columbia rocketed into orbit. Over the next 30 years, 135 launches were made by the fleet. For the generations who grew up or were born during this era, astronauts traveling to and living in space (on board the International Space Station) became commonplace. This normalcy hid the difficulty and danger that were behind the curtain. Rowland White‘s Into the Black recounts the epic effort to design and launch the shuttle. It took nearly as long and was every bit as difficult as the Apollo program. In some ways it was more so: Apollo components had to work once; the Shuttles had to survive the rigors of launch and space over and over. White recounts how the shuttle program was the final project of the Apollo veterans. It also absorbed a canceled military space program, complete with its own astronauts and launch sites, into the civilian effort. Technologies such as reusable rocket engines and reentry protection were beyond the state of the art. The drama that unfolded was every bit as exciting as what was told in From Earth to the Moon and Apollo 13. Danger was never eliminated, but the later losses of the Challenger and Columbia were not, ironically, caused by failures of the orbiters themselves. None of the orbiters ever failed, repeatedly surviving launch stresses and harsh environments that those of us earthbound cannot imagine. While the shuttles never flew as frequently as envisioned, nor brought the costs of launch down, history will look back on them as making possible what comes next. We are already seeing the turnover of spaceflight to private companies. The International Space Station that the shuttles enabled is an orbital spaceport on the verge of becoming the staging point for new ventures.
The government and politics often got in their own way in opening the frontier, but as Into the Black details, the astronauts of the Space Shuttles swung that door wide open.
ESA’s proposed Hera mission will already visit two asteroids: the Didymos binary pair. The Hera team hopes to boost that number by performing a flyby of another asteroid during the mission’s three-year flight. The opportunity arises because Hera will be flying out to match Didymos’ 770-day orbit, which circles from less than 10 million km from Earth to out beyond Mars, at more than double Earth’s distance from the Sun. In the process Hera will pass both multiple near-Earth asteroids and the inner edge of the main Asteroid Belt. Initial studies at ESA’s European Space Operations Centre have turned up dozens of candidate asteroids across different mission scenarios. “Ideally we would like a flyby of another binary asteroid, to enable comparisons with Didymos,” explains ESA’s Hera project scientist, Michael Kuppers. “We would choose something of a different taxonomic type from the S-type asteroids like Didymos. We would also prefer a larger object: its greater size would allow us to resolve it meaningfully from further away.” Take as an example one body researchers would like to see: the 2121 Sevastopol binary pair in the inner belt has an 8.6 km diameter main body with a 3.5 km diameter moon. This system is a member of the poorly understood ‘Flora’ family of main belt stony asteroids, produced by a collision event a relatively recent 100 million years ago – theorised to be associated with the Chicxulub impact that killed the dinosaurs. The next step would be to create a shortlist of targets, which could then be the subject of ground-based observations to determine more about their properties and sharpen knowledge of their orbits before Hera’s launch in late 2023. ESA’s Rosetta comet-chaser performed two asteroid flybys as it passed through the main belt during its decade-long flight to comet 67P/Churyumov-Gerasimenko, passing the 5-km diameter diamond-shaped 2867 Steins and the mammoth 120-km diameter 21 Lutetia. 
“To make flybys happen, we have to know where our trajectory will pass relatively close to asteroids if we do nothing,” notes Michael Khan, heading Mission Analysis at ESA’s Flight Dynamics division. “Then we tweak the trajectory to make a specific difference to that distance, bringing us much closer. “With Rosetta we had a lot of capability, because it was a large spacecraft with extra fuel in the tanks to get the mission back on track in case something went wrong. Plus we were performing lots of gravity-assist flybys around Earth and Mars, and massaging those flybys slightly gave us a lot of freedom to manoeuvre. “Hera is not Rosetta, however: this will be a smaller mission with a shorter cruise phase and lower performance limits. We will still try, but the constraints are such that we won’t know for certain which asteroids we could target until after Hera’s launch. “It will come down to what day within Hera’s launch window that we take off, and also the precision of that take-off – it is possible that any extra fuel earmarked for asteroid flybys might be needed to fine-tune our trajectory to Didymos. But any flyby would be an excellent opportunity to boost Hera’s science return even further.” To compare the two missions, Rosetta was lorry-sized, while Hera will be the scale of a desk. But any asteroid flyby would benefit its end mission as well as offering plentiful bonus science. Michael Kuppers was also part of the Rosetta team: “These hours-long asteroid flybys were quite dramatic events, and our opportunity to try out our scientific instruments and obtain scientific results from these unknown objects, preparing for our main goal of 67P/Churyumov-Gerasimenko.” Hera’s lead scientist Patrick Michel, CNRS Director of Research of France’s Cote d’Azur Observatory, hopes Hera would indeed achieve a flyby: “Any object would be valuable.
Each time we’ve encountered a new asteroid we’ve discovered something unexpected.” Hera, Europe’s contribution to an international planetary defence experiment, is currently under study to be presented for approval by ESA’s Space19+ Council meeting of European space ministers.
At 25 °C, bromine is a dense, freely flowing, corrosive, dark red liquid that is easily vaporized into a brownish-red vapor. It was discovered in 1826, and it occurs mainly as the bromide ion, Br-, in salts such as NaBr, KBr, MgBr2, and CaBr2 in sea water, underground salt brines, and salt beds. Bromine is prepared by bubbling chlorine through seawater, which contains dissolved NaBr; the more reactive chlorine displaces the bromide: Cl2 + 2NaBr → Br2 + 2NaCl. Bromine vapor is blown out of the reaction vessel by a current of air and collected. It takes about 2500 L of seawater to produce 160 g of bromine (roughly 64 mg per litre). Bromine is used as a disinfectant. Uses of bromine-containing compounds include:
Pre-Kindergarten

In our preschool classroom, students explore themes in a daily circle and play outside for at least 3 hours a day - sometimes more! Choosing between three playgrounds and learning centers in our dynamic classroom, students learn social and emotional literacy through play and heart-centered structure.

Kindergarten

The Kindergarten program at Odyssey School honors each child’s inherent wholeness, with curriculum developed from Joseph Campbell's mythic work on the Hero's Journey. Students learn what it means to be a hero, how to take on personal heroic challenges, and how to problem-solve through mediation and non-violent communication.

1st–2nd Grades

First and second grade students learn about geography, history, and archaeology in their explorations of continents and cultures, offering a window to the world. This research is used to learn information as well as to spark imagination and curiosity about our global community. Emphasis is placed on cultivating a love of reading and exploration in first grade, then on deepening the development of independent learners: self-sufficient in project-based learning and scientific inquiry, and developing the empathy, communication, and self-awareness to become real-world explorers.

3rd–4th Grades

Our curriculum is designed to help students use critical and creative thinking skills to make connections across disciplines. Students are guided toward greater independence as they deepen their understanding of core subject areas, using tools like Goals Notebook, Writer's Workshop, and Ted(talk) and Tea time to encounter new ideas and find inspiration in the inventions that have shaped the design of our modern world. Through Class Meetings and a more developed sense of the Design Thinking Process, students begin to shape their leadership skills within their classroom learning community.
5th–8th Grades

The Intermediate program at Odyssey guides 5th-6th grade students into developing more sophisticated inquiry as they sharpen their academic skills and begin to specialize their engagement with academic content. As leaders, our 7th-8th grade students honor the complex journey of individuation, self-discovery, and finding one’s voice for justice in the world. Together, these students build community, center in themselves, harness language, and learn to hold what is sacred, to play, to design, and to think critically, creatively, and contextually.

Odyssey High School

Odyssey High School is a small, hardworking community in which students are inspired by life and are prepared to assume their authentic place within it. Our curriculum integrates real-world applications with lectures, academic research, and creative expression. Our program exceeds NC university requirements and incorporates daily centering, mentoring, and Mysteries Council. Our graduates have gone on to college at Guilford, Pace, SCAD, UNC-Asheville, and Warren Wilson College, among others.

Research skills are fostered at every grade level, enabling students at Odyssey Creative School to approach their learning with inquiry, perspective, and passion. Teachers guide students’ natural curiosity to drive thematic investigation in their classrooms, introducing students to 21st-century research skills and preparing them to become informed and engaged citizens. In elementary classrooms, students learn in integrated thematic units, exploring a region of the world or a concept through the subject areas as they relate to the content. For example, a class study of birds might cover the physics of flight, the geography of flight patterns, the mathematics of population, and the mythology of birds in literature. Beginning in kindergarten and continuing every year through high school graduation, students conduct a long-term Independent Research Project at Odyssey. These are not science projects!
They are sustained, well-developed explorations of a passion: students explore ideas, investigate their topic, and report and present on their experience, developing a range of professional skills from interviewing to research methods to public speaking. Design Thinking is at the forefront of our curriculum. In kindergarten through fifth grade, the design-thinking process is woven into daily lessons, combining scientific and creative processes to inspire students to create the change they want to see in the world. Reflection is an essential component in the development of metacognition and mastery, in whatever field and on whatever path our students walk into adulthood. Odyssey’s Integral Education creates space for the skill of honest and deep reflection on the learning process at every step. In social-emotional learning, students reflect independently, within small groups, and as whole classrooms to develop emotional literacy, active listening, compassionate communication, and conflict-resolution skills. Parents are offered four conferences throughout the year to meet and reflect on student learning in each of the strands: mental, emotional, aesthetic, moral, and physical. The first conference is led by parents, who reflect on and share their family’s goals, strategies, and challenges. The second is led by teachers, who reflect on their observations of each child’s progress in meeting goals. The third conference is led by students, who reflect on learning artifacts they’ve collected, their growth, and their goals for the end of the year. The fourth is a chance for parents, students, and teachers to gather and reflect on achievements and challenges to prepare for the coming school year.
Asthma affects an estimated 7 million children and causes significant health care and disease burden. The most recent iteration of the National Heart, Lung and Blood Institute asthma guidelines, the Expert Panel Report 3, emphasizes the assessment and monitoring of asthma control in the management of asthma. Asthma control refers to the degree to which the manifestations of asthma are minimized by therapeutic interventions and the goals of therapy are met. Although assessment of asthma severity is used to guide initiation of therapy, monitoring of asthma control helps determine whether therapy should be maintained or adjusted. The nuances of estimation of asthma control include understanding concepts of current impairment and future risk and incorporating their measurement into clinical practice. Impairment is assessed on the basis of frequency and intensity of symptoms, variations in lung function, and limitations of daily activities. “Risk” refers to the likelihood of exacerbations, progressive loss of lung function, or adverse effects from medications. Currently available ambulatory tools to measure asthma control range from subjective measures, such as patient-reported composite asthma control score instruments, to objective measures of lung function, airway hyperreactivity, and biomarkers. Because asthma control exhibits short- and long-term variability, health care providers need to be vigilant regarding the fluctuations in the factors that can create discordance between subjective and objective assessment of asthma control. Familiarity with the properties, application, and relative value of these measures will enable health care providers to choose the optimal set of measures that will adhere to national standards of care and ensure delivery of high-quality care customized to their patients.
- ACT — Asthma Control Test
- ACQ — Asthma Control Questionnaire
- AHR — airway hyperresponsiveness
- ATAQ — Asthma Therapy Assessment Questionnaire
- ATS/ERS — American Thoracic Society/European Respiratory Society
- C-ACT — Childhood Asthma Control Test
- EPR3 — Expert Panel Report 3
- FENO — fractional exhaled nitric oxide
- FEV1 — forced expiratory volume in 1 second
- FEF25–75 — forced expiratory flow between 25% and 75% of vital capacity
- FEV1/FVC ratio — ratio of forced expiratory volume in 1 second to forced vital capacity
- FVC — forced vital capacity
- PEF — peak expiratory flow
- TRACK — Test for Respiratory and Asthma Control in Kids

Guidelines from the National Heart, Lung and Blood Institute for the diagnosis and management of asthma, and the Global Initiative for Asthma, revolve around the yardstick of evaluation of the severity of asthma and attainment of control to guide initiation and adjustment of therapy.1,2 Numerous studies have confirmed the inadequacy of asthma control in the United States.3,4 The domains of severity and control can be assessed in terms of impairment (frequency and intensity of symptoms, variations in lung function, and limitations of daily activities) and future risk (likelihood of exacerbations, progressive loss of lung function, or adverse effects from medications). Asthma can be considered to be well controlled if symptoms are present twice a week or less; rescue bronchodilator medication is used twice a week or less; there is no nocturnal or early awakening; there are no limitations of work, school, or exercise; and the peak expiratory flow (PEF)/forced expiratory volume in 1 second (FEV1) is normal or at the personal best.
Asthma control can be further classified as well controlled, not well controlled, and very poorly controlled as elegantly laid out in the National Heart, Lung and Blood Institute Expert Panel Report 3 (EPR3).1 Asthma can be considered not well controlled if symptoms are present more than 2 days a week or multiple times on 2 or fewer days per week; rescue bronchodilator medication is used more than 2 days per week; nighttime awakenings are 2 times a month or more; there is some limitation of work, school, or exercise; and the PEF/FEV1 is 60% to 80% of personal best/predicted, respectively. Asthma is classified as very poorly controlled if symptoms are present throughout the day; rescue bronchodilator medication is used several times per day; nighttime awakenings are more than 1 time a week; there is extreme limitation of work, school, or exercise; and the PEF/FEV1 is less than 60% of personal best/predicted, respectively. The keystone of asthma management is the achievement and maintenance of optimal asthma control. However, to date, there is no universally recognized gold standard measure of asthma control that can accurately capture both patient-reported domains of impairment and risk and objective measures of lung function. The tools available in a clinical practice setting can be classified as subjective (“patient reported”) and objective (“physiologic and inflammatory measures”). A judicious combination of measures from each category may be needed to optimally assess asthma control. Subjective measures of asthma control include (1) detailed history taking, (2) use of composite asthma control scores, and (3) quality-of-life measures (used mainly in research settings). Assessment of asthma control in the health care provider’s office starts with the history. 
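Before turning to the history, the EPR3 lung-function cutoffs above can be expressed as a simple classifier. This is a minimal sketch (Python and the function name are illustrative assumptions): it models only the PEF/FEV1 criterion, treating values above 80% of personal best/predicted as consistent with well-controlled asthma, whereas a real assessment combines symptoms, rescue-medication use, nighttime awakenings, and activity limitation as well.

```python
def classify_lung_function(percent_of_best: float) -> str:
    """Classify the PEF/FEV1 component of asthma control using the
    EPR3 cutoffs quoted above: <60% very poorly controlled,
    60%-80% not well controlled, above 80% consistent with
    well-controlled asthma (the lung-function criterion only)."""
    if percent_of_best < 60:
        return "very poorly controlled"
    elif percent_of_best <= 80:
        return "not well controlled"
    else:
        return "well controlled"

print(classify_lung_function(85))  # well controlled
print(classify_lung_function(70))  # not well controlled
print(classify_lung_function(55))  # very poorly controlled
```

In practice, the most impaired of the individual criteria determines the overall control category, so this single-criterion sketch gives only a floor on the classification.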
Detailed information should be sought on patient-centered outcomes (such as asthma exacerbations in the past year and the limitations asthma imposes on the patient’s daily activities including sports and play), sleep disturbance, medication use (both daily controller and reliever medication), adherence to therapy, and comorbidities/factors that may complicate care.5

Composite Asthma Scores

Patient-reported composite asthma control score instruments are attempts to capture the multidimensional nature of asthma control in a single numerical value. This enables the degree of asthma control to be compared across encounters. More than 17 composite instruments, each with at least 1 published validated study, are available.6 These instruments have comparable content and have been designed to measure asthma disease activity over a period of 1 to 4 weeks. Notably, none of them have been validated to assess an acute exacerbation (Table 1). Therefore, from a pediatric emergency medicine perspective, caution should be taken when using composite asthma score instruments during an acute exacerbation, as is typically encountered in the emergency department setting. The commonly used validated tools are the Asthma Control Test (ACT),7 the Childhood Asthma Control Test (C-ACT),8 and the Asthma Control Questionnaire (ACQ).9 The ACT contains 5 items, with a recall window of 4 weeks. The C-ACT is for use in children 4 through 11 years of age and consists of 4 pictorial items and 3 verbal items that are scored by the children and parents, respectively. It has been reported that children tend to assess their asthma control to be significantly lower than their parents do. The ACQ contains 6 items with a recall window of 1 week, supplemented by percentage of predicted FEV1 measurement.
The Test for Respiratory and Asthma Control in Kids (TRACK)10 is a 5-question caregiver-completed questionnaire that determines respiratory control in children 0 to 5 years of age with symptoms consistent with asthma. Another less commonly used instrument is the Asthma Therapy Assessment Questionnaire (ATAQ), a 20-item parent-completed questionnaire exploring several domains, with 4 questions relating to symptom control and primarily used in research.11,12 Individual instruments contain 3 to 10 questions, and scoring varies by instrument (Table 1). Four instruments have established cutoff values for uncontrolled versus controlled asthma (ACQ, ACT, C-ACT, and TRACK), and 2 have cutoffs for identifying poorly controlled asthma (ACT and ATAQ). Because these cutoffs have been defined at a population level, they may not be accurate for an individual patient. Tracking the numerical and categorical responses over time for each individual patient may prove to be more helpful than looking at cutoff values alone. For instance, if a patient reports frequent nocturnal awakenings, following the response to that particular question may help individualize attainment of control. The minimal clinically important differences or temporal differences in scores that indicate clinical significance have been determined for a few of the instruments (ACQ, ACT, C-ACT, and TRACK6,13; Table 1). Three of the instruments (ACQ, ACT, and TRACK) have been validated in Spanish-speaking groups.14–16 The ACQ and ACT have been validated for use as self-administered instruments in person, at home, by telephone, and by Internet tracking.6,17 Poor asthma control, as measured by the commonly used composite scores, is associated with reduced lung function and elevated exhaled nitric oxide fraction5,18 (discussed later in the article). 
Studies have shown that changes in these composite scores reflect changes in the overall clinical assessment of asthma control by physicians and the need to step up therapy.19 However, a recent study showed that the degree of asthma control, as assessed by these tools, changes over time and shows variable concordance with the risk of exacerbations.12 Despite being fairly well validated, these scores share drawbacks that limit their usefulness in clinical practice.6 Although the short recall window facilitates reliable recollection of recent asthma events, it fails to represent the fluctuations in control. Children may be excellently controlled during one season and then have poor control during another. In addition, asthma exacerbations can occur in children with good short-term asthma control.20 Exacerbations, an important component of the risk domain of asthma control, are not covered in the ACT, C-ACT, and ACQ but are assessed in the TRACK and the Composite Asthma Severity Index.21,22

Quality of Life

A range of pediatric asthma quality-of-life instruments have been developed, encompassing the impact of asthma on children’s or their parents’ lives.23 The instruments have been validated but are time-intensive to fill out and are therefore not routinely used in clinical practice. Currently available objective measures of asthma control include (1) assessment of lung function, (2) evaluation of airway hyperresponsiveness, and (3) biomarkers.

Assessment of Lung Function

The PEF is defined as the highest instantaneous expiratory flow achieved during a maximal forced expiratory maneuver starting at total lung capacity.24 PEF variability is the degree to which the PEF varies among multiple measurements performed over time (Table 2). The management of acute exacerbations has traditionally been guided by PEF measurements. However, the correlation between PEF and FEV1 worsens in asthmatic patients with airflow limitation.
Also, although reference to normal PEF values is important, the “personal best” value, and the trend of change in individual patients, is of greater value in managing their asthma.24 The advantages of PEF are that it is easier to perform than a spirometric maneuver and it is measurable with a relatively small and inexpensive instrument. Thus, PEF may be suitable for individual testing at home, at school, and in patients who are poor perceivers of their degree of airway obstruction. It may help prevent delayed treatment in underperceivers and excessive use of services in overperceivers. Many concerns regarding PEF have been described, the primary one being that the results are highly variable even when the maneuver is performed well, limiting its utility in the diagnosis and management of asthma. Parents and children should be appropriately trained in its use, but there is no gauge of effort, and it gives no information regarding the site of airflow obstruction. It cannot distinguish obstructive from restrictive ventilatory impairment. PEF meters from different manufacturers may show different results, and the “personal best” measurements may change with growth and degree of asthma control. Adherence to PEF monitoring is a challenge25 and is often the reason it is not widely used in clinical practice.
Overall, PEF monitoring alone has not been shown to be more effective than symptom monitoring in influencing asthma outcomes26 and is no longer recommended.1 Measurement of spirometric indices of lung function, such as the FEV1, forced vital capacity (FVC), and FEV1/FVC ratio, is an integral part of the assessment of asthma severity, control, and response to treatment.1,2 These indices have been shown to be associated with the risk of asthma attacks in children.27 Children with chronic airway obstruction have been reported to be less likely to perceive dyspnea than those with acute obstruction.28 The EPR3, therefore, recommends performing office-based spirometry every 1 to 2 years, and more frequently if clinically indicated, in children 5 years or older with asthma.1 However, only 20% to 40% of primary care providers use lung function measurements in asymptomatic asthmatic patients, and up to 59% of pediatricians never perform lung function tests.29 Normal values for spirometry are well established and are based on height, age, sex, and race/ethnicity of the healthy US population. Spirometric measures are highly reproducible within testing sessions in approximately 75% of children older than 5 to 6 years of age. Guidance on performing spirometry in an office setting and coding for asthma visits has been described.30 The forced expiratory maneuver may be displayed as a flow-volume loop. Guidelines regarding interpretation of the primary measures (FEV1, FVC, and the FEV1/FVC ratio) are well outlined in the EPR3.1,31 Of note, most automatic interpretations of the spirometry report fail to comment on the FEV1/FVC ratio, an important parameter that, in children, is normally 85% predicted or greater.1 Forced expiratory flow between 25% and 75% of vital capacity (FEF25–75) may reflect obstructive changes that occur in the small airways of children with asthma.
However, FEF25–75 is considered to be of secondary importance because it is not specific and is highly variable (effort dependent). Reduced spirometric measures are associated with symptom severity, reduced quality of life, and poor asthma outcomes.24 However, individual patients, particularly children, may have misleadingly normal spirometry results, despite frequent or severe symptoms. An analysis of 2728 children between 4 and 18 years of age attending a tertiary care facility showed that the majority of asthmatic children had FEV1 values within normal ranges.32 Spirometry, by itself, is not useful in establishing the diagnosis of asthma because airflow limitation may be mild or absent, particularly in children. In other words, if the spirometry result is normal, it does not rule out asthma. Variability of airflow obstruction over time and the response to treatment, when clinically relevant, can aid in the diagnosis and assessment of asthma control. Although there are organizations that are attempting to integrate spirometry results into the electronic health record with varying degrees of success, the most commonly used approach at this time is to scan the printed spirometry result into the electronic health record.

Prebronchodilator and Postbronchodilator Spirometry (Bronchodilator Reversibility)

Bronchodilator reversibility testing helps determine the presence and magnitude of reversible airflow limitation.24 Baseline spirometry is performed and repeated after administration of bronchodilator test agents (eg, 15 minutes after 4 inhalations of albuterol). Change in FEV1 is the most common parameter followed because the value of reversibility in other measurements is less established (eg, FEV1/FVC or FEF25–75).
The most widely used definition of “significant” bronchodilator response is that of the American Thoracic Society/European Respiratory Society (ATS/ERS) guidelines for interpretation of spirometry and consists of an improvement in FEV1 greater than 12% and 200 mL.33 Other parameters that have been used in children include a 9% to 10% increase in percent predicted FEV1.24 Bronchodilator reversibility testing, although not specific, is useful for confirming the diagnosis of asthma. Increased bronchodilator reversibility correlates with increased asthma severity. Bronchodilator reversibility is diminished in patients with well-controlled asthma as well as those with narrowing or remodeling of the airways. Annual assessment of prebronchodilator and postbronchodilator FEV1 might help identify children at risk for developing progressive decline in airflow.34

Recent Advances in Monitoring PEF and Spirometry

Advances in home-based airflow monitoring include the use of electronic, handheld devices with easily downloadable recordings of multiple PEF or FEV1 point measures with software that facilitates easy use and interpretation.35 The availability of these instruments for routine clinical use is limited at this time. Impulse oscillometry assesses airflow resistance and bronchodilator response in younger children. Measurement of airway resistance is a direct indicator of airway caliber, with increased resistance indicating narrowing of airways. It is used largely as a research tool and is only available in a few centers.24

Evaluation of Airway Hyperresponsiveness

A major characteristic of asthma is the variability in bronchial tone in response to a variety of stimuli. Airway hyperresponsiveness (AHR) may be assessed by bronchial provocation tests. Bronchial provocation tests may be performed with agents such as methacholine or stimuli such as physical exercise.24,28,36 A positive test result for AHR is indicated by a 20% reduction in FEV1 after inhalation of a methacholine dose of 8 mg/mL or less.
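The two spirometric decision rules quoted above, the ATS/ERS bronchodilator-response criterion and the methacholine AHR criterion, can be sketched as simple predicates. Python and the function names are illustrative assumptions, not part of any guideline, and the strict-versus-inclusive inequality at the exact thresholds follows the wording quoted in the text.

```python
def significant_bronchodilator_response(pre_fev1_l: float, post_fev1_l: float) -> bool:
    """ATS/ERS criterion quoted above: FEV1 improvement greater than
    12% of baseline AND greater than 200 mL (volumes in liters)."""
    delta_l = post_fev1_l - pre_fev1_l
    return delta_l > 0.200 and delta_l / pre_fev1_l > 0.12

def positive_methacholine_test(baseline_fev1_l: float,
                               post_challenge_fev1_l: float,
                               dose_mg_ml: float) -> bool:
    """Positive AHR result quoted above: a 20% reduction in FEV1 at a
    methacholine dose of 8 mg/mL or less."""
    fall_fraction = (baseline_fev1_l - post_challenge_fev1_l) / baseline_fev1_l
    return dose_mg_ml <= 8 and fall_fraction >= 0.20

print(significant_bronchodilator_response(2.00, 2.30))  # True: +300 mL, +15%
print(significant_bronchodilator_response(3.00, 3.25))  # False: +250 mL but only +8.3%
print(positive_methacholine_test(2.50, 1.90, 4.0))      # True: 24% fall at 4 mg/mL
```

Note that both conditions of the bronchodilator criterion must hold: a large percentage change on a small baseline FEV1 (common in young children) can fail the 200 mL requirement, and vice versa.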
A negative test result suggests a diagnosis other than asthma. A reduction in FEV1 of at least 10% during exercise testing is taken as a sign of exercise-induced bronchoconstriction. These tests take approximately 2 hours and require trained personnel to perform them. In general, evidence does not support the routine assessment of AHR in the clinical management of asthma control.28

Biomarkers

Apart from exhaled nitric oxide measurements, the role and usefulness of noninvasive biomarkers in routine clinical practice for monitoring inflammation in children with asthma are undefined. Sputum eosinophilia, exhaled breath condensates, and urinary leukotrienes are used as tools primarily in research studies.28,37

Exhaled Nitric Oxide

The fractional concentration of nitric oxide in exhaled air (FENO) is a quantitative measure of airway nitric oxide, an endogenously produced gaseous mediator that is an indirect marker of airway inflammation. The joint ATS/ERS guideline for the measurement of FENO is the current standard.38,39 The testing is noninvasive, reproducible, easy to perform in patients (including children), feasible to measure in ambulatory clinical settings, and poses no risk to patients.40,41 FENO is generally accepted as a marker of eosinophilic airway inflammation. Individuals with asthma have been reported to have elevated levels of FENO, but because FENO is also related to atopy, elevated levels may be seen in atopic individuals without asthma. Although FENO levels overlap among healthy, atopic, and asthmatic cohorts, in general, the upper value of normal is 25 ppb.
It has been suggested that a clinically important decrease of FENO is a change of 20% for values greater than 50 ppb or a change of 10 ppb for values less than 50 ppb.38 Studies in children suggest that FENO correlates with severity and with asthma control.42 FENO decreases in a dose-dependent manner with corticosteroid treatment43 and has been shown to increase with deterioration in asthma control.44 The value of additional FENO monitoring in children whose asthma is appropriately managed using guideline-based strategies is unproven,28,45–47 and insurance payment for this test varies by geographic location. Nevertheless, some asthma specialists have adopted the use of FENO as an adjunct ambulatory clinical tool for measuring airway inflammation and serially monitoring asthma control in individual patients with difficult-to-control asthma.

Assessing Asthma Control in Children Younger Than 5 Years

In children younger than 5 years, it is recommended that both symptom control and future risk be monitored.2 The risk domain is assessed by historical review of exacerbations requiring oral steroids. Validated measures to assess asthma control in this age group include the TRACK (0–5 years of age) and the C-ACT (4–11 years of age). Children younger than 5 years are typically unable to perform spirometry; hence, confirmation of the diagnosis of asthma is challenging in this age group. Recurrent wheezing occurs in a large proportion of these children, typically with viral infections. A therapeutic trial of regular controller therapy (for 1–3 months) may often be necessary to evaluate response and maintenance of control. Assessment of risk profiles using tools such as the asthma predictive index (API) may be helpful in predicting the likelihood of recurrent wheezing in school-age children.
One study showed that children with a positive API had a fourfold to 10-fold greater chance of developing asthma at 6 through 13 years of age than those with a negative API, and 95% of children with a negative API remained free of asthma.48 The modified API suggests that the diagnosis of asthma in young children with a history of more than 3 episodes of wheezing is more likely if they meet 1 major or 2 minor criteria.49 Major criteria include a parent with asthma, physician diagnosis of atopic dermatitis, or sensitization to aeroallergens (positive skin or allergen-specific immunoglobulin E test results). Minor criteria include the presence of food allergies or sensitization to milk, egg, and peanut; blood eosinophil counts greater than 4%; or wheezing apart from colds.49 Recent advances in measuring lung function, biomarker profiles, adherence, utilization and outcomes data, and development of validated questionnaires have made ongoing assessment and monitoring of asthma control a reality. Following is a schema of suggested measures that may be used in routine ambulatory monitoring of asthma control in clinical practice. The encounter between patient and health care provider may involve critical and empathetic listening to the patient and accurate elicitation of symptoms as indicators for asthma control, aided by validated asthma control tools such as the C-ACT/ACT. A complete environmental and social history should be obtained to evaluate for triggers.50 Airway obstruction and AHR can be assessed by measuring prebronchodilator and postbronchodilator FEV1. Some specialists may consider evaluation of airway inflammation by using FENO to be useful. Education and training regarding asthma and its management can be provided, taking into consideration the patient’s personal preference and goals while creating an individualized action plan. 
Action strategies can be based on either symptoms or objective criteria, such as by monthly monitoring of the age-specific, validated asthma control instrument, or in individualized circumstances, by daily electronic FEV1 or conventional peak flow monitoring at home. Symptom scores with validated control instruments and FEV1 can be monitored at subsequent visits along with serial health care utilization data to tailor the medication dose to the degree of asthma control. The risk domain is assessed by a history of systemic steroid prescription, emergency department visits, or hospitalizations. In individuals whose FENO was elevated at the initial visit and shows variation in response to therapy, repeat FENO monitoring may be considered. Education regarding asthma triggers, review of inhaler techniques, assessment and reinforcement of adherence, treatment of comorbidities (eg, gastroesophageal reflux, sinusitis, obesity), and encouragement and fortification of the collaborative provider-patient relationship can be provided at each follow-up visit. The need for continued assessment or reassessment by a pediatric allergist or pulmonologist can be considered when faced with challenges in attaining optimal asthma control. Information on appropriate coding for the asthma management tools and services provided can be found in the Asthma Coding Fact Sheet at the following link: https://www.aap.org/asthmacodingfactsheets.

Chitra Dinakar, MD, FAAP
Bradley Chipps, MD, PhD, FAAP

Section on Allergy and Immunology Executive Committee, 2015–2016
Elizabeth C. Matsui, MD, MHS, FAAP, Chair
Stuart L. Abramson, MD, PhD, AE-C, FAAP
Chitra Dinakar, MD, FAAP
Anne-Marie Irani, MD, FAAP
Jennifer S. Kim, MD, FAAP
Todd A. Mahr, MD, FAAP, Immediate Past Chair
Michael Pistiner, MD, FAAP
Julie Wang, MD, FAAP

Former Executive Committee Members
Thomas A. Fleisher, MD, FAAP
Scott H. Sicherer, MD, FAAP
Paul V. Williams, MD, FAAP

Debra L. Burrowes, MHA

Section on Pediatric Pulmonology and Sleep Medicine Executive Committee, 2015–2016
Julie P. Katkin, MD, FAAP, Chair
Kristin N. Van Hook, MD, FAAP
Lee J. Brooks, MD, FAAP
Bonnie B. Hudak, MD, FAAP
Richard M. Kravitz, MD, FAAP
Shruti M. Paranjape, MD, FAAP
Michael S. Schechter, MD, FAAP, Immediate Past Chair
Girish D. Sharma, MD, FAAP
Dennis C. Stokes, MD, FAAP

Laura Laskosz, MPH

FINANCIAL DISCLOSURE: The authors have indicated they do not have a financial relationship relevant to this article to disclose.

FUNDING: No external funding.

POTENTIAL CONFLICT OF INTEREST: The authors have indicated they have no potential conflicts of interest to disclose.

This document is copyrighted and is property of the American Academy of Pediatrics and its Board of Directors. All authors have filed conflict of interest statements with the American Academy of Pediatrics. Any conflicts have been resolved through a process approved by the Board of Directors. The American Academy of Pediatrics has neither solicited nor accepted any commercial involvement in the development of the content of this publication. Clinical reports from the American Academy of Pediatrics benefit from expertise and resources of liaisons and internal (AAP) and external reviewers. However, clinical reports from the American Academy of Pediatrics may not reflect the views of the liaisons or the organizations or government agencies that they represent. The guidance in this report does not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. All clinical reports from the American Academy of Pediatrics automatically expire 5 years after publication unless reaffirmed, revised, or retired at or before that time.
- Global Strategy for Asthma Management and Prevention, Global Initiative for Asthma (GINA) 2015 Update
- Fuhlbrigge AL, Adams RJ, Guilbert TW, et al
- Carlton BG, Lucas DO, Ellis EF, Conboy-Ellis K, Shoheiber O, Stempel DA
- Brand PL, Mäkelä MJ, Szefler SJ, Frischer T, Price D; ERS Task Force Monitoring Asthma in Children
- Juniper EF, Gruffydd-Jones K, Ward S, Svensson K
- Chipps B, Zeiger RS, Murphy K, et al
- Kamps AW, Roorda RJ, Brand PL
- Moeller A, Carlsen KH, Sly PD, et al; ERS Task Force Monitoring Asthma in Children
- Dombkowski KJ, Hassan F, Wasilevich EA, Clark SJ
- American Academy of Pediatrics
- Miller MR, Hankinson J, Brusasco V, et al; ATS/ERS Task Force
- Horak E, Lanigan A, Roberts M, et al
- Crapo RO, Casaburi R, Coates AL, et al
- Dweik RA, Boggs PB, Erzurum SC, et al; American Thoracic Society Committee on Interpretation of Exhaled Nitric Oxide Levels (FENO) for Clinical Applications
- AAAAI/ACAAI Joint Statement of Support of the ATS Clinical Practice Guideline
- Kharitonov SA, Donnelly LE, Montuschi P, Corradi M, Collins JV, Barnes PJ
- Petsky HL, Cates CJ, Li A, Kynaston JA, Turner C, Chang AB
- Petsky HL, Cates CJ, Lasserson TJ, et al
- Matsui E, Abramson S, Sandel M; American Academy of Pediatrics, Section on Allergy and Immunology

Copyright © 2017 by the American Academy of Pediatrics
Last week, I searched that Font of All Wisdom, the internet, for a derivation of the variance of the Poisson probability distribution. The Poisson probability distribution is a useful model for predicting the probability that a specific number of events that occur, in the long run, at rate λ will in fact occur during the time period given in λ. For instance, let’s say that you are waiting at a bus stop for a bus that is known to come at an average rate of λ = 1 visit per hour. But there is no schedule, and you are not guaranteed that the bus will come by exactly once in the next hour. It might come by two or more times, or it might not come by at all. So what is the probability that it comes by k times in the next hour? The probability is given by:

P[#events = k] = λ^k e^(−λ) / k!

where “e” is an irrational number that equals approximately 2.718, and the exclamation point is the factorial function, where k! = k(k−1)(k−2)…(2)(1), all multiplied together. (Note: 0! = 1, by convention.) Wikipedia has an acceptable derivation of this formula, so I will not reproduce it here. Applying this formula to our example, the chance that a bus whose average arrival rate is λ = 1 visit/hour will not come by in the next hour is P[#visits = 0] = 1^0 e^(−1)/0! = e^(−1) = 0.368 = 36.8%. The chance that it will come by exactly once in the next hour would be P[#visits = 1] = 1^1 e^(−1)/1! = 36.8% as well. The chance that the bus will come by exactly twice is P[#visits = 2] = 1^2 e^(−1)/2! = 18.4%. These probabilities continue to diminish for increasing numbers of busses predicted to come by in the next hour. As you might expect, the sum of these probabilities is 100%; you are guaranteed that zero or more busses will visit your bus stop in the next hour! A more useful probability would be: what is the probability that one or more busses will come by in the next hour? Because the probabilities sum to 100%, the math can be done thusly: P[#visits ≥ 1] = 100% − P[#visits = 0] = 100% − 36.8% = 63.2%.
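The bus-stop probabilities above can be reproduced with a few lines of Python (a quick sketch using only the standard library; the function name is my own):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P[N = k] when events arrive at an average rate of lam per period."""
    return lam**k * exp(-lam) / factorial(k)

# Bus-stop example: lam = 1 visit per hour.
print(round(poisson_pmf(0, 1.0), 3))      # 0.368 -> no bus in the next hour
print(round(poisson_pmf(1, 1.0), 3))      # 0.368 -> exactly one bus
print(round(poisson_pmf(2, 1.0), 3))      # 0.184 -> exactly two busses
print(round(1 - poisson_pmf(0, 1.0), 3))  # 0.632 -> at least one bus
```

The complement trick in the last line is the same one used in the text: since the probabilities over all k sum to 1, P[at least one] is just 1 minus P[none].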
But what if you don’t want to wait an hour? What if you want to catch a bus in the next 15 minutes? This can be easily done by subdividing the rate λ. A bus whose average visit rate is once per hour also has an average visit rate of ¼ visit per 15 minutes. Again to our formula:

P[#visits ≥ 1 | λ = ¼] = 100% − P[#visits = 0 | λ = ¼] = 100% − (¼)^0 e^(−¼)/0! = 100% − e^(−0.25) = 100% − 77.9% = 22.1%.

Considerably lower probability for a visit in the next 15 minutes. What if we want to know the average number of busses that will come by in the next hour? Answer: duh! We already told you that the busses come by at rate λ = 1 bus per hour! But let’s derive this from first principles. To start, we must remember the Taylor series expansion for e^x:

e^x = Σ_{n=0 to ∞} x^n/n!

Because the factorial function is undefined for arguments less than zero, the summation is independent of indices less than zero. We also need the definition of the mean, or expected value μ = E(N), of a quantity N whose distribution is defined by P(n). This definition is: μ = E(N) = Σ n·P(n) for all n. We can apply this formula to calculate the mean of the Poisson distribution:

μ = Σ_{n=0 to ∞} n · λ^n e^(−λ)/n! = λ e^(−λ) Σ_{n=1 to ∞} λ^(n−1)/(n−1)! = λ e^(−λ) · e^λ = λ

As expected. But what about the variance? The variance is a measure of how far a typical result will depart from the average result. In our example, for instance, we know that, on average, we can expect one bus to visit the bus stop each hour. But that number could be zero busses, or it could be multiple busses. So how much variation can we expect? The standard deviation σ is the average amount by which we can expect a particular number of busses to vary from the average number of busses. The variance is the square of the standard deviation, or σ². It is defined thus: σ² = E[(N − μ)²] = Σ (n − μ)² P(n) for all n. Applying this formula to the Poisson distribution, using σ² = E(N²) − μ² and E(N²) = E[N(N−1)] + E(N):

σ² = E[N(N−1)] + E(N) − μ² = λ² e^(−λ) Σ_{n=2 to ∞} λ^(n−2)/(n−2)! + λ − λ² = λ² + λ − λ² = λ

So, the variance of a Poisson distribution with average rate λ is . . . λ.

BLEG: I created the post above on my Dell laptop. I composed the equations in MS Word 2003’s MathType, and saved the document as an HTML file.
This created the .gif files containing the equations that you see above. But when I tried to create the webpage using the exact same methodology on my desktop computer, the equations came out looking like this: Relatively ugly. Clearly, there is some setting that is set correctly on my laptop but incorrectly on my desktop, and I don't know what it is. Please help.
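Postscript: the mean and variance results derived above are also easy to check numerically. This sketch truncates the infinite sums at k = 49 (the tail beyond that is negligibly small for λ = 1):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P[N = k] for a Poisson distribution with average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 1.0
ks = range(50)  # truncate the infinite sums; the tail is negligible here

# mean: mu = sum over n of n * P(n)
mean = sum(k * poisson_pmf(k, lam) for k in ks)
# variance: sigma^2 = sum over n of (n - mu)^2 * P(n)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam) for k in ks)

print(round(mean, 6), round(var, 6))  # 1.0 1.0 -- both equal lambda
```

Changing `lam` to any other positive rate gives the same result: both sums converge to λ, confirming the derivation.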
A Brain Changes Over Time: Make Yours Count

The results have revealed that most of the connections in children's brains are formed between regions of the cortex that are physically close to each other. Conversely, in adults' brains, most such paths are created between distant brain regions, which are not functionally linked to each other. The scientists have also determined that kids have a very reduced number of long-distance brain connections, as opposed to adults, where this form of interaction is the standard. _Softpedia

The explanation sounds simple, but the reality is far more complex. The brain is plastic throughout life, but particularly during early childhood and adolescence. The mechanism for brain plasticity is defined genetically, and influenced by the brain's environment in the womb and in infancy. The specifics of micro-plasticity of particular brain regions are strongly influenced by environment in childhood and adolescence. If a child learns a second language, is trained in music, or studies martial arts, for example, the brain organises itself differently. Throughout childhood, myelination of brain pathways occurs from back to front -- from occipital to pre-frontal. The famous "developmental windows" of a child's brain correspond roughly to periods of regional myelination. A person's pre-frontal myelination may not be completed until the age of 25 or 30, suggesting that that person's brain is not fully functional (lacking in judgment and perspective) prior to that age. But in order to acquire the ability to communicate "by long distance", brain centers need more than well-myelinated neural connections. They need to learn the neural language of synchronous oscillations, in order for enough complementary brain centers to be able to interact simultaneously to create rich and textured perceptions and conceptualisations. Closer brain centers tend to communicate via beta wave synchrony. Farther centers tend to use gamma wave synchrony.
The significance of these distinctions is still being worked out. Younger brains can certainly be as "intelligent" as older brains, in terms of IQ scores. But it is unlikely for a young brain to be able to achieve the complexity or efficiency of thought and action that an older brain can display, to say nothing of wisdom and perspective. Part of the difference may well be due to lack of experience / knowledge. Part may well be due to differences in myelination and underlying micro-connectivity. Here is the huge problem we are facing as a society: modern educational and child-rearing methods are permanently handicapping the brains of our young, by missing critical "developmental windows". Large chunks of entire generations have been lost so far due to our societal dysfunction in this regard. And there are no signs that society is waking up -- quite the opposite in fact. A societal wide shortage of competence is likely to be the result. As baby boomers start retiring, critical skills shortages will grow more acute. Besides being fewer in number, subsequent generations have been damaged more thoroughly by the wave of social engineering that hit university schools of education in the late 60s and early 70s. This damage to young minds is not slowing down, but is growing worse. Under the Obama / Pelosi reich -- which is most beholden to teachers' unions and more radical academics -- the brain killing machine will only grow larger and stronger. In order to create a core of competent individuals and communities that will be able to best take advantage of the coming breakthroughs in genetics, neuroscience, and psycho-philosophy, it is important that large numbers of parents opt out of government education and other brain-stunting influences of popular and mainstream culture. As an atheist, I choose to be neither pro-religion nor anti-religion in the coming cultural schisms. There are larger things at stake.
How’s this for mad science: Within the next 15 years, people could elect to have their brains “zapped” to boost creativity in the workplace or classroom. The process -- based on functional MRI studies -- is headed up by Adam Green, director of the Georgetown Laboratory for Relational Cognition and president-elect of the Society for the Neuroscience of Creativity. Green’s team looked at blood flow as a measure of brain cell activity when people were doing creative tasks. The process pointed them to one region of the brain in particular (the frontopolar cortex), so they decided to test whether stimulating the area could make creative thinking easier. “We zap people’s brains in a targeted way based on these fMRI studies,” Green says. The researchers hope to make creative neuroscience more available to the general public down the line. If you don’t have a brain-stimulation tool and are looking to think outside the box, good news: We’ve got research-backed tips for upping your creativity outside the lab. Here’s how. 1. Exercise your creativity like a muscle One surefire way to boost creative thinking: Try. No, really! “Creativity isn’t made out of a magical fairy part of the brain,” Green says. “It’s essentially using all the same tools that go into doing everything else … but applying those tools in creativity-specific ways.” Research shows that when people try to think more creatively, they almost always can -- and those effects are both significant and repeatable. Green points to an “age-old adage” in neuroscience that “cells that fire together, wire together.” The idea is that the more you use your brain to do something, the stronger the connections between the cells involved become. Try implementing this in your everyday routine by dedicating specific time to think creatively -- and reminding yourself to do so before any brainstorming session. 2. Change up your surroundings -- even minimally “The best trick I know isn’t very sexy,” Green says. 
Data support that creativity “nudges” can come from changes as small as a warmer cup of coffee or different colors in the room. Try switching out some of the items on your desk, orienting yourself differently or doing an overhaul of the bulletin board you sit facing. Know that those “nudges” don’t only pertain to your physical surroundings -- they’re also connected to your social setting. Take advantage of opportunities to periodically work in different areas of the office, sit with new colleagues or invite people from different departments to lunch. Although you might not have much control over your work environment, making any possible adjustments could translate to a significant creativity boost. “You want your physical and social surroundings to change,” says Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology. “If it’s the same old stuff on the walls and your desk -- and the same people you’re talking to -- that’s not necessarily good for creativity.” 3. Go out on a limb with what you learn “New ideas come from interconnections among old ideas,” says Epstein, who uses an exercise called “the experts game” to demonstrate this. In it, a few people in a group with extensive knowledge of an obscure topic give five-minute lectures. Then, after learning about topics such as how shoes are constructed or the history of Rolex watches, everyone comes up with at least three ideas for new products or services. “It is really mind-boggling what people will come up with, and that’s based on 15 minutes of instruction they just received,” Epstein says. You can DIY this approach by asking friends or colleagues in different industries about what they do -- or signing up for a course on something completely unfamiliar to you via sites such as Khan Academy, Coursera or Massive Open Online Courses (MOOC). 
There’s a good chance it won’t be immediately apparent how what you’re learning could be useful in the future, but the pieces of knowledge you’re collecting should come together naturally when you’re faced with a certain challenge or brainstorming ideas later on. “The more interesting and diverse the pieces, the more interesting the interconnections,” Epstein says. 4. Pay attention to -- and record -- new ideas that come to you As people age, the number of creative ideas that come to them doesn’t necessarily slow, but they tend to capture fewer of them. When an idea -- or a small component of an idea -- comes to you, start making it a point to preserve it. Jot it down in a smartphone note, write it in a pocket-sized notebook you carry around or sketch it on a napkin. “Capture now, evaluate later,” says Epstein, who says his research has shown over and over again that capturing your new ideas is likely the most valuable aspect of boosting creativity. 5. Challenge yourself in new ways -- especially when it comes to overarching issues in your industry If you’ve ever tried an “escape room” -- a physical adventure game where players complete goals by solving puzzles -- your creativity probably spiked. That’s because challenges act as a catalyst for us to think creatively and come up with simultaneous ideas or solutions. For example, if you turn a knob and find out a door is locked, you begin to automatically brainstorm ideas and solutions -- jiggling the knob, pounding on the door, trying your luck with a bobby pin. You can stimulate yourself similarly at work by setting a time limit for a task or taking on an “ultimate challenge” in your industry, Epstein says. Think about the overarching issues and questions in your field (How do I end world hunger in one week? How can I invent a phone that doesn’t require a charger?) and practice brainstorming open-ended solutions. Copyright © 2018 Entrepreneur Media, Inc. All rights reserved. 
This article originally appeared on Entrepreneur.com. Minor edits have been done by the Entrepreneur.com.ph editors
Description: The Canarian Ivy is an evergreen perennial shrub native to the Canary Islands as well as parts of North Africa. It is a climber, using its aerial rootlets to cling onto surfaces, and is therefore often grown to cover walls and similar structures.
Size: The vine can reach heights of 20 - 40 metres in favourable conditions where sufficient climbing space is available.
Ivy climbing up an electricity pole. Note the fine rootlets stretching ahead of the main plant body.
Conditions: Hedera canariensis thrives best in fertile, moist soil and with plenty of sunlight.
Detail of the leaves.
Propagation: The Canarian Ivy spreads by its seeds, which are eaten by birds and dispersed widely.
Ivy completely covering an otherwise bare pole.
When to plant: If grown domestically, it is recommended to plant ivy in early spring, after the last frosts. Provide the ivy with ample climbing opportunities, such as by planting it on the north/east-facing side of a high wall, or similar.
Electron counting rules, as represented by the Octet Rule, are practical tools not only for comprehending the nature of molecules synthesized in the lab, but also for predicting the properties of molecules that have yet to be isolated. A cluster—a number of element centres grouped close together—is governed by an empirical set of electron counting rules: the polyhedral skeletal electron pair theory (PSEPT), also known as Wade-Mingos' rules. These rules describe a relationship between the geometry of a cluster and the number of electron pairs involved in the skeletal framework of the cluster. While almost all reported carborane clusters are found to be three-dimensional, in accordance with Wade-Mingos' rules, it has been computationally predicted that, by increasing the number of skeletal electron pairs, a two-dimensional carborane molecule would form as the most stable configuration. This 'flattening' of carborane, however, had never been achieved in a laboratory environment. Recently, the research group led by Prof. Rei KINJO at Nanyang Technological University-Singapore developed a 'B4C2R4' carborane by installing two carbon atoms from isonitriles into a pre-formed tetra-atomic boron (B4) core. An X-ray diffraction analysis unambiguously revealed that the skeletal B4C2 framework exhibits an unprecedented planar structure, beyond the three-dimensional cage structure predicted by Wade-Mingos' rules. More interestingly, a closer look at the bonding and electronic structure within the B4C2 framework, aided by computational investigations, confirms the aromatic nature of the B4C2R4 carborane. According to Hückel's (4N + 2) rule, the B4C2R4 carborane, possessing only 4π electrons, should be classified as anti-aromatic.
Nevertheless, this compound does exhibit aromaticity, owing predominantly to pronounced 'local' aromatic character in its sub-units rather than to the 'overall' anti-aromaticity. Thus, splitting the 4π-electron system into two 2π-electron systems may equip the molecule with an aromatic character. In other words, "the (4N + 2) rule may not be applied to interpret the net aromaticity/antiaromaticity of flat carborane clusters involving multiple aromatic systems." The results of this study have just been published in Nature Communications under the title "A flat carborane with multiple aromaticity beyond Wade–Mingos' rules" (https://www.nature.com/articles/s41467-020-17166-9).
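The (4N + 2) counting invoked above is simple modular arithmetic. A minimal Python sketch of the classical rule (an illustration only, not the authors' computational analysis) makes the classification explicit:

```python
def huckel_aromatic(pi_electrons: int) -> bool:
    """Hückel's (4N + 2) rule: a planar, cyclic, conjugated pi system
    is classified aromatic when its pi-electron count equals 4N + 2
    for some non-negative integer N (i.e. 2, 6, 10, ...)."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# Benzene's 6 pi electrons satisfy the rule (N = 1).
assert huckel_aromatic(6)
# The overall 4 pi electrons of the B4C2 framework do not,
# so the molecule is formally anti-aromatic as a whole...
assert not huckel_aromatic(4)
# ...yet each of two 2 pi-electron sub-units does satisfy it (N = 0),
# which is the 'local' aromaticity described in the text.
assert huckel_aromatic(2)
```

This is why the net count misleads here: applied globally the rule predicts anti-aromaticity, but applied to each sub-unit it predicts aromaticity twice over.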
The ribcage is composed of twelve pairs of ribs which articulate with the vertebrae of the spinal column and the sternum (with some exceptions) to create the thoracic cavity. The ribs themselves are flat, curved bones, all of which articulate posteriorly with the vertebrae at costal facets. The upper seven pairs of ribs also articulate anteriorly with the sternum, while the lower five pairs do not. The eighth, ninth, and tenth pairs are attached by costal cartilage to the seventh pair, while the eleventh and twelfth pairs are not attached anteriorly at all. The ribcage provides sturdy support for the thorax, protecting the heart, lungs, and other important internal organs. The arrangement of the ribs, however, allows the expansion of the ribcage, as during breathing. A large number of muscles and ligaments attach to the ribs which, with the flexibility of the ribcage, allow the thorax to be both supple and strong. The true ribs are the upper seven pairs in the ribcage. They are called true ribs because they articulate anteriorly with the sternum, distinguishing them from false ribs, which do not. The true ribs also articulate posteriorly with the spinal vertebrae; the anterior connection to the sternum is cemented by costal cartilage. The lower five pairs of ribs are called false ribs, because they do not directly articulate with the sternum. Instead, the eighth, ninth, and tenth ribs are joined to the seventh rib by cartilaginous tissue. The eleventh and twelfth pairs of ribs (the last two pairs of false ribs) are also called floating ribs, because they do not connect anteriorly to any other rib or the sternum. Costal (rib) cartilage is the connective tissue which attaches the ribs anteriorly (in the front) to the sternum, or to the cartilage of the rib above. Costal cartilage is firm yet flexible, with a smooth, shiny texture. The vertebrae are irregularly shaped bones which stack together to form the spinal column.
The vertebrae are connected together by ligaments and muscles which control the degree of flexibility of the spine. The vertebrae are cushioned from each other by cartilage disks which act as shock absorbers to protect the vertebrae in the spine. The vertebrae may be separate (cervical, thoracic, and lumbar vertebrae), semi-articulated (as in some coccygeal vertebrae), or fused (as in the sacrum and coccyx). The typical vertebra has a body of solid bony material, which supports the weight of the spine, and an arch, which forms the vertebral foramen. It is the adjoining vertebral foramina which create a canal down through the spinal column, housing and protecting the spinal cord. The thoracic vertebrae feature facets to which the ribs attach, called costal facets (because of their relation to the ribs). The sternum is a flat, blade-like bone located at the center of the chest. It serves as the anterior (forward) site of articulation for the ribs via cartilaginous connections, called costal cartilage. The pectoralis major also anchors to the sternum, giving the shoulder joint much of its strength during flexion of the arm. The sternum features two articulations in addition to its costal (rib) articulations. One of these, called the manubriosternal joint, is between the body (middle plate) of the sternum and the broader upper section, called the manubrium. The manubrium of the sternum articulates with the clavicles, and the sternocleidomastoid, sternohyoid, and sternothyroid muscles connect here. The lower articulation is called the xiphisternal joint, and is between the body of the sternum and a small, teardrop-shaped bone called the xiphoid (pronounced "zy'-foid") process. The xiphoid process anchors the rectus abdominis, the transverse thoracic, and the diaphragm muscles, responsible for much of the muscular expansion and contraction of the abdomen.
The xiphoid (pronounced "zy'-foid") process is a teardrop-shaped bone which articulates with the body of the sternum at the xiphisternal joint. This process anchors the rectus abdominis, the transverse thoracic, and the diaphragm muscles, responsible for much of the muscular expansion and contraction of the abdomen. The manubrium of the sternum is the broad, disc-like, upper part of the sternum. It articulates with the body of the sternum at the manubriosternal joint. The manubrium of the sternum articulates with the clavicles, and the sternocleidomastoid, sternohyoid, and sternothyroid muscles connect here. The top of the manubrium features the small jugular notch, which admits passage of the jugular vein along the bone. The jugular notch is located at the top of the manubrium of the sternum. This notch admits passage of the jugular vein along the bone. The clavicle, or collarbone, is a long, slightly curving bone which forms the frontal (anterior) part of each shoulder (pectoral) girdle. Located just above the first rib on each side of the ribcage, the clavicles attach to the sternum in the middle of the chest, and laterally to the acromion of the scapula (forming the acromioclavicular joint). The scapula (shoulder blade) is a roughly triangular bone which, with the clavicle, forms the pectoral, or shoulder, girdle. The humerus, or upper arm bone, articulates with the scapula to form the shoulder joint. This articulation takes place at the glenoid cavity, located at the upper, lateral angle of the scapula. The posterior of the scapula features a laterally running spine, which separates the posterior surface into two unequal areas. This spine continues laterally and projects as the acromion (which articulates with the lateral end of the clavicle); the coracoid process projects from the upper border of the scapula. Both of these projections serve as sites of attachment for connective tissue, and the spine and acromion anchor the trapezius and deltoids, specifically.
These connections give the pectoral girdle a high degree of both flexibility and strength.
Observations by a teacher: - If I really believe in a concept and want my students to get it, I'll repeat it a lot, sometimes in different words. - I do not lean on concepts that my students already understand. I tend to teach to the gaps in their understanding. - Luke was more than a simple stenographer: He grouped things together for a purpose, so we need to deal with parables in their context. They are not just little one-shot stories. - Jews were people who really did pray a lot—or at least they were used to being around praying people. The concept of prayer was certainly not new to them. First parable: The persistent widow and the unjust judge - Luke lets the cat out of the bag with this one. There's no real question why the parable was told, so we should ask what shortcoming the parable is addressing. - The widow had a legitimate claim. The judge never questioned whether her lawsuit was valid. - Especially in the culture of the time, the widow had no power or status. She couldn't force the judge, and presumably didn't have the money to bribe him. - Her persistence was what won. This is an example of the "how much more?" argument which is common in Scripture. If a terrible person (the judge isn't even honest) can be moved this way, how much more will your Heavenly Father give you what you ask if you persist? Second parable: The pharisee and the tax collector We dealt with this before, so a main question: Why is this here? We just had a parable about a lowly person with no status persuading an unjust judge; now we get a parable in which a notable sinner who is very aware of his sins is justified rather than the member of the religious elite; the next item following is the comment by Jesus that the kingdom of heaven belongs to people who receive the kingdom like a little child. The common theme here is that low-status persons who have no ability to force their way in are somehow the Father's favorites. 
Finally, the Lord's Prayer (Luke 11:1-13) We often lift this out and ignore its context. - Again, the disciples are asking Jesus to teach them to pray. Didn't they already know the liturgical Jewish prayers? They must have seen something in Jesus's praying (and John's) that didn't fit their expectations. - The basic instruction is followed by a short parable on the theme of persistence. Even though the request was unreasonable and inconvenient, the neighbor would fulfill it because of the persistence of the person doing the asking. - And again, the "How much more?" argument. If an annoyed neighbor will give you what you need just because you won't stop knocking on the door, how much more will your Heavenly Father give you what you ask for? And if you, evil as you are, will give your children good gifts, how much more will your Heavenly Father?
15% of WUI have a disability that significantly reduces their quality of life. What makes matters worse is the lack of accessibility and technology that caters to the needs of disabled people. Access for the differently-abled is a matter of justice and equality. Environmental limitations that make it difficult for differently-abled people to access buildings, transportation, education, housing, and employment opportunities only contribute to the discrimination and marginalization faced by differently-abled people every day. At DIDEPAS, we believe it is essential to increase awareness and recognize the need for universal designs that can make the lives of differently-abled people easier. We work with companies that are creating breakthrough technology and useful inventions that can change the way they connect with the world. Our vision is to create a world where people with emotional, physical, and cognitive disabilities don't have to depend on others and can transcend the limitations their disabilities place on them. We offer access to the latest developments for those struggling with mobility, hearing, speech, and sight. We are also working on increasing awareness of the need for universal designs such as wheelchair ramps for buildings and public transport, wheelchair-accessible elevators, etc. The purpose of universal design is to prevent stigmatization of differently-abled persons, ensure equivalent means of use for everyone, and accommodate individuals with a wide range of needs, abilities, and preferences. We also provide financial assistance for differently-abled adults, children, and families to help them get the necessary tools and equipment for an enhanced quality of life, such as motorized wheelchairs, cochlear implants, prosthetics, and more.
For more information regarding universal designs and tools that increase accessibility for the differently-abled, get in touch with our team and become a part of our campaign for increasing the inclusion of differently-abled people in both social and workplace environments!
Research shows what you say about others says a lot about you
August 2nd, 2010 in Medicine & Health / Psychology & Psychiatry

How positively you see others is linked to how happy, kind-hearted and emotionally stable you are, according to new research by a Wake Forest University psychology professor. "Your perceptions of others reveal so much about your own personality," says Dustin Wood, assistant professor of psychology at Wake Forest and lead author of the study, about his findings. By asking study participants to each rate positive and negative characteristics of just three people, the researchers were able to find out important information about the rater's well-being, mental health, social attitudes and how they were judged by others. The study appears in the July issue of the Journal of Personality and Social Psychology. Peter Harms at the University of Nebraska and Simine Vazire of Washington University in St. Louis co-authored the study. The researchers found a person's tendency to describe others in positive terms is an important indicator of the positivity of the person's own personality traits. They discovered particularly strong associations between positively judging others and how enthusiastic, happy, kind-hearted, courteous, emotionally stable and capable the person describes himself or herself and is described by others. "Seeing others positively reveals our own positive traits," Wood says. The study also found that how positively you see other people shows how satisfied you are with your own life, and how much you are liked by others. In contrast, negative perceptions of others are linked to higher levels of narcissism and antisocial behavior. "A huge suite of negative personality traits are associated with viewing others negatively," Wood says. "The simple tendency to see people negatively indicates a greater likelihood of depression and various personality disorders."
Given that negative perceptions of others may underlie several personality disorders, finding techniques to get people to see others more positively could promote the cessation of behavior patterns associated with several different personality disorders simultaneously, Wood says. This research suggests that when you ask someone to rate the personality of a particular coworker or acquaintance, you may learn as much about the rater providing the personality description as the person they are describing. The level of negativity the rater uses in describing the other person may indeed indicate that the other person has negative characteristics, but may also be a tip off that the rater is unhappy, disagreeable, neurotic—or has other negative personality traits. Raters in the study consisted of friends rating one another, college freshmen rating others they knew in their dormitories, and fraternity and sorority members rating others in their organization. In all samples, participants rated real people and the positivity of their ratings were found to be associated with the participant's own characteristics. By evaluating the raters and how they evaluated their peers again one year later, Wood found compelling evidence that how positively we tend to perceive others in our social environment is a highly stable trait that does not change substantially over time. Provided by Wake Forest University "Research shows what you say about others says a lot about you." August 2nd, 2010. http://phys.org/news199982319.html
The Blue Lake Rancheria (BLR) renewable energy microgrid received full permission to connect to the Pacific Gas & Electric grid on July 28, 2017. Designed and implemented by a team led by the Schatz Energy Research Center at Humboldt State University, this new microgrid powers critical infrastructure for the BLR tribal community and the Humboldt County region. A microgrid is an independent power generation and management system which can operate either connected to (grid-parallel) or disconnected from (islanded) the electric power grid. In the event of a power outage, a microgrid enters islanded mode and balances all power generation and electrical loads independently of the utility. The BLR microgrid breaks new ground in its seamless transition between grid-paralleled and grid-islanded states and by demonstrating stable islanded operation with a high percentage of renewable energy. This project heralds the first deployment of the Siemens Spectrum 7-based microgrid management system (MGMS) and the first multi-inverter Tesla battery energy storage system (BESS) utilized in a microgrid application. The MGMS and the BESS were integrated using foundational relay control programming developed at the Schatz Center. At 420 kW (AC), the Rancheria's PV array is also the largest installed in Humboldt County. The BLR microgrid has a total of 1.92 MW of generation capacity, including the PV array, a 500 kW / 950 kWh lithium-ion Tesla battery, and a legacy 1.0 MW backup diesel generator. The microgrid powers numerous building and facility loads, including heating, ventilation and cooling; lighting; water and wastewater systems; communications; food production and storage; and transportation. The BLR green commuter program and electric vehicle infrastructure for the tribal government fleet are supported by the microgrid. The BLR campus has also been certified to serve as an American Red Cross emergency shelter.
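The stated 1.92 MW total is simply the sum of the three nameplate capacities quoted above; a quick arithmetic check (values taken from the article):

```python
# Nameplate generation capacities quoted in the article, in MW
pv_mw = 0.420       # 420 kW PV array
battery_mw = 0.500  # 500 kW Tesla battery (power rating; it stores 950 kWh)
diesel_mw = 1.000   # legacy 1.0 MW backup diesel generator

total_mw = pv_mw + battery_mw + diesel_mw
# Matches the stated 1.92 MW total generation capacity
assert abs(total_mw - 1.92) < 1e-9
```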
The microgrid can maintain stable electricity for the shelter during extreme natural events such as an earthquake, tsunami, flood or wildfire. During an extended grid outage, the Rancheria can designate and shed non-critical energy loads as needed. By coupling renewable generation with battery storage, the BLR microgrid achieves significant reductions in both utility cost and greenhouse gas emissions. The microgrid is now saving the Blue Lake Rancheria $250,000 annually and has allowed the Rancheria to increase tribal employment by 10% with new clean energy jobs. The Blue Lake Rancheria microgrid was developed through funding from the California Energy Commission’s EPIC program. Major partners on this project included Pacific Gas & Electric, Siemens, Tesla Energy, Idaho National Laboratory, GHD Inc., Colburn Electric, REC Solar, and Kernen Construction. • For more information about the Blue Lake Rancheria microgrid and upcoming projects at the Schatz Energy Research Center, call (707) 822-4345 or email [email protected]. • For more information about Blue Lake Rancheria’s sustainability and green energy initiatives, please email [email protected].
threshold of illuminance visual threshold, <in point vision> smallest illuminance (point brilliance), produced at the eye of an observer by a light source seen in point vision, which renders the source perceptible against a background of given luminance, where the illuminance is considered on a surface element that is normal to the incident rays at the eye Note 1 to entry: For visual signalling, the light source has to be rendered recognizable, and hence a higher threshold of illuminance is to be expected. Note 2 to entry: This entry was numbered 845-11-26 in IEC 60050-845:1987. Note 3 to entry: This entry was numbered 17-1313 in CIE S 017:2011.
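The entry defines the threshold in terms of the illuminance a point source produces at the eye. For an idealized point source, that illuminance follows the standard photometric inverse-square law, E = I / d^2 (a general relation, not part of the CIE entry itself); a short sketch:

```python
def point_illuminance(intensity_cd: float, distance_m: float) -> float:
    """Inverse-square law: illuminance (lux) on a surface element normal
    to the incident rays equals the source's luminous intensity (candela)
    divided by the square of the distance (metres)."""
    return intensity_cd / distance_m ** 2

# A 100 cd source seen from 10 m produces 1 lx at the eye; whether this
# exceeds the threshold of illuminance depends on the background luminance.
assert point_illuminance(100.0, 10.0) == 1.0
```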
This link recently saved by vancestevens on August 13, 2009 1. Identify the International Phonetic Alphabet symbols and the sounds they represent. 2. Get familiar with some stress patterns in the English language. 3. Get acquainted with some strategies to practice rhythm and intonation in conversations so you can practice on your own. 4. Identify the nature of Thought Groups in English.
U.S. History--Meet the People Wife of John Adams, fought for women's rights and voice as early as the Revolution U.S. statesman, 16th president. Led Union to victory in Civil War. Assassinated. 7th president of the US; successfully defended New Orleans from the British in 1815; expanded the power of the presidency; Indian Removal Act American patriot, writer, printer, and inventor. During the Revolutionary War he persuaded the French to help the colonists. Colonial man of African descent who was killed in the Boston Massacre invented the mechanical reaper invented the cotton gin and interchangeable parts Elizabeth Cady Stanton United States suffragist and feminist; co-founded the 1848 Women's Rights Convention held in Seneca Falls, New York United States abolitionist who escaped from slavery and became an influential writer and lecturer in the North Military commander of the American Revolution. He was the first elected president of the United States (1789-1797). Former slave who helped slaves escape on the Underground Railroad James K. Polk President of the United States who had territorial aspirations, leading to conflict with Mexico, resulting in a war. President during the War of 1812; Father of the US Constitution 5th President of the U.S. (1817-1825); acquired Florida from Spain; declared the Monroe Doctrine to keep foreign powers out. founder of Georgia as a colony for debtors Massachusetts patriot; 2nd President invented the steel plow American revolutionary patriot who was president of the Continental Congress Chief Justice of the Supreme Court; established judicial review John Paul Jones American naval commander in the American Revolution (1747-1792); said "I have not yet begun to fight." leader at Jamestown, Virginia; "If you don't work, you don't eat."
King George III King of England during the American Revolution Marquis de Lafayette French soldier who served under George Washington in the American Revolution Virginian patriot; said "Give me liberty or give me death." daughter of Powhatan, acted as an intermediary between settlers and Indians inventor of the steam boat that could sail against current and wind Massachusetts patriot; member of Sons of Liberty; leader of Boston Tea Party Samuel F.B. Morse inventor who patented the telegraph brought over from Britain the idea of a textile factory and one building for all processes former slave who became an abolitionist and women's rights activist; "Ain't I a Woman?" speech author of the Declaration of Independence; President at the time of the Louisiana Purchase author of the pamphlet "Common Sense" The founder of the Quaker colony, Pennsylvania William Tecumseh Sherman Union General who destroyed South during "march to the sea" from Atlanta to Savannah, example of total war