US researchers have reported that the saline content of the ocean surface provides a far more reliable indicator than sea surface temperature of whether heavy summer rains will fall in the US Midwest. Seasonal weather in the Midwest is atmospherically connected to the Pacific and Atlantic Oceans, with wind patterns delivering heat and moisture to the region at different times of the year. The report states that scientists have long relied on sea-surface temperatures to predict how those wind patterns will behave and what the weather will be thousands of kilometers away from the ocean. Sea surface temperature, however, is highly variable, especially as oceans warm, so rainfall predictions based on temperature can be hit-or-miss.

Adding data about surface salinity improves predictions for overall seasonal rainfall by telling meteorologists how much water has been supplied to the atmosphere. When water evaporates from the ocean, it leaves its salt behind, so the surface ocean gets a little bit saltier. That evaporation is driven by large-scale atmospheric patterns that are connected to weather over the continental USA, so saltiness works as an indicator for the amount of moisture carried by the atmosphere and for patterns in where it will rain out.

"For these seasonal weather patterns, we look at the oceans because they store far more heat and water than the atmosphere," and therefore drive large-scale atmospheric patterns, explained Raymond Schmitt, oceanographer at the Woods Hole Oceanographic Institution and study co-author. Remote sensing and floating sensors provide frequently updated salinity data for vast swaths of the oceans, allowing Schmitt and colleagues to draw connections between salinity and weather patterns.

Laifang Li, a climate scientist at Pennsylvania State University and lead author, said, "It turns out for extreme precipitation years, patterns of sea surface temperature are just really diverse, with different patterns every year. Salinity is more reliable."

The new study builds on previous work on salinity-based predictions Schmitt has conducted with Li. Schmitt is already working with agricultural suppliers to use the salinity-based predictions, which he says they have found helpful for advising farmers and an improvement over other predictors.

Sensing summer rains

The study specifically improves predictions of timing and magnitude for summer extreme rainfall events in the Midwest, which are more difficult to predict accurately than overall rainfall for a season. Li used Midwest rainfall data from 1948 to 2019 and compared it to sea surface temperature and salinity records in different parts of the Pacific and Atlantic oceans across the same time period. The team found that by incorporating springtime sea surface salinity in the tropical Pacific and subtropical North Atlantic into predictive models, their predictions for summertime extreme rain events in the Midwest were 92% more accurate than predictions based on sea surface temperature alone. In practical terms, their models using salinity captured twice as much variability in the amount of extreme rain as models using sea surface temperature.

The study, Skillful Long-lead Prediction of Summertime Heavy Rainfall in the US Midwest from Sea Surface Salinity, is published in AGU's Geophysical Research Letters.
Introduction to Signposting Language

Have you ever felt lost while reading a long article or listening to a lengthy presentation? Like you're wandering aimlessly through a maze of words without any sign of direction? This is exactly what signposting language aims to resolve.

Definition of Signposting Language

Signposting language is a collection of phrases and words used in speech and writing to guide the listener or reader through the content. These linguistic markers serve as "signposts," helping to signal the structure of the discussion, highlight important points, connect ideas, and provide cues about what is coming next. Just like the way physical signposts guide a traveler along their journey by indicating directions and distances, signposting language assists the audience in navigating the journey of an argument, a story, or any piece of content. These can include phrases that introduce a new idea ("firstly," "to begin with"), signal a contrast ("however," "on the other hand"), or indicate a conclusion ("finally," "in conclusion"). By clearly signalling the structure of your content and the relationships between different parts, signposting language helps to enhance the clarity and coherence of your message, making it easier for your audience to follow along and understand your points.

Types of Signposting Language

Signposting language is a specific type of language that writers and speakers use to guide their audience through their content. Just like road signs guide you on a journey, signposts in language guide the reader or listener through the content, making the narrative or argument easier to follow.

Importance of Signposting Language in Communication

Signposting language isn't just a fancy linguistic tool; it's a vital component of effective communication. Here's why: signposts enhance clarity by helping to highlight key points and ideas, making the content more comprehensible and coherent for the reader or listener.

Guiding the Audience

Signposts also guide the audience, allowing them to understand the flow of the content and how various points are connected.

Examples of Signposting Language

- Temporal signposts: words or phrases that indicate time or sequence, such as "firstly," "then," and "finally."
- Emphatic signposts: phrases that emphasize the importance of a point, such as "most importantly," "significantly," or "notably."
- Comparative signposts: phrases that compare or contrast ideas, such as "on the other hand," "similarly," or "in contrast."

How to Use Signposting Language Effectively

Consistency is key. Stick to a particular set of signposting phrases throughout your content to avoid confusion.

Keep It Simple

The goal of signposting is to make things clearer, not more complicated. So, use simple and familiar phrases that your audience can easily understand.

Final Thoughts on Signposting Language

Signposting language, when used correctly, can dramatically improve the clarity and flow of your content, making it much easier for your audience to follow and understand your points. Remember to be consistent and keep it simple!

Frequently Asked Questions

What is signposting language?
Signposting language refers to words or phrases that guide the reader or listener through content, making it easier to follow.

Why is signposting language important?
It enhances clarity and guides the audience, making the content more comprehensible and coherent.

What are some examples of signposting language?
Examples include temporal signposts ("firstly," "then," "finally"), emphatic signposts ("most importantly," "significantly"), and comparative signposts ("on the other hand," "similarly").

How can I use signposting language effectively?
The key to effective signposting is consistency and simplicity. Stick to a particular set of signposting phrases and use simple, familiar words.

Can signposting language be used in both written and spoken content?
Yes, signposting language is beneficial in both written and spoken communication.
What is ROS orientation?
The way that ROS defines orientation is with a mathematical concept called a quaternion. They're a good way to represent orientation as they're less ambiguous than roll, pitch, and yaw. But they have the drawback of being a little difficult to understand.

What is quaternion XYZW?
A quaternion is a set of 4 numbers, [x y z w], which represents rotations the following way:
// RotationAngle is in radians
x = RotationAxis.x * sin(RotationAngle / 2)
y = RotationAxis.y * sin(RotationAngle / 2)
z = RotationAxis.z * sin(RotationAngle / 2)
w = cos(RotationAngle / 2)

How do you rotate a quaternion?
For rotation quaternions, the inverse equals the conjugate. So for rotation quaternions, q⁻¹ = q* = (q0, −q1, −q2, −q3). Inverting or conjugating a rotation quaternion has the effect of reversing the axis of rotation, which modifies it to rotate in the opposite direction from the original.

What does a quaternion mean?
A quaternion represents two things. It has an x, y, and z component, which represents the axis about which a rotation will occur. It also has a w component, which represents the amount of rotation which will occur about this axis. In short, a vector and a float.

How do you get yaw from a quaternion?
Given a quaternion q, you can calculate roll, pitch and yaw like this:
var yaw = atan2(2.0*(q.y*q.z + q.w*q.x), q.w*q.w - q.x*q.x - q.y*q.y + q.z*q.z);
var pitch = asin(-2.0*(q.x*q.z - q.w*q.y));
var roll = atan2(2.0*(q.x*q.y + q.w*q.z), q.w*q.w + q.x*q.x - q.y*q.y - q.z*q.z);

How much is a quaternion of soldiers?
A group or set of four persons or things.

Why are quaternions used?
Quaternions are vital for the control systems that guide aircraft and rockets. Instead of representing a change of orientation by three separate rotations, quaternions use just one rotation. This saves time and storage and also solves the problem of gimbal lock.

How do you rotate a point with quaternions?
Rotate a point using a quaternion vector: for convenient visualization, define the point on the x-y plane. Create a quaternion vector specifying two separate rotations, one to rotate the point 45 degrees and another to rotate the point -90 degrees about the z-axis. Use rotatepoint to perform the rotation. Plot the rotated points.

Are quaternions real?
In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers, and therefore also a domain. In fact, it was the first noncommutative division algebra to be discovered.

Why do we use quaternions for rotation?
Quaternions are very efficient for analyzing situations where rotations in R3 are involved. A quaternion is a 4-tuple, which is a more concise representation than a rotation matrix. Its geometric meaning is also more obvious, as the rotation axis and angle can be trivially recovered.
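As a concrete illustration of the two conversions above, here is a minimal, self-contained Python sketch (it does not use any ROS library, and the function names are illustrative only). It builds a quaternion from an axis and an angle, then recovers Euler angles using the formulas quoted in the yaw answer. Note that those formulas follow one particular naming convention; ROS itself conventionally treats yaw as rotation about the z-axis, so in a real project you would normally call a maintained conversion routine rather than hand-rolled formulas.

```python
import math

def quaternion_from_axis_angle(axis, angle):
    """Build a unit quaternion (x, y, z, w) from a rotation axis and an angle in radians."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / norm, ay / norm, az / norm      # normalise the axis first
    s = math.sin(angle / 2.0)
    return (ax * s, ay * s, az * s, math.cos(angle / 2.0))

def euler_from_quaternion(q):
    """Recover (roll, pitch, yaw) using the formulas quoted above (one convention of several)."""
    x, y, z, w = q
    yaw = math.atan2(2.0 * (y * z + w * x), w * w - x * x - y * y + z * z)
    pitch = math.asin(max(-1.0, min(1.0, -2.0 * (x * z - w * y))))  # clamp against rounding error
    roll = math.atan2(2.0 * (x * y + w * z), w * w + x * x - y * y - z * z)
    return roll, pitch, yaw

# Example: a 90-degree rotation about the z-axis
q = quaternion_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)
print(q)                         # -> (0.0, 0.0, 0.7071..., 0.7071...)
print(euler_from_quaternion(q))  # with this convention the 90-degree turn appears in the 'roll' slot
```

The last line illustrates the convention caveat: the same quaternion can be read back as "roll" or "yaw" depending on which axis each name is attached to, which is why sticking to one library's convention matters in practice.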
Kilauea is one of the most active volcanoes on the Island of Hawaii. The shield volcano is located on the southeastern flank of Mauna Loa and is the main attraction of Hawaii Volcanoes National Park. Kilauea is one of the five volcanoes that form the Big Island of Hawai'i. The volcano rises 1,247 meters above sea level and is estimated to be 210,000-280,000 years old. It is among the world's most active volcanoes and may even rank as the most active volcano. The volcano has erupted several times, with the most recent eruption beginning on September 29, 2021, and still ongoing.

Kilauea is the Hawaiian Islands' southernmost volcano and was, for years, considered part of the neighboring Mauna Loa because it lacked topographical prominence. Also, its eruption activities were similar to those of its neighbor. However, it is a separate volcano and the Hawai'i hotspot's second-youngest product. The word "Kilauea" was initially used by the Hawaiians to refer to the summit caldera. However, the name now refers to the entire volcano.

Kilauea is an elongated, shield-like volcano with a large summit caldera and two rift zones. The rift zone on the east is the longest, extending for about 125 kilometers. The west rift zone is only 35 kilometers long. The summit caldera is nearly 5 kilometers long, 3.2 kilometers wide, and has an area of about 10 square kilometers. Kilauea has a topographic prominence of 15 meters, and its north and west slopes merge with those of Mauna Loa volcano.

Geology Of Kilauea

Kilauea, just like the other Hawaiian volcanoes, was formed when the Pacific Plate moved over the Hawai'i hotspot, a process that also formed the 6,000-km-long Hawaiian-Emperor seamount chain. The volcano is currently the undersea mountain range's eruptive center and its second-youngest volcano (only older than Loihi Seamount). Kilauea is one of the five volcanoes constituting Hawai'i Island. Initially, Kilauea was a submarine volcano; it progressively erupted and emerged from the sea approximately 50,000-100,000 years ago. However, the volcano is 300,000-600,000 years old and has erupted frequently in the last 300 years. Although the volcano rises 1,247 meters above sea level, its eruptive activity will likely make it taller. The southern flank has an active fault known as the Hilina fault system that slips vertically at about 2-20 millimeters per year.

Eruptive History Of Kilauea

Kilauea has erupted repeatedly over the last 200-300 years, with the oldest recovered lava flow dating back at least 275,000 years. The recovered lava exhibits eruptive episodes characteristic of a seamount that was still submerged in the ocean. The rock samples, drilled from Kilauea, suggest that the volcano may have emerged from the sea about 50,000 years ago. Kilauea's first recorded eruptions date to the 19th century, with the first well-recorded eruption occurring in 1823. However, oral records obtained from the Native Hawaiians suggest the volcano may have erupted in 1790, killing a group of Keoua Kuahu'ula's warriors. The volcano has erupted 61 times since 1823, making it one of the world's most active volcanoes. However, the lava volume and the eruptions' origin and length have varied with each event. Some eruptions lasted a few days, while others lasted for years, with lava flowing from different sites. More than half of the eruptions have occurred near or at the summit caldera.
Kilauea's longest eruption began on January 3, 1983, and lasted until September 2018, making it the Earth's 12th longest volcanic eruption since 1750. Although the volcano began erupting from a vent on the east rift zone, the activity shifted several times during the eruption period. By December 31, 2016, the east rift zone had produced up to 4.4 cubic kilometers of lava that covered 144 square kilometers. The 2018 eruptions caused earthquakes and destroyed over 87 houses in Leilani Estates and the surrounding areas. The most recent eruption began on September 29, 2021.

Kilauea In Hawaiian Mythology

Kilauea and the other four Hawaiian volcanoes are considered sacred mountains by the Native Hawaiians. In Hawaiian mythology, the deity Pele (goddess of volcanoes) lives in Kilauea. It is also on this volcano that Pele and Kamapua'a (a rain god) fought each other. The Halemaumau vent on the summit caldera derives its name from the conflict between the two deities. Kamapua'a, realizing that the lava spout produced by Pele could harm him, covered the vent with amaumau fern, hence the name "Halemaumau." Eventually, other gods separated the two deities since their conflict was becoming a threat to the others.
13.8 billion years isn’t close to enough time, but if we wait long enough, even our Sun will become one. The Big Bang happened approximately 13.8 billion years ago, and it might have only taken 50–100 million years to form the very first stars. Ever since then, the Universe has been flooded with starlight. When enough matter — mostly hydrogen and helium gas — gravitates together into a single, compact object, nuclear fusion must take place inside the core, giving rise to a true star. But as time goes on and fusion continues, eventually that star will run out of fuel. Sometimes, the star is massive enough that additional fusion reactions will take place, but at some point, it all must stop. Even when a star finally dies, however, their remnants will continue to shine. In fact, except for black holes, every remnant ever created still shines today. Here’s the story of how long we’ll need to wait for the first star to truly go dark. It all begins from clouds of gas. When a cloud of molecular gas collapses under its own gravity, there are always a few regions that start off just a little bit denser than others. Every location with matter in it does its best to attract more and more matter towards it, but these overdense regions attract matter more efficiently than all the others. Because gravitational collapse is a runaway process, the more matter you attract to your vicinity, the faster additional matter will flow inward. While it can take millions to tens of millions of years for a molecular cloud to go from a large, diffuse state to a relatively collapsed one, the process of going from a collapsed state of dense gas to a new cluster of stars — where the densest regions ignite fusion in their cores — takes only a few hundred thousand years. Stars come in a huge variety of colors, brightnesses and masses, and a star’s life cycle and fate are determined from the moment of the star’s birth. When you create a new cluster of stars, the easiest ones to notice are the brightest ones, which also happen to be the most massive. These are the brightest, bluest, hottest stars in existence, with up to hundreds of times the mass of our Sun and with millions of times the luminosity. But despite the fact that the brightest ones are the stars that appear the most spectacular, these are also the rarest stars, making up far less than 1% of all the known, total stars. They are also the shortest-lived stars, as they burn through all the nuclear fuel (in all the various stages) in their cores in as little as 1–2 million years. When these stars, the brightest and most massive ones of all, run out of fuel, they die in a spectacular type II supernova explosion. When this occurs, the inner core implodes, collapsing all the way down to a neutron star (for the low-mass cores) or even to a black hole (for the high-mass cores), while expelling the outer layers back into the interstellar medium. Once there, these enriched gases will contribute to future generations of stars, providing them with the heavy elements necessary to create rocky planets, organic molecules, and in rare, wonderful cases, life. It is estimated that at least six prior generations of stars contributed to the molecular gas cloud that eventually gave rise to our Sun and Solar System. If you form a black hole from the collapse of a supermassive star, you don’t have to wait very long for it to go dark. In fact, by definition, black holes go almost perfectly “black” immediately. 
Once the core collapses sufficiently to form an event horizon, everything inside collapses down to a singularity in a fraction of a second. Any remnant heat, light, temperature, or energy in any form in the core simply gets added to the mass of the singularity. No light will ever emanate from it again, except in the form of Hawking radiation, which is emitted when the black hole decays, and in the accretion disk surrounding the black hole, which is constantly fed and refueled from the surrounding matter. But not every massive star forms a black hole, and the ones that form neutron stars tell a vastly different story. A neutron star takes all the energy in a star’s core and collapses incredibly rapidly. When you take anything and compress it quickly, you cause the temperature within it to rise: this is how a piston works in a diesel engine. Well, collapsing from a stellar core all the way down to a neutron star is maybe the ultimate example of rapid compression. In the span of seconds-to-minutes, a core of iron, nickel, cobalt, silicon and sulfur many hundreds-of-thousands of miles (kilometers) in diameter has collapsed down to a ball just around 10 miles (16 km) in size or smaller. Its density has increased by around a factor of a quadrillion (10¹⁵), and its temperature has grown tremendously: to some 10¹² K in the core and all the way up to around 10⁶ K at the surface. And herein lies the problem. You have all this energy stored within a collapsed star like this, and its surface is so tremendously hot that it not only glows bluish-white in the visible portion of the spectrum, but most of the energy isn’t visible or even ultraviolet: it’s X-ray energy! There is an insanely large amount of energy stored within this object, but the only way it can release it out into the Universe is through its surface, and its surface area is very small. The big question, of course, is how long will it take a neutron star to cool? The answer depends on a piece of physics that practically isn’t well-understood for neutron stars: neutrino cooling! You see, while photons (radiation) are soundly trapped by the normal, baryonic matter, neutrinos, when generated, can pass right through the entire neutron star unimpeded. On the fast end, neutron stars might cool down, out of the visible portion of the spectrum, after as little as 10¹⁶ years, or “only” a million times the age of the Universe. But if things are slower, it might take 10²⁰-to-10²² years, which means you’ll be waiting for some time. But other stars will go dark much more quickly. You see, the vast majority of stars — the other 99+% — don’t go supernova, but rather, at the end of their lives, contract (slowly) down into a white dwarf star. The “slow” timescale is only slow compared to a supernova: it takes tens-to-hundreds of thousands of years rather than mere seconds-to-minutes, but that’s still fast enough to trap almost all the heat from the star’s core inside. The big difference is that instead of trapping it inside of a sphere with a diameter of only 10 miles or so, the heat is trapped in an object “only” about the size of Earth, or around a thousand times larger than a neutron star. This means that while the temperatures of these white dwarfs can be very high — over 20,000 K, or more than three times hotter than our Sun — they cool down much faster than neutron stars. Neutrino escape is negligible in white dwarfs, meaning that radiation through the surface is the only effect that matters. 
When we calculate how quickly heat can escape by radiating away, it leads to a cooling timescale for a white dwarf (like the kind the Sun will produce) of around 10¹⁴-to-10¹⁵ years. And that will get your stellar remnant all the way down to just a few degrees above absolute zero! This means that after around 10 trillion years, or "only" around 1,000 times the present age of the Universe, the surface of a white dwarf will have dropped in temperature so that it's out of the visible light regime. When this much time has passed, the Universe will possess a brand new type of object: a black dwarf star. I'm sorry to disappoint you, but there aren't any black dwarfs around today. The Universe is simply far too young for it. In fact, the coolest white dwarfs have, to the best of our estimates, lost less than 0.2% of their total heat since the very first ones were created in this Universe. For a white dwarf created at 20,000 K, that means its temperature is still at least 19,960 K, telling us we've got a terribly long way to go, if we're waiting for a true dark star. We currently conceive of our Universe as littered with stars, which cluster together into galaxies, which are separated by vast distances. But by the time the first black dwarf comes to be, our Local Group will have merged into a single galaxy (Milkdromeda), most of the stars that will ever live will have long since burned out, with the surviving ones being exclusively the lowest-mass, reddest and dimmest stars of all. And beyond that? Only darkness, as dark energy will have long since pushed away all the other galaxies, making them unreachable and practically unmeasurable by any physical means. And yet, amidst it all, a new type of object will come to be for the very first time. Even though we'll never see or experience one, we know enough of nature to know not only that they'll exist, but how and when they'll come to be. And that, in itself — the ability to predict the far-distant future that has not yet come to pass — is one of the most amazing parts of science of all! Ethan Siegel is the author of Beyond the Galaxy and Treknology. You can pre-order his third book, currently in development: the Encyclopaedia Cosmologica.
Anarchopedia:en:point of view

Point of view, often abbreviated POV, is a subjective stance or perspective on a given topic. In linguistics, it is the 'position', in some sense, of the 'subject' of a sentence. For instance, to say "the dog is green" is to say that someone has observed something, identified it as "the dog," whichever dog that is, and compared it to the memory of the spectrum "green", and decided it is close enough to "the same" to use the word "is" to describe the relationship. All of these decisions are part of the point of view. Usually point of view is described as:
- First person, i.e. "Seeing that the dog is green, I decide to wash it."
- Second person, i.e. "You say the dog is green? Can I believe that?"
- Third person, i.e. "Joey and Tom agreed that the dog is green."

A neutral point of view involves trying to assign all statements to a third person authority, i.e. "A says B about C." The Wikipedia tries to employ this point of view. But it does not solve all problems. It requires one to appeal often to credentialism and perhaps professionalism, e.g. "Professor A said, on the record, B about the scientific view of C". Without a vast array of agreements that constitute a systemic bias of its own, there is no real way to adhere to this 'neutral' view, and many simply disregard it. The Meta-Wikipedia often debates the problems arising from this strategy.

Natural point of view, as in the idea of natural law, is often the result of choosing a particular science, e.g. particle physics or ecology, or even economics as expressed in biology ("food chains" etc.) and deciding that all of reality can be evaluated from it. Buddhism and Taoism idealize the approach to such a point of view, but admit it is hard or impossible to achieve, and definitely impossible to reliably communicate to a human being. Accordingly, claiming this point of view can be a power grab, e.g. the many claims to have found a Biological Basis of Morality.

Multiple point of view is the compromise, but necessitates what is called (often disdainfully) "politics as usual": the division of participants into factions if only to agree on vocabulary and etiquette and at least some simple view of ethics and morals, if not the formal method for evaluating and quantifying ethicality and morality of human actions long sought in vain by philosophers and theologians.

In addition to these 'spatial' variations, there are also 'temporal' or 'tense' variations in point of view. In English there is a past tense ("saw the dog"), present tense ("see the dog"), future perfect ("will see the dog") and future imperfect ("might see the dog", "could see the dog", "may see the dog"). In French there are separate tenses for backfilling facts incidental to the action (e.g. Bush administration claims to care about finding any weapons of mass destruction in Iraq) and for actually advancing the action (e.g. Bush administration desire to discover any new dangerous technology they think may be more advanced than their own). In Japanese there is sensitivity to the status of the speaker and listener. These differences in linguistic tense imply a point of view, susceptibility or immunity to certain propaganda techniques, e.g. one might expect the Japanese to fall more often for credentialism or at least not challenge views presented with credentials, while one might expect the French not to accept, e.g. the "logic of war" when presented as obvious rationalization for an already-made decision.
Alfred Korzybski, in his General Semantics theory, pointed out that the verb "to be" hides a great many divergences in point of view, and that the terms becomes, remains and equals were far more exact and placed one more exactly in a temporal frame. One was less likely to make easy-to-abuse claims for-all-time, i.e. a dogma.

In literature, the point of view, or viewpoint (see perspective for the more general and visual sense of this term), expresses the related experience of the narrator - not that of the author. Authors expressly cannot, in fiction, insert or inject their own voice, as this challenges the suspension of disbelief. Texts encourage the reader to identify with the narrator, not with the author. Literary narration can occur from the first-person, second-person or third-person point of view. In a novel, first-person commonly appears: I saw ... We did.... In self-help or business writing, the second person (addressing "you") predominates: you must..., thou shalt.... In an encyclopedia or textbook, narrators often work in the third person (that happened..., the king died...). For additional vagueness, imprecision and detachment, some writers employ the passive voice (it is said that the president was compelled to be heard...). The ability to use viewpoint effectively provides one measure of someone's writing ability. The writing mark schemes used for National Curriculum assessments in England reflect this: they encourage the awarding of marks for the use of viewpoint as part of a wider judgement regarding the composition and effect of the text.

usage in wikis

When the abbreviation "POV" is used in wiki rhetoric when talking about an article, it usually means that the article has perceived bias. That is, the author has inserted what is overtly their own view, rather than citing authorities or evidence as a neutral point of view would advise. This, however, is itself a subjective determination, and the serious problems that arise when it is enforced by a small clique have become obvious on Wikipedia and other large public wikis. Among other things, there is no consistent standard of evidence required of popular vs. unpopular views. The other conceptions of point of view above are also sometimes mentioned, along with culture biases such as the UKPOV or USPOV or EPOV, implying a UK-centric, US-centric, or English-speaking-world-centric view, respectively.

For recommendations on avoiding a personal point of view when editing pages, see how to edit pages. For recommendations on avoiding political or other biases, see political dispute, terminology dispute, identity dispute.
Borderline personality disorder is a disturbance of certain brain functions that causes four groups or domains of behavioral disturbances:
- poorly regulated and excessive emotional responses;
- harmful impulsive actions;
- distorted perceptions and impaired reasoning; and
- markedly disturbed relationships.

The symptoms of borderline disorder were first described in the medical literature over 3000 years ago. Because of its high prevalence in the general population (6%) and the severity of its symptoms (e.g., cutting, inappropriate anger outbursts and other emotional dyscontrol, consistently negative perceptions of other people's behaviors, and difficulty in establishing normal, balanced relationships), the disorder has gained increasing visibility over the past four decades. The full spectrum of symptoms of borderline disorder typically first appears in the early teenage years and into the twenties. Although some children with significant behavioral disturbances may develop readily diagnosable borderline disorder as they get older, it is very difficult to make the diagnosis in children. After its onset, episodes of symptoms usually increase in frequency and severity. Remissions and relapses occur, but overall significant improvement with treatment is the most common course of the illness. Borderline disorder appears to be caused by the interaction of biological (genetic) and environmental risk factors, such as poor parental nurturing, and early and sustained emotional, physical or sexual abuse. Physical disorders, such as migraine headaches, and other mental disorders, such as depression, anxiety, panic and substance abuse disorders and ADHD, occur much more often in people with borderline disorder than they do in the general population.
Gender equality is the belief that everyone should have equal opportunities and rights regardless of their gender. This encompasses a wide range of issues such as equal pay, equal access to education and healthcare, and equal representation in politics and the workplace. Gender equality is a fundamental human right and a cornerstone of a just and equitable society. Despite advances in gender equality over the past few decades, many challenges still remain. In many countries, women are still paid less than men for the same work, and they are underrepresented in positions of power and leadership. This disparity is even greater for women from marginalized communities, such as those from ethnic or racial minorities, those with disabilities, or those who identify as LGBTQ+. In order to achieve gender equality, there are several steps that need to be taken. Firstly, there needs to be a shift in societal attitudes and beliefs about gender roles and expectations. This includes challenging harmful stereotypes and promoting positive, equitable representations of women and men in the media. Secondly, policies and laws need to be put in place to address gender inequality in the workplace and in education. This includes measures such as equal pay for equal work, affirmative action programs, and family-friendly policies that support working parents. Thirdly, more needs to be done to address violence against women and girls, which is a major barrier to gender equality. This includes increasing access to services for survivors, strengthening laws and policies to hold perpetrators accountable, and engaging men and boys as allies in the movement for gender equality. Finally, it is important to recognize the intersectionality of different forms of oppression and to work towards creating a society that is inclusive and equitable for all. This means addressing the unique challenges faced by women from marginalized communities and ensuring that their voices are heard and their rights are respected. In conclusion, gender equality is a complex and multifaceted issue that requires the efforts of individuals, communities, and governments to address. While progress has been made, there is still much work to be done to ensure that everyone, regardless of their gender, has the opportunity to live a life free from discrimination and oppression.
- Learning to think: encouraging students to be better and more flexible thinkers.
- Thinking to learn: there is no learning without thinking, so we must invest our minds in what we are learning.
- Thinking about thinking: using metacognition, encouraging students to think about their learning.
- Thinking together: the need to collaborate to solve complex problems.
- Thinking big and long range: some of our students will still be alive in the 22nd century!

Art and Graham explained why we need a mind shift in thinking: we need to shift from knowing the right answers, which were the skills and knowledge that were useful to build an economy, to knowing how to go beyond knowing and what to do when the answers are not immediately apparent. They said "Students must not just be prepared for a life of tests, but also for the tests of life."

Learning to Think

Effective thinking requires us to consider the content of what we are teaching and the concepts. You cannot separate thinking from conceptual knowledge. We also need to think about the type of thinking skills and habits of mind we want students to develop. When considering the content and concepts, they asked us to consider standards, essential questions, prior knowledge, and the understanding we want students to gain (and how we can know that they understand - for example, can they apply, connect, explain, demonstrate, interpret, empathize, and ask more complex questions?)

Students need to learn to think through the content: powerful critical thinking and original creative thinking are the most challenging types of thinking. Students often need direct instruction in thinking skills - we can't just assume they know how to analyze or evaluate or draw conclusions. Art and Graham said that in order for students to know how to use a particular thinking skill, they first have to be able to recognize that skill and be able to use it when they describe the steps they are taking when making decisions and solving problems. For example, we can model the language we want them to use. Instead of asking "what do you think would happen if ..." we can ask them "what do you speculate might happen ...". Instead of asking "what did you think of this story", we can ask students "what conclusion might you draw ....". Instead of asking "how can you explain ..." we can say "how does your hypothesis explain ..."

We need to give students rich cognitive tasks that demand skillful thinking. We also need to give them a framework so that they can independently break down these complex tasks when solving the problems. Lauren B. Resnick said "one's intelligence is the sum of one's habits of mind." Habits of mind involve thinking flexibly and coming up with different options and solutions to problems by applying previous knowledge. Art and Graham talked about 16 habits of mind that we can use when confronted with problems - for example persisting, listening, risk taking, gathering a lot of data before making a decision, and communicating with clarity and precision. These are transdisciplinary and are the skills that adults need as well as students as they focus on long-range, enduring learning. Students should encounter these habits of mind repeatedly as they move from class to class so that they learn to apply these skills spontaneously when solving problems.

Thinking to Learn

Art and Graham quoted from Martin Heidegger: "Learning is an engagement of the mind that transforms the mind." They said we don't "get" ideas, we "make" ideas.
Students become more intelligent if we treat them as if they are already skillful problem solvers and ask them questions that challenge their thinking.

Thinking about Thinking

Metacognition is the awareness and understanding of our own thought processes. To encourage students to think about their thinking, we can model this as teachers by thinking aloud to solve problems - verbalizing what is going on inside their heads is a great strategy for metacognition. They said: "if you don't understand the process that produced the answer, you cannot reproduce the answer." Therefore students should spend more time describing the strategies they have used to solve problems.

Thinking Together

Even though at schools we often value independent thinking, students know they benefit from working together. Giving them opportunities to do this enlarges their conception from me to we.

Thinking Big and Long-Range

Art and Graham said "students will need to solve problems that haven't yet been created, using technologies that haven't yet been invented." There are many eternal questions that need to be asked: what is fair and ethical, why is something good, what is truth, how might we unite and not divide. Students will need to consider how to solve world problems in peaceful ways rather than resorting to violence. They will need to be flexible thinkers and look for different alternatives when the people they are working with are likely to have a range of beliefs, world views and cultures. They need to be conscious of how what we do affects others - both those we can see and those on the other side of the world. They will need to think interdependently with a wide global community and consider how to use and distribute the world's resources.

Finally, Art and Graham ended with a quote from Alan Kay: "The best way to predict the future is to invent it". They talked about the fact that if we want a future that is more collaborative, then we must invent it, because the future is in our classrooms today.

Photo Credit: Thinker by Eileen Delhi
Talking about bullying directly is an important step in understanding how the issue might be affecting kids. There are no “right or wrong” answers to these questions, but it is important to encourage kids to answer them honestly. Assure kids that they are not alone in addressing any problems that arise. Start conversations about bullying with questions like these: - What does “bullying” mean to you? - Describe what kids who bully are like. Why do you think people bully? - Who are the adults you trust most when it comes to things like bullying? - Have you ever felt scared to go to school because you were afraid of bullying? What ways have you tried to change it? - What do you think parents can do to help stop bullying? - Have you or your friends left other kids out on purpose? Do you think that was bullying? Why or why not? - What do you usually do when you see bullying going on? - Do you ever see kids at your school being bullied by other kids? How does it make you feel? - Have you ever tried to help someone who is being bullied? What happened? What would you do if it happens again? CDC Definition of Bullying: Any unwanted aggressive behavior(s) by another youth or group of youths, who are not siblings or current dating partners, that involves an observed or perceived power imbalance and is repeated multiple times or is highly likely to be repeated. Bullying may inflict harm or distress on the targeted youth including physical, psychological, social, or educational harm. Tips for working with your child’s school if he or she is being bullied: - Keep a written record of all bullying incidents that your child reports to you. Record the names of the children involved, where and when the bullying occurred, and what happened. - Immediately ask to meet with your child’s classroom teacher and school administrator explaining your concerns in a friendly, non-confrontational way. - Ask the teacher about his or her observations: – Has he or she noticed or suspected bullying? – How is your child getting along with others in class? – Has he or she noticed that your child is being isolated, excluded from playground or other activities with students? – Ask the teacher and administrator what he or she intends to do to prevent bullying from occurring in the future. - If you are concerned about how your child is coping with the stress of being bullied, ask to speak with your child’s guidance counselor or other school-based mental health professionals. - Set up a follow-up appointment with the teacher and administrator to discuss progress. - Keep notes from your meetings with teachers and administrators. These and other materials are available online at: www.stopbullying.gov Are you considering counseling? Please reach out. Let’s work through this together.
Ear pain is very common, especially in children. It is mainly caused by an ear infection, irritation or allergies, changes in air pressure (such as when you take off or land in a plane), an object in the ear, and even dental problems. The major step in diagnosing an ear infection is to find the site of infection. An ear infection may take place in the outer ear, the middle ear, or sometimes both. An outer ear infection, known as otitis externa, involves the ear canal and external ear, while an infection in the middle ear is known as otitis media. An ear infection can be diagnosed by using an otoscope, a small lighted device used to view the ear canal and eardrum. Though this is not a common practice among general ENT doctors, an otologist will use a specialized microscope to examine the ear with more comfort and in more detail.
What exactly is an IQ test?

Your child's school may have requested that you arrange for an IQ test to be done, possibly along with a fuller educational assessment - most often by an Educational Psychologist. We usually understand the reason for and content of tests to determine reading, maths and spelling ages, but many parents are unsure of the nature and reason for the 'IQ' test.

The letters IQ stand for Intelligence Quotient. The test is the most common method to measure intelligence, and its value lies mostly in its ability to predict a child's chances of school success fairly accurately. Most school subjects require those mental skills and knowledge that are tested by the items comprising the IQ battery of subtests.

IQ tests are based on statistics, and a child's results will reflect her academic abilities as compared to other children of the same age. The average IQ score is around 100, but a 'normal' or 'average' IQ can fall anywhere between the scores of 71 and 129, being further divided into 'below average' or 'above average' if nearing these scores respectively. The two extremes would be IQ scores of less than 70 and more than 130. Children falling into these extremes would account for roughly 2.5% each of the entire population of children. They are described as 'special needs' children because, on the one hand, having an IQ score of under 70 may suggest that academic success will be difficult to achieve, while an IQ score of over 130 places one in the category of 'gifted', meaning that academic potential is extraordinarily high. Children in both these categories need special help with their widely differing educational, social and emotional needs.

Typically, IQ scores can be used to predict future scholastic levels, although this type of prediction can be misleading. Many children cope with high levels of education due to determination and perseverance. However, for the sake of clarity, here are the widely accepted levels:
- IQ over 110: the individual should be capable of a university or other tertiary education
- IQ between 90 and 110: capable of completing secondary education and beyond
- IQ between 80 and 89: capable of completing High School or a technical education
- IQ between 70 and 79: may have difficulties in High School
- IQ below 70: needs special education

IQ tests are conducted by trained psychologists who are qualified to administer them. Most IQ tests consist of two broad parts: a verbal subtest, which tests skills such as vocabulary and general knowledge, and a performance subtest, which tests visual, motor and spatial skills.

IQ tests are believed to be unreliable below the age of six. Between the ages of 6 and 18, test results are fairly reliable but may fluctuate depending on environmental factors such as exposure to languages (especially the language used to test), learning opportunities and family support. The average child may have an IQ score that varies by up to 15 points during the ten years or so of schooling. This is why IQ scores should not be taken too seriously in spite of their importance as a predictor of later scholastic success.

What IQ tests can't measure

IQ tests can't determine success in life. Success depends on a combination of intelligence, social skills, endurance and even a healthy dose of chance or opportunity. An IQ represents only the chances of a child's achieving in the academic sphere. It cannot predict or substitute for attitude, motivation and interest.
In addition, it can’t test for specific talents such as musical or artistic potential, physical prowess, creative thinking, leadership and social skills. In short, it can be used as a helpful tool in understanding more about a child’s academic potential but falls far short in defining the essential nature of the child. We are all far, far more and less than our IQ scores may suggest! If a child has learning difficulties, an IQ score will almost certainly be required. The results cannot, however, fully explain the reasons for the challenges he or she experiences at school
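The percentages quoted above follow from treating IQ scores as a bell-curve (normal) distribution. As a rough illustration only - and assuming the common scaling of mean 100 and standard deviation 15, which the article does not state explicitly - this short Python sketch estimates how many children fall below 70, above 130, and inside the 'average' band.

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """Cumulative probability of a normal distribution, computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

below_70 = normal_cdf(70)            # roughly 0.02, i.e. about 2-3% of children
above_130 = 1.0 - normal_cdf(130)    # the same fraction, by symmetry
average_band = normal_cdf(129) - normal_cdf(71)
print(f"below 70: {below_70:.1%}, above 130: {above_130:.1%}, 71-129: {average_band:.1%}")
```

The roughly 2-3% in each tail is consistent with the article's figure of about 2.5% of children at each extreme.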
Earth Day presents an exceptional opportunity for individuals, corporations and governments to unite and build a prosperous, greener energy economy for our world's future. In 1970, the Earth Day Network was founded by U.S. Senator Gaylord Nelson, based on the principle that every person is entitled to a healthy, sustainable environment, and to inspire worldwide awareness of changes in the earth's climate. Earth Day has been celebrated every year on April 22nd in approximately 190 countries, commemorating its 40th Anniversary in 2010. Although proactive steps toward protecting the environment are being taken, environmental damage is still occurring, with 70 million tons of carbon dioxide being released into the atmosphere every day. The Earth Day Network currently lists over 1,000 Earth Day events and volunteer opportunities in the United States alone. There is no reason not to get involved and participate; remember, small changes can go a long way!

Below, each activity we are doing at the resort is paired with something you can do to help.

1 - Planting Trees: An Earth Day Tradition
The decline in the world's forests - about half of the earth's tropical forests have been destroyed - has a big impact on the environmental footprint, from the release of carbon dioxide to the loss of countless animal habitats.
What you can do to help: Turn your backyard into a wildlife habitat; even the smallest urban garden can sustain the basics for local fauna. Plant a tree in your garden and implement sustainable gardening methods for an even greener space. Join a volunteer group that collects trashed wood from buildings and construction sites for reuse.

2 - Recycling Contest for Kids
Children staying at the resort will be invited to participate in a recycling contest. The goal will be to create an object or toy from recyclable materials or from natural resources found in our nature park, such as dry leaves, stones or sand. The most creative piece will be awarded a prize. All of the children's creations will be showcased in our Kids' Club during the entire week.
What you can do to help: Recycle in a creative way. Use your imagination and turn any used household items into reusable craft materials: a plastic jug can be turned into a watering can; a cereal box covered with scrapbook paper can be turned into a magazine holder; and a milk carton can become a bird feeder. The sky's the limit!

3 - "Save the Planet" Drawing Contest
Children will be invited to draw on a defined topic: "Save the Planet". They will be able to use watercolors, crayons, color pencils, acrylic paint, etc. The drawings will be exhibited at the resort, and the winner will receive a prize.
What you can do to help: Engage your children in fun projects. Get them acquainted with plants and animals by taking them to museums, planetariums, aquariums or zoos. Drawing Earth Day themes is also a great idea and requires as little as crayons and paper; have them create posters that teach others about endangered species.

4 - Community Cleanup
Resort staff will be urged to participate in cleaning surrounding areas outside of the resort. The activity will encompass a brief explanation of the significance of Earth Day and the importance of being environmentally aware. The staff will be divided into groups and assigned specific areas to clean. The cleanup will last one hour and will be photographed and documented with a list of participants and the amount of garbage collected.
What you can do to help: Get involved with the community. Find a volunteering event in your area and make sure to bring your children along. Children are very energetic; allow them to dig in soil in a park or pick up trash in the playground.

Join us and become part of the continuing progress toward a healthier planet by celebrating Earth Day today - and every day. There are numerous ways you can help; join a project that will motivate you to adopt a green lifestyle beyond Earth Day.
If an appropriate measuring device is available, select the one that best suits the task. For example, to determine the value of an angle drawn on paper or a similar material, a protractor is quite suitable; to determine angular directions on terrain, you will need a geodetic theodolite. For measuring angles between adjacent faces of solid objects or assemblies, use a protractor-type gauge - these come in many types, differing in design, method of measurement and accuracy. More exotic instruments for measuring angles in degrees can also be found.

If no suitable tool is available, use the trigonometric relations, known from school, between the lengths of a triangle's sides and its angles. It is enough to be able to measure linear dimensions - for example, with rulers, tape measures, a meter stick, a pedometer, etc. Start like this: from the vertex of the angle, measure a convenient distance along each of its two sides and write down the lengths of these two sides of the triangle; then measure the length of the third side (the distance between the endpoints of the first two).

Choose one of the trigonometric functions to calculate the value of the angle in degrees. For example, you can use the law of cosines: the square of the length of the side lying opposite the measured angle equals the sum of the squares of the other two sides minus twice the product of the lengths of those sides and the cosine of the desired angle: a² = b² + c² - 2·b·c·cos(α). From this theorem, express the cosine: cos(α) = (b² + c² - a²) / (2·b·c). The function that recovers the angle from its cosine is the arc cosine, so the final formula looks like this: α = arccos((b² + c² - a²) / (2·b·c)).

Substitute the side lengths measured in the previous step into the formula and perform the calculation. This can be done with any calculator, including the online calculators offered by various Internet services.
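For readers who would rather let a short script do the arithmetic, here is a minimal Python sketch of the same law-of-cosines calculation; the function name and the example side lengths are illustrative only.

```python
import math

def angle_from_sides(b, c, a):
    """Return the angle (in degrees) at the vertex between sides b and c,
    given the opposite side a, via the law of cosines."""
    cos_alpha = (b * b + c * c - a * a) / (2.0 * b * c)
    cos_alpha = max(-1.0, min(1.0, cos_alpha))   # guard against small measurement/rounding drift
    return math.degrees(math.acos(cos_alpha))

# Example: 3 and 4 units measured along the two sides, 5 units between their endpoints
print(angle_from_sides(3.0, 4.0, 5.0))   # -> 90.0
```

The clamp on the cosine value matters in practice: real tape-measure readings can push the ratio slightly outside the range from -1 to 1, which would otherwise make the arc cosine fail.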
Gregor Mendel's principles of inheritance were based on his experiments with peas in the 1860s. More than 150 years later, scientists have identified the gene for Mendel's pea flower colour. We inherit traits like eye colour, hair colour and tongue-rolling ability from our parents and grandparents. We now know that inherited traits are the result of genes passed from generation to generation.

Mendel and inheritance in peas

Over 150 years ago, Gregor Mendel wanted to find out how traits were inherited. He spent 8 years on an extensive study cross-breeding pea plants and recording the traits of their offspring. He looked at traits such as flower colour, flower position, plant height, seed colour, seed shape, pod colour and pod shape. In 1866, he published his findings. They were ignored by the science community at the time but later became the foundation of modern genetics.

Find out more about Mendel's principles of inheritance

Mendel found that inheritance involved factors that were passed from parents to offspring. He proposed 3 key principles, which are still used now to explain inheritance. In the early 1900s, these principles were refined as scientists discovered that chromosomes contained the genetic material responsible for passing traits from parents to offspring.

Find out more about Mendel's peas in the 21st century

We now know that genes are the unit of inheritance between generations. Genes are made of DNA and are carried in chromosomes. Using techniques to isolate, copy and engineer genes, scientists are gathering more and more information about the genes in different organisms and how they function. In 2010, an international team of researchers, including scientists from Plant & Food Research in New Zealand, identified the gene responsible for Mendel's pea flower colour. The researchers identified the gene in both purple and white flowering pea plants and discovered how it affects petal colour.

Read more about Mendel's work in the article. Celebrating 150 years since Mendel's lectures, this dynamic panel discussion explores the Mendelian picture of genetics and a more up-to-date picture of gene-environment interactions. Listen to the fascinating 'Mendel's legacy'.
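To make the idea of Mendelian ratios concrete, here is a small Python sketch that simulates a cross between two pea plants, with each parent passing one flower-colour allele to every offspring at random. It assumes the purple allele (written A) is dominant over the white allele (a), as in Mendel's peas; the function and variable names are illustrative only.

```python
import random
from collections import Counter

def cross(parent1, parent2, n=10000):
    """Simulate n offspring from two genotypes; each parent contributes one allele at random."""
    offspring = Counter()
    for _ in range(n):
        genotype = "".join(sorted(random.choice(parent1) + random.choice(parent2)))
        offspring[genotype] += 1
    return offspring

# Cross two heterozygous plants (purple allele A dominant over white allele a)
counts = cross("Aa", "Aa")
purple = counts["AA"] + counts["Aa"]   # any plant carrying A shows purple flowers
white = counts["aa"]
print(counts)              # roughly 1 AA : 2 Aa : 1 aa
print(purple / white)      # close to the 3:1 purple-to-white ratio Mendel observed
```

Crossing two heterozygous (Aa) plants gives roughly one AA to two Aa to one aa, so about three purple-flowered offspring for every white-flowered one, which is the 3:1 ratio Mendel reported for this trait.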
Arkansas Child Development and Early Learning Standards: Birth through 60 Months The first five years of life is a period of rapid and intense development. Research has found that during this time, children build critical foundational skills that profoundly influence their later health, ability to learn, social relationships, and overall success. High-quality early childhood environments— whether they be in a child’s home; in the care of a family member, friend, or neighbor; with a family child care provider, or in an early learning program—are critical to supporting child development and learning. A foundational aspect of a high-quality early learning environment is an early childhood professional’s clear understanding of child development and learning. With this knowledge, an early childhood professional knows where children are developmentally, can build on their skills to support new development and learning, and, when necessary, identify areas of potential developmental delay. Child development and early learning standards support awareness and knowledge of how children develop and learn. Standards create a common understanding of child development and learning and provide those who work with young children a guide to the progression that takes place over time across all of a child’s critical domains of development and learning. Given the important role standards play in promoting high-quality care, Arkansas has used the latest research in the early childhood field to create a new set of child development and early learning standards to support the state’s early childhood community.
Have you heard your 5-year-old child saying "I want to be an astronaut"? Have you ever wondered how to introduce children, at such a young age, to the world of space? Well, NASA's STEM Engagement program has a slew of activities that let them learn about and actively be a part of NASA. NASA STEM Engagement strives to involve children from grades K-12 in understanding NASA and its latest developments. For instance, the High Flyers Alphabet Activity Book for grades K-2 has been created to introduce several basic aeronautics terms to children in kindergarten through second grade. The A-Z activity and colouring book contains word puzzles and enables children to learn new words about NASA's Commercial Crew Program, which will launch astronauts to space on American rockets. Here are the links below for printable activity books. Children can be introduced to aeronautic terms at a very early age with these activity sheets. There are other activities and information available online. Check out the link below as per the child's grade:
- High Flyers Alphabet Activity Book - Math and Language Literacy for Grades K-2
- Commercial Crew A to Z Activity and Colouring Booklet for Grades K-4
- Orion Printable A to Z Colouring Book (PDF) for Grades 3-10
- Orion A-Z Printable Book for Grades 5-12
- Reading the ABCs from Space for Grades 4-12
- NASA's Picture Dictionary for Grades 3-8
- Space Communications and Navigation (SCaN) A to Z for Grades 7-12 (printable trading cards)
- Asteroid Alphabet for Grades 7-12
- Learn About Planets Outside Our Solar System for Grades 5-12
The people who settled along the Aleutian archipelago are often referred to as Aleut. This name was given to them by Russian fur traders, but most prefer to call themselves Unangax^, or coastal people. It is believed that the Unangax^ people migrated across the Bering land bridge from Asia between 12,000 and 15,000 years ago. The Unangax^ people lived underneath the earth in semi-subterranean houses called ulax and developed specialized skills to survive in the harsh climate. They hunted marine mammals from skin-covered kayaks, or iqyax. The Unangax^ subsisted for centuries and thrived as a culture until Russian fur traders discovered the Aleutian Islands around 1750. At the time, the Aleut population was estimated at 12,000 to 15,000. The Russian fur traders occupied the islands and exploited their people in the quest to obtain sea otters and fur seals. The Unangax^ population was greatly reduced after Russian occupation due to disease, war and malnutrition. The Unangax^ also suffered tremendous loss during World War II, when the U.S. government relocated most of the Aleutian Island residents to internment camps located in Southeast Alaska. Many Unangax^ died in these camps, further reducing their population. The U.S. government eventually passed a congressional act in 1988 called the Aleut Restitution Act; its purpose was to pay restitution to the victims of WWII internment. Many Unangax^ still rely on the sea for their livelihood. Most live a subsistence lifestyle which includes fishing and hunting. It is believed that today the Unangax^ population is approximately 2,000.

Unangam Tunuu is its own distinct language, indicating that there has been a long period of geographical, cultural and linguistic separation of its speakers from other Alaska Native groups. While it was once spoken throughout the region, today only about 109 individuals speak it fluently. At the time of foreign contact, Unangam Tunuu probably had multiple regional dialects—mutually intelligible, but with certain distinct features of structure and vocabulary. However, by the time the language was recorded in the early 1800s, only three of these dialects remained. The Attuan dialect was spoken by Unangan of the far western end of the archipelago, including those on Attu Island. The Atkan dialect was spoken in the central portion of the region, including today's village of Atka, the last surviving traditional Unangan community in the area. From Umnak Island eastward to the Alaska Peninsula, the eastern Unangax^ dialect was spoken.

The early Russian period was a devastating time for the Unangax^. By 1800, little more than 50 years after first Russian contact, the Unangax^ population had been reduced by some 80 percent. With population loss came far fewer occupied settlements, the consolidation and relocation of many villages, and a new religion, Russian Orthodoxy. By the later eighteenth century, even before the first Russian Orthodox priests had arrived from Russia, Unangax^ were being baptized into the church by Russian laymen. Russian Orthodoxy quickly became the sole religion of the region and remains the majority religion today, playing a key role in village life. The following organizations are dedicated to preserving Aleut history and culture:
The main difference between a drama and theatre is that a drama is the written version of a play, while theatre is the staged rendition of the play text. Dramas and theatre both tell the story of a play, but in a drama the story exists only on paper, whereas in theatre it exists onstage during performance. Although they may cover the same material, there are fundamental differences between a play that exists as plain text and one that is acted. As with novels and works of art, people reading a drama or enjoying a theatrical performance may have different interpretations of the play, and as with films based on books, theatre performances often express interpretations of a play that differ from the written text. Shows are intended to be engaging and expressive and depend on several factors, such as lighting, characters, directors and designers. Everyone involved in the theatre production process has a different view of the play being transformed, and their visions of how the written work translates to action vary. Dramas are typically longer than theatrical performances and contain considerably more detail. Another key difference between drama and theatre is that a drama involves direct interaction between author and readers, while theatre requires people to interpret the original work and convey it to audiences.
Philosophy and the Self

The concept of the self serves a central function in Western philosophy and in other major traditions. There are three differentiated types of views about the self. These include the homo-economicus theory of Aristotelian descent and Kant's idea of the rationally autonomous self. Both Kant and the Aristotelian view theorize the independence of the first person from its social and biological environment (Sihvola, 2008). A proposal that contrasts with these views is a perception that sees the self as developing biologically within a specific environment. According to Organ (1987), the concept of the self took a central position in Western thought with Descartes in the 17th century. Descartes emphasized the independence of the first person, because an individual can comprehend that he exists regardless of the kind of world in which he is living. That is, according to Descartes, the cognitive basis of an individual's thinking is independent of its environmental factors. Hence, factors like social status, upbringing, gender and race are all extraneous to capturing the concept of the self. Splane (2004) asserts that Kant developed the Cartesian viewpoint in the most essential and attractive manner. Kant held the view that every individual is an independent being with the ability to imagine courses of action that surpass any environmental association, including emotional condition, race, customs, social status, upbringing and gender. This notion of the independence of the self would later serve an essential function in the formulation of human rights, because every person is entitled to such rights specifically out of the respect that every human self deserves in as much as it is an independent agent (Splane, 2004). However, many different accounts have rejected the Kantian viewpoint over the past two centuries. They comprise one of the most powerful and appealing theoretical cores attributing a central function to the self. Homo-economicus views every person as an individual agent whose sole motive for action is self-interest. Under this viewpoint, human independence is best expressed in the pursuit of one's own desires (Sihvola, 2008). Theories of the self based on homo-economicus view each agent as a secluded system of preferences rather than one integrated with its surroundings, even though analysis of the origin of desires may prompt consideration of environmental factors. In general, the self is a developmental process that occurs in a specific environmental space. Hence, factors like social status, sex, formal education, gender, upbringing, emotional history and race all play a part in shaping a self. In addition, the self is dynamic, a body that is continuously in the making.
Fullerenes were first discovered by Harry Kroto and his colleagues in 1985, a feat for which they received the Nobel Prize in Chemistry. Recently, they have been found in winds emitted by red giants and in the interstellar medium. Fullerenes are very potent antioxidants and are used in antiviral medications. In particular, fullerenes with anti-HIV properties have been discovered. Apart from that, they are also used as semiconductors and even high-temperature superconductors (if decorated with alkali metal atoms). Their sphere of use is constantly growing, and research is ongoing to find ways of mass production. So far, they are produced in near-gram quantities. One of the more popular methods is the graphite electrode arc process. It is hypothesized that in deep vacuum conditions with low density, fullerenes can be synthesized in other, as yet unknown, ways.

A group of astronomers is currently engaged in studies of fullerenes in the interstellar medium. Among them are KFU alumni Gazinur Galazutdinov (Catholic University of the North, Chile) and Gennady Valyavin (Special Astrophysical Observatory of the Russian Academy of Sciences) and current KFU employee, Associate Professor at the Department of Astronomy and Space Geodesy Vladislav Shimansky. Together, they contributed to a recent paper in Monthly Notices of the Royal Astronomical Society. The nearest interstellar clouds with confirmed fullerene presence are about 1,000 light years away from Earth. Electromagnetic spectra of 19 distant stars were provided by the VLT telescope in Chile, one of the largest in the world. The authors found fullerenes through the traces they leave - absorption lines at certain frequencies. Dr. Shimansky comments, "We know for sure which frequencies have lines of fullerenes, but the main difficulty is to separate the interstellar medium spectrum from the star spectrum. That's why we can obtain fullerene lines by 'subtracting' star spectra from the existing spectrum, and that's a complicated process. Firstly, we discovered some parameters of the stars, and some of these stars are unique objects."

"We compare fullerene-bearing clouds with non-fullerene clouds to find out which environmental parameters capacitate the formation of such molecules. In our research, we found that in some clouds the molecules are in an excited state, and in some they are not. This leads us to believe that the ways of their formation are different."
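The "subtracting" step Dr. Shimansky describes can be pictured with a toy calculation. The sketch below is not the team's actual pipeline; it only illustrates the general idea of dividing an observed spectrum by a model of the stellar spectrum so that the interstellar absorption line is left behind. All wavelengths, line positions and depths are invented purely for illustration.

```python
import numpy as np

# Hypothetical wavelength grid (nm); values are illustrative only.
wavelength = np.linspace(955.0, 970.0, 1500)

def gaussian(x, center, width, depth):
    """Simple Gaussian absorption profile (fractional depth)."""
    return depth * np.exp(-0.5 * ((x - center) / width) ** 2)

# Toy "observed" spectrum: a smooth stellar continuum with a broad stellar
# line, plus a narrow interstellar (fullerene-like) feature and some noise.
rng = np.random.default_rng(0)
stellar_model = 1.0 - gaussian(wavelength, 962.0, 2.0, 0.30)    # star only
interstellar  = 1.0 - gaussian(wavelength, 963.2, 0.25, 0.05)   # cloud only
observed = stellar_model * interstellar + rng.normal(0, 0.002, wavelength.size)

# "Subtracting" the star: divide the observed spectrum by the stellar model,
# leaving (approximately) only the interstellar absorption line.
residual = observed / stellar_model

line_depth = 1.0 - residual.min()
print(f"Recovered interstellar line depth: {line_depth:.3f} (input was 0.050)")
```

In real work the hard part is exactly what the quote says: building a stellar model accurate enough that the division leaves only the interstellar contribution.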
Visual Teaching Strategies for the Classroom

Consider these facts:
FACT: Approximately 65 percent of the population are visual learners.
FACT: The brain processes visual information 60,000 times faster than text.
FACT: 90 percent of information that comes to the brain is visual.
FACT: 40 percent of all nerve fibers connected to the brain are linked to the retina.
FACT: Visual aids in the classroom improve learning by up to 400 percent.

"How smart are you?" is now irrelevant. A more powerful new question is, "How are you smart?" How many of us remember when a calculator and typewriter were considered the height of technology? We are in the midst of a profound paradigm shift. We are moving from a period in which the language of production and manufacturing dominated our way of seeing the world to a time when ideas about information and communication shape our discourse. Some philosophers argue that we are actually in the midst of an even deeper change–one in which the pendulum of worldview is swinging from a more masculine, word-based culture to one that is more feminine and image-based. It is hard to argue with the observation that the generation of students now moving into and through our educational system is by far the most visually stimulated generation that system has ever had to teach. Research shows us that 65% of our students are visual learners. Having grown up with cable television, video games, computer software for education and entertainment, and the Internet, our students are truly visual learners coming of age in an increasingly visual world. Notwithstanding individual differences in intelligence and learning style, this generation of students needs to be taught the way they best learn–with visual stimulation accompanied by active learning strategies. As educators, we need to recognize the nature of our students and prepare them for the world in which they will live and work. We must allow this understanding of the visual nature of our students to influence our teaching techniques and the educational technologies we employ. We need to become Visual Teachers.

Who Are Visual Teachers?

1. The Visual Teacher is an educator who embraces and models full-spectrum visual literacy. The Visual Teacher understands the effects of visual stimulation on brain development and utilizes imagery where appropriate to enhance learning. The Visual Teacher understands the underlying concepts of visual literacy:
* Imagery communicates in an emotional and pre-rational style that can bypass logical thought
* Imagery invokes the part of our brain that assembles symbols and visual elements into stories
The Visual Teacher actively encourages students to decode still images, such as documentary or advertising photography, and moving images, such as commercials, newscasts, dramatic or comic television programs and films. The Visual Teacher explores with students the signs and symbols in art and visual media. The Visual Teacher encourages students to encode, or make, more effective still images through an understanding of passive, neutral, and active imagery.

2. The Visual Teacher utilizes graphic, image-rich technologies in his or her teaching.
The Visual Teacher is proficient in the basics of contemporary image-making:
* Creating an original image
* Transforming an existing image from one format to another (print to digital/digital to print)
* Modifying an image
* Saving, storing, and archiving images
* Transferring images electronically
* Reproducing images
The Visual Teacher understands the advantages and disadvantages of various visual technologies and uses them appropriately.

3. The Visual Teacher avoids passive learning experiences by bridging "seeing" and "doing" using appropriate projects, activities, and technologies. The Visual Teacher creates lesson plans and activities that reflect the Six Methods of Visual Learning, acknowledging that when we create and utilize images we will most likely be working in one (or some combination) of these modes. The Visual Teacher responds to student image-making, evaluating effectiveness based on criteria that correspond to the Six Methods of Visual Learning:
* Did you discover something new (external)?
* Did you record your observation faithfully and accurately?
* Did you manifest an idea, thought, or feeling in visual form?
* Would a viewer "get" the idea, thought, or feeling you have expressed in visual form?
* Has your image changed a viewer's mind or influenced his or her behavior?
* Did you discover something new (internal)?
The Visual Teacher creates assignments and activities that allow students to develop and apply their Visual Information Handling Skills:
* The ability to organize images for effective display
* The ability to establish visual criteria and arrange images in a visual database
* The ability to substitute images for words and establish a visual language
* The ability to combine images with text to share ideas more effectively
* The ability to integrate images with live presentations to communicate more powerfully
* The ability to alter, manipulate, or transform existing images to envision something new
This information is courtesy of the Visual Teaching Alliance, www.visualteachingalliance.com.
A loquat tree (Eriobotrya japonica) produces small fruit, under the right circumstances. Climate, tree age, growing conditions and available pollinators are all factors in whether a loquat tree bears fruit or not. But even without fruit, a loquat tree makes an attractive ornamental. It is hardy in U.S. Department of Agriculture plant hardiness zones 8 through 11.

Fruit Bearing Age
A fruitless loquat tree might simply be too young to produce a fruit crop. There are two ways to propagate a loquat tree: from seed or by grafting. A loquat tree grown from seed can take eight to 10 years to produce a crop of fruit. Grafted trees take considerably less time to bear fruit and can produce a crop at 2 to 3 years old. Most commercial growers propagate loquat, and other fruit trees, by grafting. If you buy a tree from a nursery, it should start producing a year or two after planting.

Climate is an important factor in loquat tree fruit production. If temperatures drop below 20 degrees Fahrenheit when the flowers are in bud, they will die prematurely and fail to produce fruit. Once fruit starts to develop, a drop to 25 F will cause the fruit to die and fall. Loquat trees can tolerate temperatures as low as 12 F, but the tree might not bear fruit in that year.

While some loquat tree varieties are self-fertile, many are not. A self-fertile tree has both male and female flowers and will produce fruit on its own. A tree that is not self-fertile will only produce when planted near enough to another loquat tree for cross-pollination to occur. A tree that is only partially self-fertile will produce without being paired with another tree, but fruit production will be reduced. Bees are the primary pollinators of loquat trees, so if there are no bees, you should expect a small crop.

Depending on the variety, a loquat tree will produce fruit with white flesh or orange to yellow flesh. A partially self-fertile cultivar, "Tanaka" produces fruit with orange skin and yellow-orange flesh. Another partially self-fertile variety grown for its sweet-flavored, cream-colored fruit is "Advance." The "Pale Yellow" cultivar is partially self-fertile, with white flesh that is slightly acidic. A fully self-fertile variety, "Thales" has orange flesh with a hint of apricot taste.
The title of this lesson is a bit non-standard in comparison to the titles of most of the other lessons, so let me explain it a bit. We’ve seen in lesson 16, lesson 17, and lesson 18 a few of the various ways that we can take two sets and form a new, third set using the information of the first two. A natural question to ask, then, is what we can do if we’re handed three sets. Or four sets. In other words, if the enemy hands me three sets, am I able to make a fourth set from the information of the first three? If the enemy hands me four sets, can I make a fifth? And so on. Maybe you already see where the title of this lesson is sort of coming from. If the enemy hands me a thousand sets, can I make a “thousand and one-th” set? The answer turns out to be, perhaps unsurprisingly, yes. What’s a bit more remarkable, however, is that oftentimes in order to build this “thousand and one-th” set, we really only need the machinery that we’ve invented for dealing with two sets. This means that if I want to “combine” the information of an arbitrary number of sets, or “construct” a new set from an arbitrary number of old sets, I really only need to know how to do so for two sets! Before I actually describe what’s involved when it comes to sets and the constructions we have so far, let me first comment on how general of a phenomenon this is in mathematics. Mathematicians spend a lot of time “building new things from old things” (and by “things”, of course, I don’t mean teletubbies or baseballs, but rather mathematical structures). In order to do this, though, they need to define precisely what it is they mean by “building new things”, in a similar way to how we’ve defined unions and intersections. But our definition of unions (and intersections) only applies when we have two sets to take the union (or intersection) of. It turns out (as we’ll see below) that in order to take the union of 3 (or more) sets, we only need to know how to take the union of 2 sets. And it also turns out (as we’ll see throughout many of the coming lessons) that many* mathematical constructions behave this way—i.e., doing something once (taking a union, for example) is all we need to know how to do in order to do something a thousand (or more) times. Since this ability to “extend” our constructions to more objects is in no way guaranteed by the definitions of the constructions themselves, we do indeed need to check (or prove) that our constructions extend in this way. Let us therefore get on with doing this checking with what we’ve defined so far. Suppose we have three sets A, B, and C, and suppose we want to do something to them analogous to “taking the union of all three”. Notice that I’m forced to use scare quotes because we have yet to discuss what “taking the union of three sets” actually means. We’ve only defined what it means to take the union of two sets, so one might be tempted to make a new definition for how to take the union of three sets. But if we did this, we’d quickly run into trouble, because we surely want to be able to take the union of 4 sets, and 5 sets, and 6 sets, and we surely don’t want to have to come up with an infinite number of definitions to account for all of these constructions. The old and jaded professor might be a little bummed out by these realizations and might consider leaving the academic world and spending his years traveling and hiking instead of figuring out how to get around an infinite number of new definitions. But the energetic student sees a way out!
Sure, she doesn’t know how to union three sets at once, but she does know how to take the union of two sets to form one set. Suppose she does this to A and B, thus forming the single set A ∪ B. She now has two sets left, A ∪ B and C. Well, she knows how to union two sets! So she unions these two sets to form (A ∪ B) ∪ C (note the parentheses that I’m forced to use to remind ourselves that she union-ed A and B first to form the single set A ∪ B). The student looks at the professor and says, “see?! I told you I could do it!”. But although the professor might be jaded, he’s still sharp, and he points out that the student could have also just as easily constructed (A ∪ C) ∪ B and/or (B ∪ C) ∪ A. I.e., she could have first union-ed A and C, and then union-ed this new set with B, or she could have first union-ed B and C, and then union-ed this new set with A. All of these are equally good candidates for “the union of three sets A, B, and C”, and so which one is she to use? Alas, we’ve reached another impasse. The student goes home bummed out that her Fields Medal might be farther away than she thought… But while at home, drowning her disappointment in hot chocolate and Reddit memes, she has an idea. What if all of those ways of “union-ing” three sets together were somehow the same? What if it didn’t matter which way of union-ing we chose, because they’d all give us the same final set? After a couple more links in /r/funny, the student sits back and thinks of the following. What are the elements in (A ∪ B) ∪ C? By the definition of a union, these are the elements either in C, or in “A union B”. But what are the elements in “A union B”? These are precisely the elements that are either in A or in B. Thus, the elements in (A ∪ B) ∪ C are precisely those elements that are either in C or “either in A or B”. Similarly, the elements in (A ∪ C) ∪ B are precisely those elements that are either in B or “either in A or C”. But clearly these are all the same elements, because they’re all “either in A or B or C”! Checking the last case, (B ∪ C) ∪ A, convinces the student that indeed, these three sets are all exactly equal. It doesn’t matter in which order she unions these three sets, because the final set will always be the same! Unfortunately (or fortunately) this accomplishment is not warranting of a Fields Medal, but the student does have a good time explaining it to her professor (who clearly needs to brush up on his set theory). The student isn’t done, however. She continues to plow forward, and asks the same questions about intersections: is it the case that (A ∩ B) ∩ C = (A ∩ C) ∩ B = (B ∩ C) ∩ A? Again, the answer is yes. Let us see why. (A ∩ B) ∩ C is the set of elements that are in C and in “A intersect B”, and “A intersect B” is the set of elements that are in A and B. Thus, (A ∩ B) ∩ C is the set of elements that are in A and B and C! The exact same logic (of breaking these sets down “one intersection at a time”) applied to the other two sets shows that all of these sets are indeed the set of elements in A and B and C, and so they’re all equal! This means that we can now talk about the union (or intersection) of an arbitrary number of sets without having to introduce new definitions for all of them. All we do is “union them up one at a time”, and we know that this is a sufficient prescription because the order in which we “union them up” is irrelevant—all different possibilities are equivalent!
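If you like to see such claims checked mechanically, here is a tiny sketch in Python (the particular sets are arbitrary examples) confirming that every way of pairing up the unions, or the intersections, of three sets gives the same result:

```python
# A quick check of the order-independence argued above, using Python's
# built-in sets. The particular elements are arbitrary examples.
A = {1, 2, 3}
B = {3, 4, 5}
C = {5, 6, 1}

# All three ways of "union-ing two at a time" give the same set ...
assert (A | B) | C == (A | C) | B == (B | C) | A == {1, 2, 3, 4, 5, 6}

# ... and likewise for intersections (here they all come out empty).
assert (A & B) & C == (A & C) & B == (B & C) & A

print("all orders agree")
```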
Thus, in order to take the union of a thousand sets, call them A_1, A_2, …, A_1000, we simply start union-ing them up one at a time, in any order, and we know that whatever order we pick the result will be the same (because at each step we know that the order is irrelevant because at each step we’re simply union-ing a third set onto two sets that were union-ed, which we know is order-independent). Therefore we can unambiguously write the expression A_1 ∪ A_2 ∪ … ∪ A_1000 (which stands for the single set formed as the union of all thousand of these sets), without having to worry about where we put the parentheses because we know that the order that we “union them up” doesn’t matter. Let me end this lesson with two quick comments. First, everything that we established in the above paragraph applies for a “thousand-fold” intersection as well, simply because we’ve already established the “three-fold” intersection property and because we’ve shown that all we need to show is the three-fold property (the rest follows from the three-fold case). Secondly, this is a good time for me to introduce some more notation. This is a great bit of notation because it is a perfect example of something that might have been scary had we not known what’s really going on, but now that we do know what’s going on it shouldn’t be scary at all. Suppose we’re given a thousand sets, labeled as those above: A_1, A_2, …, A_1000. It’d be nice to have a notation for the “thousand-fold union” of these sets without always having to write the tedious expression A_1 ∪ A_2 ∪ … ∪ A_1000. This is not ideal notation for a couple of reasons. First, it’s not ideal because it’s ambiguous: where do I put the “…”? After the third set? After the fifth set? It clearly doesn’t matter as long as we know what we’re talking about, but it’s still annoying (ambiguity is the mathematician’s kryptonite). It’s also not ideal simply because it’s long. So here’s a better way. Let’s let this set be denoted by ⋃_{i=1}^{1000} A_i. The union sign is still in there, and the notation is simply telling us to union all of the sets labeled by A_i, for some “i”, where “i” runs from 1 to 1,000. That’s not so bad at all. We’ve gotten rid of any ambiguities, and now all we need to write is ⋃_{i=1}^{1000} A_i instead of A_1 ∪ A_2 ∪ … ∪ A_1000 for the same abstract idea! Similarly, we could write ⋂_{i=1}^{1000} A_i for the “thousand-fold” intersection of these sets. It should also be mentioned that in this new notation, the “i” is what’s called a “dummy variable” because it doesn’t matter at all which letter or symbol we use in its place. The “i” plays absolutely no role here. I could just as easily (and truthfully) have written ⋃_{j=1}^{1000} A_j or ⋃_{k=1}^{1000} A_k or ⋃_{n=1}^{1000} A_n for the exact same set. Note also that we could let the numbers “1” and “1,000” be anything they want, but that it just so happens that in our case our sets were labeled from 1 to 1,000, and so these were the numbers we had to go with (if the sets were labeled differently, we’d have different numbers in our expressions). That’ll do it for this lesson, and I’ll pass on putting in any exercises because a) I can’t think of any good ones for these ideas and b) this lesson is already pretty long. In the next lesson we’ll be able to use a lot of what we’ve learned to look at ideas from basic high school or middle school math in a whole new light. This new light will be a much more abstract one, and I think it is precisely this abstraction that helps to clear up what we’ve “really” been doing all along in middle and high school math!
I’ll try to make these parallels to more familiar math as much as possible, so as to point out that the kind of math that we’re learning now has always been and will always be “behind the scenes” of what is more “conventional”. The next lesson is just the start! *But not all. We don’t discuss Cartesian products or disjoint unions here, and that’s because these constructions are a bit more subtle (primarily because we’ve had to change the elements themselves to discuss them). We will, however, address these issues in due time.
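A quick postscript for readers who like to see these ideas in code: the “union them up one at a time” prescription and the big-union notation above translate directly into a fold over a list of sets. The sets below are made up (A_i = {i, i+1, i+2}); only the folding procedure matters.

```python
from functools import reduce

# A thousand example sets, playing the role of A_1 ... A_1000.
sets = [set(range(i, i + 3)) for i in range(1, 1001)]   # A_i = {i, i+1, i+2}

# "Union them up one at a time": because the order doesn't matter, a simple
# left-to-right fold realises the big-union notation.
big_union = reduce(lambda acc, s: acc | s, sets, set())

# The analogous fold for the thousand-fold intersection. Note the starting
# value: the first set rather than the empty set.
big_intersection = reduce(lambda acc, s: acc & s, sets[1:], sets[0])

print(len(big_union))      # 1002 elements: {1, 2, ..., 1002}
print(big_intersection)    # set() -- these particular A_i share no element
```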
Diseases and Conditions The tricuspid heart valve separates the right atrium from the right ventricle and allows deoxygenated blood to flow between them. Once blood has cycled through the body, it is returned to the right atrium, which is located on the right side of the heart. When the right atrium is full, the tricuspid heart valve opens to allow the deoxygenated blood to flow into the right ventricle. The right ventricle then fills with the deoxygenated blood. As pressure changes in the right atrium and right ventricle, the tricuspid heart valve closes. The right ventricle then contracts and pumps the deoxygenated blood through the pulmonary valve and into the lungs, where the blood is oxygenated.
The History of Telecommunications

Telecommunication takes place when information is exchanged between two entities. This exchange of information relies on some technology for a smooth transmission of information. Communication technology uses several channels to transmit information in the form of electric signals. Signals are transmitted in the form of electromagnetic waves via a physical medium, such as signal cables. Many technologies are involved in the present trend of telecommunication, and these have introduced wireless media for communication. Modern systems use telephones, networks, microwave transmission, teleprinters, communication satellites and fiber optics.

In the early period, communication took place with visual signals over a distance, for example smoke signals, signal flags, beacons, heliographs and semaphore telegraphs. Even pre-modern long-distance communication took place by coded drumbeats, loud whistles and lung-blown horns.

Today's telecommunication technology involves three primary components. The transmitter plays an important role by taking the information as input and converting it into a signal as output. A transmission medium, also known as the physical channel, is used to carry the signal. A receiver is the third component; it takes the signal as input and converts it back into usable information. For example, radio broadcasting stations have a large power amplifier as a transmitter. Sometimes, telecommunication systems perform as a two-way system, i.e. they operate in duplex form, where a device acts as both transmitter and receiver (a transceiver). The cellular telephone is the best example of a transceiver.

Telecommunication over a fixed line is known as point-to-point communication, because it takes place between one transmitter and one receiver. Telecommunication by radio broadcast is called broadcast communication, between one powerful transmitter and several low-power radio receivers. Telecommunication with multiple transmitters and multiple receivers sharing a single channel is known as a multiplex system, which in turn reduces cost by sharing one single channel among different users in a single network.
A telecommunication network is a set of links, intermediate nodes and terminal nodes that are connected to enable telecommunication between terminals. Nodes are connected among themselves with the help of transmission links. To pass a signal among nodes via transmission links, networks use circuit switching, packet switching or message switching, so that the signal can travel through the correct link to the correct destination. Various network infrastructures are available, such as LAN, WAN and MAN. When a WAN is connected to give internet services, it is called the WWW. Each of the terminals in the network uses a unique address so that a message or signal can be routed through the correct path and reach the correct destination. Examples of telecommunication networks are the Internet, computer networks, the telephone network, and the aeronautical and global Telex networks.

A telecommunication service provides the facility of communication by means of emission, transmission or reception of signs, signals, images, writing, sounds or other information via wire, radio or electromagnetic transmission. The user of a telecommunication service is mainly responsible for the information content of the message. The service provider is accountable for receiving, transmitting and delivering the messages to the intended receiver. A sender may produce a voice message, an analog signal that cannot be carried directly by the communication channel; in this scenario a modulator converts the analog signal into a binary signal, and the receiver has a corresponding device to demodulate the signal back into analog (human-understandable) form (Beasley, 2009). Telephony became the most popular communication medium; here the PSTN can be defined as a collection of voice-oriented interconnected public networks. It is also known as POTS (Plain Old Telephone Service).

Figure 3: Telecommunication services.

The history of telecommunication began with drum and smoke signals in Africa, in parts of Asia and in the Americas. The first fixed semaphore systems appeared in Europe by 1790 and remained in use until electrical telecommunication systems started appearing. Greek hydraulic systems were in use in the 4th century BC; the hydraulic semaphore worked with vessels filled with water and visual signs, and these were utilized as optical telegraphs. During the Middle Ages, chains of beacons on hilltops were used as a means of relaying signals. The drawback of the beacon was that there was no choice except to send one bit of information. Using a pair of clocks, the French engineer Claude Chappe invented a method of communication in which the clocks' hands were used to point to different symbols, though it was not feasible over long distances. Chappe later revised his model to use two sets of jointed wooden beams, and he then contributed to building his first telegraph line between Paris and Lille. In 1794, Abraham Edelcrantz, a Swedish engineer, built an improved version of Chappe's system that used shutters rather than wooden beams (Bello, 2012). After all these developments, experiments took place on communication using electricity, though the initial experiments, starting in 1726, were unsuccessful.
Based on a less robust design of electrical telegraphy by the Spanish scientist and polymath Francisco Salva Campillo, experiments on electrical telegraphy were carried out by the German anatomist, physician and inventor Samuel Thomas von Sommerring in 1809. A practical electrical telegraph was then proposed by William Cooke and Charles Wheatstone in 1837, with a six-wire, five-needle system. Subsequently, Samuel Morse introduced a more efficient version of the electric telegraph, also in 1837 (Chang and Gavish, 1993).

The invention of the electric telephone took place in the 1870s, with harmonic telegraphs as the basis of the experiments. The first commercial telephone services were introduced in 1878 and 1879. Later advances in technology introduced voice communication by radio in 1927. However, no transatlantic telephone cable connection was established until the inauguration of TAT-1, with 36 telephone circuits, in 1956 (Chen, 2011). Starting with a demonstration of wireless telegraphy by James Lindsay in 1832, several years of studies and experiments took place before a successful commercial telegraph system based on radio transmission was introduced in 1894. In 1900, the human voice was successfully transmitted via wireless communication (Corena and Posada, 2013). With the invention of video telephony it became possible to add live video to voice telecommunication; the overall concept of video telephony first became popular in the 1870s.

As a communication medium, a U.S. satellite first came into consideration in 1958 with Project SCORE, the world's first communications satellite, which used a tape recorder to store and forward voice messages. In 1960 the Echo satellite was launched by NASA. Telstar, the first direct and active communications satellite, was launched in 1962. Satellites these days serve many applications, such as television, GPS, telephone and internet use (De Marco, 2012).

The configuration of a mainframe with remote dumb terminals and centralized computing remained popular in the 1950s, and it stayed so until researchers introduced the concept of packet switching in the 1960s. In packet switching, data are sent in chunks to several computers. ARPANET's development focused on RFCs (Requests for Comments). In this process ARPANET merged with several other networks to form the Internet (Devillier, 1972).

With the collective use of these technologies, it became possible to introduce the public switched telephone network, which aggregated the circuit-switched telephone networks operated by local, regional and national telecommunication operators. Several components are used by the PSTN, such as fiber optic cables, telephone lines, cellular networks, communication satellites, undersea telephone cables and microwave transmission. All these components are interconnected via switching centers. The technical operation of the PSTN adheres to predefined standards imposed by the ITU-T; key standards are E.164 and E.163. These standards provide a single global address space for telephone numbers (Digital Forensics Processing and Procedures, 2014).
The PSTN is the collection of interconnected, voice-oriented public telephone networks. DSL, or Digital Subscriber Line, is a high-bandwidth technology for small businesses and homes that uses a simple copper telephone line. There are several types of DSL, such as HDSL, RADSL and ADSL (Dudin et al., 2013). DSL is a collection of technologies used to provide internet services by transmitting digital data over telephone lines (Ye et al., 2014). Over the same telephone line, a DSL service sends data simultaneously with voice because it uses separate high-frequency bands for data. From the customer's point of view, a DSL connection is used for the simultaneous transmission of DSL services and voice. The bit rate given to consumers generally ranges from 256 kbit/s to 100 Mbit/s downstream (Domingo, 2011).

ADSL is a type of DSL technology; ADSL makes data transfer over copper telephone lines much faster than a conventional voiceband modem. It achieves this by using the frequencies in the telephone line that are unused by voice calls. A filter associated with the ADSL line allows use of both the telephony and ADSL services at the same time. ADSL service can only be delivered over short distances from the telephone exchange, typically less than 4 km (Dulek, 2014).

Figure 6: Asymmetric digital subscriber line.

ISDN, or the Integrated Services Digital Network, provides high-speed internet service. During the 1990s, ISDN sparked speedy internet development among service providers. Like its ancestor, dial-up internet service, ISDN makes use of phone lines. ISDN can be considered the evolutionary step between dial-up and DSL. An ISDN service basically operates using a dedicated line or circuit-switched network. Hence the ISDN line became popular for high-speed internet access at home, and it is also a point on which broadband service companies compete (Dyrud, 2011).

VoIP, or Voice over IP, is a group of technologies for delivering multimedia sessions and voice communication over Internet Protocol networks. Other terms associated with VoIP are Internet telephony, IP telephony, broadband phone service and broadband telephony. A major development came in 2004, when VoIP services started utilizing existing broadband internet access. Subscribers then started placing and receiving telephone calls in the same way as they had on the PSTN (Einspruch, 2013).

ATM, or Asynchronous Transfer Mode, is a switching technology used on dedicated connections to organize digital data into 53-byte cells and transmit them over a physical channel using digital signal technology (Friess, 2010).

Four key reasons why modern telecommunications services are based on digital technology: In this modern era, efficient telecommunication services are based on digital technology because of some clear advantages. Digital signals are less susceptible to physical factors and noise, which can otherwise cause interference and data loss; analog signals, by contrast, are badly affected by external interference in the transmission medium and by signal attenuation or noise.
Digital signals allow a uniform technique for encoding the data, video and voice that flow over the transmission channel. Analog signals cannot be encoded in the same format; they are transmitted in the form in which they were generated. Digital communication can achieve higher capacity than analog communication, and encoding an analog signal into digital form allows the amount of information to be reduced. Better security can also be achieved using encryption techniques that are impossible in the context of analog communication; several encryption techniques are available to encrypt the transmitted data.

Technology of multiplexing: Multiplexing is the concept of sending signals or digital data streams from multiple sources aggregated over a single line, sharing the capacity of the line, and later splitting the signals according to the intended recipient. There are two kinds of multiplexing: digital multiplexing and analogue multiplexing. A multiplexer aggregates the signals and sends them via a single channel, whereas a demultiplexer splits the stream of information or signals for the intended recipients. The aim of this technique is to share expensive resources; for example, different calls are carried by the telephone network over a single wire (IEEE Transactions on Networking publication information, 2004). Multiplexing techniques are divided into two categories, using digital technology and analog technology. Multiplexing techniques using analog technology are FDM (frequency-division multiplexing) and WDM (wavelength-division multiplexing). The multiplexing technique using digital technology is TDM (time-division multiplexing) (Laborie, 2011).

In analogue multiplexing, a medium is shared by varying the wavelength and frequency of the various signals. FDM is achieved by combining several signals, sending them over one shared channel, and splitting them into different frequency ranges (Wuyts et al., 2010). FDM's most common application is television and radio broadcasting from satellite stations, mobile or terrestrial locations, using cable television or the natural earth atmosphere. Only a single cable reaches the user's area, but the service provider delivers multiple signals simultaneously without interference; to receive the appropriate signal, users must tune to a particular channel (frequency) (Lalanne and Maag, 2013).

Wavelength-division multiplexing: WDM is used in fiber optic communications. This technology multiplexes a number of optical carrier signals over a single optical fiber using several wavelengths. The technique allows bidirectional communication and a multiplication of capacity using one strand of fiber. A WDM system uses a multiplexer at the transmitter and a demultiplexer at the receiver to split the signals for the intended destination (Li and Wang, 2015).

Time-division multiplexing is a way of transmitting and receiving signals independently over a single common channel by means of synchronized switches at each end of the transmission line, so that each signal occupies the channel for only a fraction of time, in an alternating pattern.
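The time-slot idea behind TDM can be illustrated with a short sketch: three toy digital streams take turns on one shared channel in fixed, repeating slots, and the receiver splits them apart again. The bit patterns below are arbitrary, and real systems add framing and synchronization that are omitted here.

```python
# A toy illustration of time-division multiplexing: three digital streams
# take turns on one shared channel in fixed, repeating time slots.
streams = [
    [1, 0, 1, 1],   # sub-channel 0
    [0, 0, 1, 0],   # sub-channel 1
    [1, 1, 0, 0],   # sub-channel 2
]

def tdm_multiplex(sources):
    """Interleave one sample per source per frame onto a single channel."""
    frames = zip(*sources)                    # one frame = one slot per source
    return [bit for frame in frames for bit in frame]

def tdm_demultiplex(channel, n_sources):
    """Split the shared channel back into its sub-channels at the receiver."""
    return [channel[i::n_sources] for i in range(n_sources)]

shared_channel = tdm_multiplex(streams)
print(shared_channel)                         # [1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
assert tdm_demultiplex(shared_channel, len(streams)) == streams
```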
Time-division multiplexing is used mainly for digital signals, but a similar scenario arises when analog signals are transmitted and received sharing a single channel. The contrast with analog technology lies in the fact that digital signals physically take turns flowing over one communication channel, occupying equal and fixed time slots for the sub-channels. Sub-channels are mainly created before synchronization is applied (Liu, Jiang and Zhang, 2014).

Networks that support modern telecommunication technology (digital carrier systems, backbone transmission networks, optical networks): In a telecommunication system, a number of individual channels are multiplexed onto a multichannel carrier system in order to get effective transmission. Transmissions take place between nodes of a network, and circuits are multiplexed or demultiplexed accordingly. A backbone transmission network, or backbone network, is the part of a computer network that allows several network infrastructures to interconnect. This infrastructure provides paths to exchange information among several sub-networks or LANs. A large corporation that has many locations with different networks can have a backbone transmission network; for example, an organization may have several server clusters, accessed by different departments, that are stored at different locations. Optical networks are telecommunication networks with a high capacity. Optical networks are based on optical components and technologies that provide grooming, restoration and routing at the wavelength level, as well as wavelength-based services.

b) Discuss and evaluate these networks as they apply to modern telecommunications: Digital carrier systems provide digital signaling for telecommunication services. Telephone companies use them to multiplex long-distance calls over high-speed trunks. In telecommunication, the original signal can be in analog form, which then needs to be converted into a binary format suitable for the transmission channel, or the original signal can be in binary form from the beginning. To transmit the binary signals over a single channel, the capacity of the channel is divided into several sub-channels (Wolfe, 2015). Hence a fixed and equal fraction of time can be allocated, in a repeating pattern, for each signal. This technique is known as time-division multiplexing, as discussed earlier. Binary signals consist of pulses that can represent only 0 or 1.

The backbone transmission network ties all the departmental servers together. In terms of architecture there are four classifications of backbone transmission network: distributed, collapsed, parallel and serial (Wan, Diouris and Andrieux, 2010). A distributed backbone transmission network consists of a number of connectivity devices connected to a number of central connectivity devices such as switches, routers or hubs. In a collapsed backbone architecture, each hub provides a link back to the central location, where it connects to a single box that can be a switch or router, known as a backbone-in-a-box; the architecture of a collapsed backbone is a rooted tree or star.
A parallel backbone architecture is a variation of the collapsed backbone network; it also uses a central node as the connection point. The difference is that the parallel backbone allows duplicate connections when there is more than one switch or router; with more than one cable between devices, they are connected to form an enterprise-wide network. A serial backbone transmission network consists of two or more internetworking devices connected to each other in a single chain; hubs, routers or switches are connected in this fashion in order to extend the network (Moussavi, 2012).

One of the most promising technologies for optical networks is the SOA (semiconductor optical amplifier). With the integration of amplifier functionality into semiconductor material, the basic component can perform several applications. The integrated functionality of the SOA provides routing and internal switching functions, required for a feature-rich network.

c) Key wireless technologies in order to meet the customer requirement: The term wireless comes into the picture when communication takes place over a distance without the requirement of cables, wires or any other electrical conductor. As an important medium, wireless communication allows the flow of data from source to destination. After the connection is set up, information is transmitted over the air without the use of cables. Wireless communication uses electromagnetic waves such as satellite links, radio frequencies and infrared to transfer the data. With the growing trend of wireless communication technology, the widespread use of wireless devices allows users to communicate irrespective of their location. Devices used in these telecommunication technologies include cordless telephones, mobile phones, satellite television and the wireless parts of computers (Murthy, 2010).

Satellite communication is one of the most popular wireless technologies. Using this technology, anyone in the world can communicate with others remotely. The devices used for this communication communicate directly with orbiting satellites via radio signals. Satellite modems and portable satellite phones have powerful broadcasting abilities, as they have a higher range than cellular devices (Optical Switching and Networking, 2013).

The Global Positioning System, or GPS, is a satellite navigation system that provides information on location and time anywhere on or near the earth where there is a line of sight to four or more GPS satellites. The concept of GPS is based on time. Satellites carry stable clocks that are synchronized with ground clocks and with each other, and satellite locations are monitored precisely. Despite having clocks, GPS receivers are not synchronized with true time and are unstable to some extent. GPS satellites continuously transmit their current position and time. A GPS receiver monitors several satellites and solves equations to determine the exact location of the receiver and its deviation from actual time. The structure of GPS is composed of a space segment, a user segment and a control segment (Virmani, 2014).
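The timing principle described above can be pictured with a deliberately simplified, one-satellite calculation. The numbers below are invented; a real receiver solves a system of equations over at least four satellites to recover its three position coordinates plus its own clock offset.

```python
# The timing idea behind GPS, reduced to a single satellite and one dimension:
# the receiver infers its distance from the satellite from the signal's
# travel time. All numbers are purely illustrative.
SPEED_OF_LIGHT = 299_792_458.0          # metres per second

transmit_time = 0.000000                # time stamped by the satellite (s)
receive_time  = 0.067400                # time measured by the receiver (s)

travel_time = receive_time - transmit_time
pseudorange = SPEED_OF_LIGHT * travel_time
print(f"Apparent distance to satellite: {pseudorange / 1000:.0f} km")

# The receiver's clock is offset from satellite time; even a microsecond of
# unmodelled bias translates into hundreds of metres of range error, which
# is why the receiver clock offset must be solved for alongside position.
clock_bias = 1e-6                       # 1 microsecond of receiver clock error
print(f"Range error from that bias: {SPEED_OF_LIGHT * clock_bias:.0f} m")
```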
The GPS satellites broadcast signals from space, and each GPS receiver calculates the time and its own three-dimensional location: altitude, longitude and latitude. In today's world GPS has many uses; it is a great way to find and keep track of the things we want. Government and private organizations rely on the Global Positioning System in their decision-making in order to gather data and take effective decisions. A data collection system allows a decision-maker to have appropriate positional data about a product along with descriptive information. It has become possible to analyze many environmental problems based on the position data collected with the help of GPS. Using GIS software, the data collected from GPS can be imported and analyzed without overcomplicating a particular situation.

In telecommunication, mobile phones are connected to the wider telephone network via base stations. In computing, transceivers act as routers to connect different LANs and/or the Internet (Prasad, 2011). There are several wireless broadband technologies that offer fast web surfing without a wired data connection through DSL or cable. Here the example of WiMAX comes in. Despite being able to provide high data rates of up to 30 megabits per second, WiMAX providers generally deliver average data rates from 0 to 6 megabits per second, and the actual data rate may vary with the distance from the transmitter. WiMAX is similar to a version of wireless 4G (Primak and Kontorovich, 2012).

Wi-Fi is a well-known form of low-power wireless communication. Nowadays, electronic devices make heavy use of Wi-Fi. To set up Wi-Fi, a wireless router acts as the communication hub. These kinds of networks are limited in range due to the low power used for data transmission (Veltsos, 2012). Users in close proximity to the router or a signal repeater are able to access the internet. As a common home networking application, Wi-Fi allows portability without the requirement of cables. For the prevention of unauthorized access, two types of wireless security are imposed in order to secure the wireless communication: WEP (Wired Equivalent Privacy) and WPA (Wi-Fi Protected Access). WEP had been an IEEE 802.11 standard since 1999, and it was superseded by WPA in 2003. The current security standard is WPA2, which encrypts the network using a 256-bit key. WPA2 is an improved version of WPA that allows longer key lengths, improving security over WEP (Rashed, 2013).

Bluetooth technology enables electronic devices to be connected to each other wirelessly. This technology helps to share data; a wireless keyboard, microphone or mouse is connected to a laptop in order to transmit information from one device as input to another device that receives the data as output (Tiarawut, 2013). Bluetooth makes use of radio waves for communication between devices. According to the official Bluetooth website, a maximum range of about 50 feet applies because of its low-power signal. The pairing process between two devices makes it easy to identify the connected and paired devices.
Hence it becomes possible to prevent interference from other, non-paired devices (Reinsch and Gardner, 2013). Infrared (IR) wireless is the well-known use of wireless technology in devices that convey data through IR radiation. It is a wireless mobile technology used for communication among devices over short ranges. IR communications have major limitations: they require line of sight, have a short transmission range and are not able to go through walls. Transceivers used for IR are cheap and provide short-range communication solutions. Devices that are enabled for IR are known as IrDA devices; they maintain the standards imposed by the Infrared Data Association, or IrDA. Infrared light-emitting diodes are used to produce the IR signal, which is passed through a lens and focused into a beam. For data encoding, the beam source is switched on and off. IrDA receivers and transmitters are categorized as non-directed and directed (Reinsch and Gardner, 2013). In today's world, mobile telephony has changed the way communication happens, from the lives of ordinary people to business organizations. Mobile communication has had a great impact through improved social inclusion, communication, productivity and economic activity in the sectors of health, agriculture, finance and education. With advancement in the technology, 3G and 4G services are accessed via smartphones, dongles and tablets. All these advancements have had a great impact upon organizational activity between business and consumer. Alongside other advancements in telecommunication technology there is a new emerging technology that takes advantage of combining two fields for further improvement: mobile and speech recognition. Combining these two technologies enables users to talk to their mobile devices, and the device is able to understand the spoken commands and then accomplish the task (Reis Pinheiro, 2011). With the increased use of mobiles, the convergence of mobile and speech recognition was expected. The idea of speech recognition comes into the picture where mobile devices are not suitable due to their small screen size. The small-screen limitation of mobile phones can raise problems when a user attempts to enter data into their mobile device. As an alternative, mobile devices can have voice-enabled applications installed by the device manufacturer, such as the Google Talk application found on smartphones today (Reversing the Trend, 2010). Business organizations need to be aware of the two dimensions of computer communication software: the application software that will be in use at the terminals, and the interconnection between the several terminals (Salmon, 2012). Digital advertisement and market venturing: In this modern era, no business organization can compete without the use of web services and the internet. The web provides the means to communicate with customers and suppliers with the incorporation of e-commerce. Time and cost reduction: In a business organization the focus is mainly on the TCP/IP protocol suite, which is now in use universally. The TCP/IP protocol suite is used across multiple vendors' equipment and is the basis of operating over the Internet.
With the growing application of the internet, distributed internet applications are in use in order to access business data remotely from a virtual platform. A newer approach has also been introduced with the adoption of the intranet. An intranet provides the same sort of applications and interfaces as found on the internet (Smaini, 2012). There have also been advances in technologies for e-marketing, such as text messages to market a product, or the use of mining tools on websites to market products similar to one already purchased. These scenarios reduce the overall cost and time needed to grow the business. Growth of the communication industry: The development of communication technologies has enabled many new business strategies across the globe. Many of these business organizations provide video and voice services. There has been a rise of Internet service providers, who give services to internet users such as broadband connections, VPN services and so on. Rise of BPOs or Business Process Outsourcing: Using today's communication technology, it becomes possible for organizations to be interconnected, and they also use distributed networks to access the organization's data virtually. Organizations can outsource skills and required expertise from other organizations or countries.
Electromagnetic shielding is the practice of reducing the electromagnetic field in a space by blocking the field with barriers made of conductive or magnetic materials. Shielding is typically applied to enclosures to isolate electrical devices from their surroundings, and to cables to isolate wires from the environment through which the cable runs. Electromagnetic shielding that blocks radio frequency electromagnetic radiation is also known as RF shielding. An electromagnetic field is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature. In physics and electrical engineering, a conductor is an object or type of material that allows the flow of an electrical current in one or more directions. Materials made of metal are common electrical conductors. Electrical current is generated by the flow of negatively charged electrons, positively charged holes, and positive or negative ions in some cases. Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system in the frequency range from around twenty thousand times per second to around three hundred billion times per second. This is roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies; these are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves. Different sources specify different upper and lower bounds for the frequency range. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume and the frequency of the fields of interest and the size, shape and orientation of apertures in a shield to an incident electromagnetic field. In electronics and telecommunication, coupling is the desirable or undesirable transfer of energy from one medium, such as a metallic wire or an optical fiber, to another medium. A Faraday cage or Faraday shield is an enclosure used to block electromagnetic fields. A Faraday shield may be formed by a continuous covering of conductive material, or in the case of a Faraday cage, by a mesh of such materials. Faraday cages are named after the English scientist Michael Faraday, who invented them in 1836. Typical materials used for electromagnetic shielding include sheet metal, metal screen, and metal foam. Any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface. Sheet metal is metal formed by an industrial process into thin, flat pieces. Sheet metal is one of the fundamental forms used in metalworking and it can be cut and bent into a variety of shapes. Countless everyday objects are fabricated from sheet metal.
Thicknesses can vary significantly; extremely thin sheets are considered foil or leaf, and pieces thicker than 6 mm (0.25 in) are considered plate steel or "structural steel." A metal foam is a cellular structure consisting of a solid metal with gas-filled pores comprising a large portion of the volume. The pores can be sealed or interconnected. The defining characteristic of metal foams is a high porosity: typically only 5–25% of the volume is the base metal, making these ultralight materials. The strength of the material is due to the square-cube law. In physics, the wavelength is the spatial period of a periodic wave—the distance over which the wave's shape repeats. It is thus the inverse of the spatial frequency. Wavelength is usually determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. Wavelength is commonly designated by the Greek letter lambda (λ). The term wavelength is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids. Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed on to the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding. Copper is a chemical element with symbol Cu and atomic number 29. It is a soft, malleable, and ductile metal with very high thermal and electrical conductivity. A freshly exposed surface of pure copper has a pinkish-orange color. Copper is used as a conductor of heat and electricity, as a building material, and as a constituent of various metal alloys, such as sterling silver used in jewelry, cupronickel used to make marine hardware and coins, and constantan used in strain gauges and thermocouples for temperature measurement. Nickel is a chemical element with symbol Ni and atomic number 28. It is a silvery-white lustrous metal with a slight golden tinge. Nickel belongs to the transition metals and is hard and ductile. Pure nickel, powdered to maximize the reactive surface area, shows a significant chemical activity, but larger pieces are slow to react with air under standard conditions because an oxide layer forms on the surface and prevents further corrosion (passivation). Even so, pure native nickel is found in Earth's crust only in tiny amounts, usually in ultramafic rocks, and in the interiors of larger nickel–iron meteorites that were not exposed to oxygen when outside Earth's atmosphere. A chassis ground is a link between different metallic parts of a machine to ensure an electrical connection between them. Examples include electronic instruments and motor vehicles. Electromagnetic shielding is the process of lowering the electromagnetic field in an area by barricading it with conductive or magnetic material. Copper is used for radio frequency (RF) shielding because it absorbs radio and electromagnetic waves. 
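The rule stated earlier, that any holes in a shield or mesh must be much smaller than the wavelength being blocked, can be made concrete with the relation λ = c / f. The short Python sketch below is illustrative only and is not part of the original text; the frequencies chosen and the 1 mm mesh aperture are assumed example values.

```python
# Illustrative check of the "holes much smaller than the wavelength" rule.
# Frequencies and the 1 mm mesh aperture are example values, not from the article.
C = 299_792_458.0  # speed of light in m/s

sources_hz = {
    "FM radio (100 MHz)": 100e6,
    "Wi-Fi (2.4 GHz)": 2.4e9,
    "Microwave oven (2.45 GHz)": 2.45e9,
    "mmWave 5G (60 GHz)": 60e9,
}

aperture_m = 1e-3  # a 1 mm hole in the shield or mesh

for name, f in sources_hz.items():
    wavelength = C / f
    print(f"{name}: wavelength = {wavelength * 100:.2f} cm, "
          f"aperture/wavelength = {aperture_m / wavelength:.1e}")
```

For the 2.45 GHz case the computed wavelength is roughly 12 cm, so millimetre-scale holes are about a hundredth of a wavelength and the mesh still behaves approximately like an unbroken conducting surface.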
Properly designed and constructed copper RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities. Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Radio waves have frequencies as high as 300 gigahertz (GHz) to as low as 30 hertz (Hz). At 300 GHz, the corresponding wavelength is 1 mm, and at 30 Hz is 10,000 km. Like all other electromagnetic waves, radio waves travel at the speed of light. They are generated by electric charges undergoing acceleration, such as time varying electric currents. Naturally occurring radio waves are emitted by lightning and astronomical objects. One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor. The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor. Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields. A shielded cable or screened cable is an electrical cable of one or more insulated conductors enclosed by a common conductive layer. The shield may be composed of braided strands of copper, a non-braided spiral winding of copper tape, or a layer of conducting polymer. Usually this shield is covered with a jacket. In geometry, coaxial means that two or more three-dimensional linear forms share a common axis. Thus, it is concentric in three-dimensional, linear forms. The door of a microwave oven has a screen built into the window. From the perspective of microwaves (with wavelengths of 12 cm) this screen completes a Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes. RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports. NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection primarily because of the prohibitive cost. RF shielding is also used on medical and laboratory equipment to provide protection against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect equipment at AM, FM or TV broadcast facilities. Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor that cancels the applied field inside, at which point the current stops. Similarly, varying magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the magnetic field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside. Several factors serve to limit the shielding capability of real RF shields.
One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors exhibit a ferromagnetic response to low-frequency magnetic fields, so that such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield. In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time, yet any such radiation energy that is not reflected is absorbed by the skin (unless the shield is extremely thin), so in this case there is no electromagnetic field inside either. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth. Equipment sometimes requires isolation from external magnetic fields. For static or slowly varying magnetic fields (below about 100 kHz) the Faraday shielding described above is ineffective. In these cases shields made of high magnetic permeability metal alloys can be used, such as sheets of permalloy and mu-metal, or with nanocrystalline grain structure ferromagnetic metal coatings. These materials don't block the magnetic field, as with electric shielding, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at both very low magnetic field strengths and at high field strengths where the material becomes saturated. So to achieve low residual fields, magnetic shields often consist of several enclosures one inside the other, each of which successively reduces the field inside it. Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding: using a field created by electromagnets to cancel the ambient field within a volume. Solenoids and Helmholtz coils are types of coils that can be used for this purpose. Additionally, superconducting materials can expel magnetic fields via the Meissner effect. Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability $\mu_r$, inner radius $a$ and outer radius $b$. We then put this object in a constant magnetic field: $\mathbf{H}_0 = H_0\hat{z}$. Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, we can define a magnetic scalar potential $\Phi_M$, with $\mathbf{H} = -\nabla\Phi_M$, that satisfies Laplace's equation: $\nabla^2\Phi_M = 0$. In this particular problem there is azimuthal symmetry, so we can write down that the solution to Laplace's equation in spherical coordinates is: $\Phi_M = \sum_{\ell=0}^{\infty}\left(A_\ell r^{\ell} + \frac{B_\ell}{r^{\ell+1}}\right)P_\ell(\cos\theta)$. After matching the boundary conditions $(\mathbf{B}_2 - \mathbf{B}_1)\cdot\hat{n} = 0$ and $\hat{n}\times(\mathbf{H}_2 - \mathbf{H}_1) = 0$ at the boundaries (where $\hat{n}$ is a unit vector that is normal to the surface pointing from side 1 to side 2), we find that the magnetic field inside the cavity in the spherical shell is: $\mathbf{H}_{\text{in}} = \eta\,\mathbf{H}_0$, where $\eta$ is an attenuation coefficient that depends on the thickness of the diamagnetic material and the magnetic permeability of the material: $\eta = \dfrac{9\mu_r}{(2\mu_r + 1)(\mu_r + 2) - 2\left(\frac{a}{b}\right)^{3}(\mu_r - 1)^{2}}$. This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds.
Notice that this coefficient appropriately goes to 1 (no shielding) in the limit $\mu_r \to 1$. In the limit $\mu_r \to \infty$ this coefficient goes to 0 (perfect shielding); for large $\mu_r$ the attenuation coefficient takes on the simpler form $\eta \approx \dfrac{9}{2\mu_r\left(1 - \frac{a^{3}}{b^{3}}\right)}$, which shows that the magnetic field decreases like $\mu_r^{-1}$. NOTE: In the above relations, $\mu_r$ is the relative permeability, which is the ratio of the permeability of a specific medium $\mu$ to the permeability of free space $\mu_0$: $\mu_r = \mu/\mu_0$, where $\mu_0 = 4\pi \times 10^{-7}\ \mathrm{N\,A^{-2}}$. An inductor, also called a coil, choke, or reactor, is a passive two-terminal electrical component that stores energy in a magnetic field when electric current flows through it. An inductor typically consists of an insulated wire wound into a coil around a core. The wave impedance of an electromagnetic wave is the ratio of the transverse components of the electric and magnetic fields. For a transverse-electric-magnetic (TEM) plane wave traveling through a homogeneous medium, the wave impedance is everywhere equal to the intrinsic impedance of the medium. In particular, for a plane wave travelling through empty space, the wave impedance is equal to the impedance of free space. The symbol Z is used to represent it and it is expressed in units of ohms. The symbol η (eta) may be used instead of Z for wave impedance to avoid confusion with electrical impedance. Coaxial cable, or coax, is a type of electrical cable that has an inner conductor surrounded by a tubular insulating layer, surrounded by a tubular conducting shield. Many coaxial cables also have an insulating outer sheath or jacket. The term coaxial comes from the inner conductor and the outer shield sharing a geometric axis. Coaxial cable was invented by English engineer and mathematician Oliver Heaviside, who patented the design in 1880. In physics, the Poynting vector represents the directional energy flux of an electromagnetic field. The SI unit of the Poynting vector is the watt per square metre (W/m2). It is named after its discoverer John Henry Poynting who first derived it in 1884. Oliver Heaviside also discovered it independently. A magnet is a material or object that produces a magnetic field. This magnetic field is invisible but is responsible for the most notable property of a magnet: a force that pulls on other ferromagnetic materials, such as iron, and attracts or repels other magnets. A solenoid is a coil wound into a tightly packed helix. The term was invented in 1823 by André-Marie Ampère to designate a helical coil. Skin effect is the tendency of an alternating electric current (AC) to become distributed within a conductor such that the current density is largest near the surface of the conductor, and decreases with greater depths in the conductor. The electric current flows mainly at the "skin" of the conductor, between the outer surface and a level called the skin depth. The skin effect causes the effective resistance of the conductor to increase at higher frequencies where the skin depth is smaller, thus reducing the effective cross-section of the conductor. The skin effect is due to opposing eddy currents induced by the changing magnetic field resulting from the alternating current. At 60 Hz in copper, the skin depth is about 8.5 mm. At high frequencies the skin depth becomes much smaller. Increased AC resistance due to the skin effect can be mitigated by using specially woven litz wire. Because the interior of a large conductor carries so little of the current, tubular conductors such as pipe can be used to save weight and cost.
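As a rough numerical illustration of the relations above, the sketch below (plain Python, standard library only) evaluates the spherical-shell attenuation coefficient η for a high-permeability shell and the skin depth of copper at two frequencies. The relative permeability of 50,000, the shell radii and the copper resistivity of 1.68 × 10⁻⁸ Ω·m are typical textbook figures chosen for the example, not values given in this article.

```python
import math

# Attenuation coefficient for a spherical shell of relative permeability mu_r,
# inner radius a and outer radius b, in a uniform applied field (formula above).
def shell_attenuation(mu_r: float, a: float, b: float) -> float:
    ratio = (a / b) ** 3
    return 9.0 * mu_r / ((2.0 * mu_r + 1.0) * (mu_r + 2.0)
                         - 2.0 * ratio * (mu_r - 1.0) ** 2)

MU0 = 4.0e-7 * math.pi  # permeability of free space, N/A^2

# Skin depth: delta = sqrt(2 * rho / (omega * mu_r * mu_0))
def skin_depth(resistivity: float, mu_r: float, freq_hz: float) -> float:
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 * resistivity / (omega * mu_r * MU0))

# Example: a mu-metal-like shell (mu_r ~ 50,000 is a commonly quoted figure),
# 10 cm inner radius, 11 cm outer radius.
eta = shell_attenuation(mu_r=50_000, a=0.10, b=0.11)
print(f"shell attenuation eta ~ {eta:.2e}  (interior field = eta * applied field)")

# Example: copper (resistivity ~1.68e-8 ohm*m, mu_r ~ 1) at 60 Hz and 2.45 GHz.
for f in (60.0, 2.45e9):
    print(f"copper skin depth at {f:g} Hz: {skin_depth(1.68e-8, 1.0, f) * 1000:.4f} mm")
```

Running this gives an interior field a few thousand times weaker than the applied field for the assumed shell, and a copper skin depth of roughly 8.4 mm at 60 Hz, consistent with the 8.5 mm figure quoted above.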
Eddy currents are loops of electrical current induced within conductors by a changing magnetic field in the conductor according to Faraday's law of induction. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time-varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor. The magnitude of the current in a given loop is proportional to the strength of the magnetic field, the area of the loop, and the rate of change of flux, and inversely proportional to the resistivity of the material. When graphed, these circular currents within a piece of metal look vaguely like eddies or whirlpools in a liquid. Induction heating is the process of heating an electrically conducting object by electromagnetic induction, through heat generated in the object by eddy currents. An induction heater consists of an electromagnet, and an electronic oscillator that passes a high-frequency alternating current (AC) through the electromagnet. The rapidly alternating magnetic field penetrates the object, generating electric currents inside the conductor called eddy currents. The eddy currents flowing through the resistance of the material heat it by Joule heating. In ferromagnetic materials like iron, heat may also be generated by magnetic hysteresis losses. The frequency of current used depends on the object size, material type, coupling and the penetration depth. In electromagnetism, permeability is the measure of the ability of a material to support the formation of a magnetic field within itself, otherwise known as distributed inductance in Transmission Line Theory. Hence, it is the degree of magnetization that a material obtains in response to an applied magnetic field. Magnetic permeability is typically represented by the (italicized) Greek letter µ. The term was coined in September 1885 by Oliver Heaviside. The reciprocal of magnetic permeability is magnetic reluctivity. A Helmholtz coil is a device for producing a region of nearly uniform magnetic field, named after the German physicist Hermann von Helmholtz. It consists of two electromagnets on the same axis. Besides creating magnetic fields, Helmholtz coils are also used in scientific apparatus to cancel external magnetic fields, such as the Earth's magnetic field. In physics, Larmor precession is the precession of the magnetic moment of an object about an external magnetic field. Objects with a magnetic moment also have angular momentum and effective internal electric current proportional to their angular momentum; these include electrons, protons, other fermions, many atomic and nuclear systems, as well as classical macroscopic systems. The external magnetic field exerts a torque on the magnetic moment, causing it to precess about the direction of the field. A pinch is the compression of an electrically conducting filament by magnetic forces. The conductor is usually a plasma, but could also be a solid or liquid metal. Pinches were the first type of device used for controlled nuclear fusion. The word electricity refers generally to the movement of electrons through a conductor in the presence of potential and an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals or energy travel as electromagnetic waves typically on the order of 50%–99% of the speed of light, while the electrons themselves move (drift) much more slowly.
In a conductor carrying alternating current, if currents are flowing through one or more other nearby conductors, such as within a closely wound coil of wire, the distribution of current within the first conductor will be constrained to smaller regions. The resulting current crowding is termed the proximity effect. This crowding gives an increase in the effective resistance of the circuit, which increases with frequency. A microwave cavity or radio frequency (RF) cavity is a special type of resonator, consisting of a closed metal structure that confines electromagnetic fields in the microwave region of the spectrum. The structure is either hollow or filled with dielectric material. The microwaves bounce back and forth between the walls of the cavity. At the cavity's resonant frequencies they reinforce to form standing waves in the cavity. Therefore, the cavity functions similarly to an organ pipe or sound box in a musical instrument, oscillating preferentially at a series of frequencies, its resonant frequencies. Thus it can act as a bandpass filter, allowing microwaves of a particular frequency to pass while blocking microwaves at nearby frequencies. A separate glossary of ferromagnetic material properties covers the terms used to describe ferromagnetic materials and magnetic cores. Magnetic levitation, maglev, or magnetic suspension is a method by which an object is suspended with no support other than magnetic fields. Magnetic force is used to counteract the effects of the gravitational acceleration and any other accelerations.
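For a simple rectangular cavity, the resonant frequencies follow the standard formula f = (c/2)·√((m/a)² + (n/b)² + (p/d)²), where a, b, d are the cavity dimensions and m, n, p are the mode numbers (at least two of which must be non-zero). The sketch below evaluates a few low-order modes for an assumed cavity roughly the size of a microwave oven interior; the dimensions are made-up illustrative values, not taken from the text above.

```python
import itertools
import math

C = 299_792_458.0  # speed of light, m/s

def rectangular_cavity_modes(a: float, b: float, d: float, max_index: int = 3):
    """Resonant frequencies f_mnp = (c/2) * sqrt((m/a)^2 + (n/b)^2 + (p/d)^2).

    Dimensions are in metres; at least two of m, n, p must be non-zero.
    Returns a sorted list of (frequency_GHz, (m, n, p)) tuples.
    """
    modes = []
    for m, n, p in itertools.product(range(max_index + 1), repeat=3):
        if sum(x > 0 for x in (m, n, p)) < 2:
            continue  # not a valid cavity mode
        f = (C / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2)
        modes.append((f / 1e9, (m, n, p)))
    return sorted(modes)

# Assumed cavity roughly the size of a microwave oven interior: 30 x 30 x 20 cm.
for f_ghz, mnp in rectangular_cavity_modes(0.30, 0.30, 0.20)[:6]:
    print(f"mode {mnp}: {f_ghz:.2f} GHz")
```

The point of the exercise is only to show that a closed metal box of everyday size resonates at a discrete ladder of gigahertz-range frequencies, which is why such a cavity behaves like a bandpass filter.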
By Andrew Fraknoi and Sherwood Harrington JUST TWO DECADES ago, black holes were an interesting footnote to our astronomical theories that few non-specialists had heard about. Today, black holes have “arrived” – one hears about them in Hollywood thrillers, in cartoon strips, and more and more on the science pages of your local newspaper. What exactly are these intriguing cosmic objects and why have they so captured the imagination of astronomers and the public? A black hole is what remains after the death of a very massive star. Although stars seem reasonably permanent on human time scales, we know that over the eons all stars will run out of fuel and eventually die. When smaller stars like our own Sun burn out, they simply shrink under their own weight until they become so compact they cannot be compressed any further. (This will not happen to the Sun for billions of years, so there is no reason to add a rider to your homeowner's policy at this time!) When the largest (most massive) stars have no more fuel left, they have a much more dramatic demise in store for them. These stars have so much material that they just cannot support themselves once their nuclear fires go out. Current theories predict that nothing can stop the collapse of these huge stars. Once they begin to die, whatever remains of them will collapse FOREVER. As the collapsing star falls in on itself, the pull of gravity near its surface will increase. Eventually its pull will become so great that nothing – not even light – can escape; the star will look BLACK to an outside observer. And anything you throw into it will never return. Hence astronomers have dubbed these collapsed stellar corpses “black holes.” Alert readers will quickly note that this explanation of black holes does not bode well for finding one. How do we detect something that cannot give off any light (or other form of radiation)? You might suggest that we can spot a black hole as it blocks the light of stars that happen to lie behind it. That might work if the black hole hovered near the Earth, but for any black hole that is a respectful distance away in space, the part of the sky it would cover would be so small as to be invisible. To make matters worse, the sort of black hole that forms from a single collapsing star would be only 10 or 20 miles across – totally insignificant in size compared to most objects astronomers study and much too small to help a distant black hole hunter on Earth. The size of a black hole, by the way, is not the size of the collapsing star remnant. The stuff of the former star does continue to collapse forever inside the black hole. What gives the hole its “size” is a special zone around the star’s collapsing core, called the “event horizon.” If you are outside this zone, and you have a powerful rocket, you still have a chance to get away. Once you pass inside this zone, the gravitational pull of the collapsing stuff is so great that nothing you can do can keep you from being pulled inexorably to your doom. The name “event horizon” comes from the fact that once objects are inside the zone, events that happen to them can no longer be communicated to the outside world. It is as if a tight “horizon” has been wrapped around the star. How then could we detect these bizarre objects and verify the strange things predicted about them?
It turns out that far away from a black hole the only way to detect it is to “watch it eating.” If a black hole forms in a single star system, there is very little material close to the collapsed remnant for its enormous gravity to pull in. But we believe that more than half of the stars form in double, triple or multiple systems. When two stars orbit each other in proximity, and one becomes a black hole, the other one may have some difficult times ahead. Under the right circumstances, material from the outer regions of the normal star will begin to flow toward its black hole companion. As particles of this stolen material are pulled into a twisting, whirling stream around the black hole’s event horizon, they are heated to enormous temperatures. They quickly become so hot that they glow – not just with visible light, but with far more energetic X-rays. (Of course, all this can be seen only above the event horizon; once the material falls into the horizon, we have no way of ever seeing it again.) Astronomers began searching in the 1970s for the tell-tale X-rays that indicate that a black hole is consuming a part of its neighbor star. Since cosmic X-rays are blocked by the Earth’s atmosphere, these observations became possible only when we could launch sensitive X-ray telescopes into space. But in the last decade and a half, at least three excellent candidates for a “feeding” black hole have been identified. Probably the best-known case is called Cygnus X-1, a system in the constellation of Cygnus the swan, in which we see a normal star that appears to be going around a region of space with nothing visible in it. Smack dab in the middle of that region, we see just the sort of X-rays that reveal the stream of material being sucked into the hole. While this sort of indirect evidence is not quite as satisfying as seeing a black hole “up close,” for now (and perhaps fortunately) it will have to do. What is intriguing astronomers these days is the possibility that enormous black holes may have formed in crowded regions of space. These may not just eat part of a companion star, but may actually consume many of their neighbor stars eventually. What we would then have is an even larger black hole, able to eat even more of the material in its immediate neighborhood. In the most populated areas of a galaxy – for example, its center – black holes may ultimately form that contain the material of a million or a billion stars. In recent years, astronomers have begun to see tantalizing evidence from the center of our own galaxy and from violent galaxies in the distant reaches of space indicating that such supermassive black holes may be more common than we ever imagined. If this evidence is further confirmed, we may find that the strange black hole plays an important role not only in the death of a few stars but even in the way entire galaxies of stars evolve.
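The sizes quoted in this article follow from the standard Schwarzschild radius formula, r_s = 2GM/c², which the article itself does not spell out. The short Python sketch below simply evaluates that formula for a few example masses, from a small stellar black hole up to the million- and billion-star cases mentioned above; the specific masses are illustrative choices, not values from the article.

```python
# Schwarzschild radius r_s = 2 * G * M / c^2, evaluated for a few example masses.
# The formula is standard physics; the specific masses are illustrative choices.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KM_PER_MILE = 1.609

def schwarzschild_radius_km(mass_kg: float) -> float:
    return 2.0 * G * mass_kg / C**2 / 1000.0

for label, solar_masses in [("3 solar masses", 3),
                            ("10 solar masses", 10),
                            ("1 million solar masses", 1e6),
                            ("1 billion solar masses", 1e9)]:
    r_km = schwarzschild_radius_km(solar_masses * M_SUN)
    print(f"{label}: r_s ~ {r_km:,.0f} km "
          f"(diameter ~ {2 * r_km / KM_PER_MILE:,.0f} miles)")
```

A few solar masses of collapsed material gives an event horizon roughly ten to twenty miles across, matching the figure quoted earlier, while a billion-solar-mass hole would span a region larger than the orbit of Uranus.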
A midden is defined as a 'dung hill or refuse heap.' More specifically it may refer to discarded items in archaeological sites. The Baja Peninsula and adjacent islands within the Sea of Cortez hold plentiful evidence of early human habitation, including shell middens. Some accumulations of shells may be 20 feet deep. Shellfish were gathered, taken ashore, opened and the shells discarded. Shellfish provided food but also had other uses. If pearls were found they could be fluted to attach to necklaces. Shells could serve as platters, scrapers, or other tools. In one study, 73 species of mollusks were described from shell middens. Eating preferences of the collectors and the relative abundance of shellfish determined the composition of the piles. Numerous archaeological Baja sites containing signs of habitation, rock rings, hearths, stone implements or shell middens exist on the islands of the Sea of Cortez and on the peninsula. On the Isla Espíritu Santo/Isla Partida Sur complex alone, more than 125 sites have been discovered. Radiocarbon dating reflects a time frame of 9000 B.C. to 1460 A.D. for a number of samples. There are reports of shells aged by radiocarbon methods to approximately 40,000 years before present, but it is thought that the shells in this context were ancient long before humans picked them up and transported them. It can be difficult to distinguish natural shell deposits from middens. One difference is that people selectively gather shellfish, so the species represented are edible and not randomly intermingled with other shells. Of course, waves sort shells and pebbles based on size and weight, so this must be factored into the analysis. Middens containing a high percentage of pearl oysters are from the period after Europeans arrived in the 16th century at what Hernan Cortés named Islas de Perlas (the Pearl Islands). By 1880, the shells became more valuable than the pearls. European markets prized the mother-of-pearl for buttons, combs, and knife handles. If you are fortunate to visit a shell midden or other archaeological Baja sites while on a Baja California cruise or other Mexican vacation, tread lightly. Do not disturb or remove this fascinating evidence of human history. These sites are protected and have research value to provide more information about the remarkable culture of former inhabitants.
They're big, fuzzy, buzzy... and a little bit clumsy. One's an important and prolific pollinator, while the other does damage that outweighs any pollination benefits. CARPENTER BEES vs. BUMBLEBEES Carpenter bee appearance: Carpenter bees have a bare, shiny abdomen that's all black. They measure about 1 inch long. The thorax on some carpenter bee species is yellow; other species have a white, black, brown or blue thorax. A bumblebee's head, thorax and abdomen are all fuzzy. The thorax has a thin yellow band, and the abdomen is yellow and black. Bumblebees can range from 3/4 inch to 1.5 inch in length. Carpenter bee nests: Carpenter bees make their nests in wood, drilling a hole and then turning 90 degrees to excavate a tunnel in which to lay eggs. Bumblebees build their nests close to the ground, in places like compost heaps, wood or leaf piles, or abandoned rodent holes. Carpenter bee colony: Carpenter bees are solitary bees and do not form colonies. They live in small nests constructed by one female who bores into wood to lay her eggs in several small cells. Bumblebees are social insects that live in colonies of 50-400 bees. There is one queen, and the other bumblebees gather food to serve her and care for the developing larvae. Carpenter bee habits: Carpenter bees hover around wood siding, decks, eaves and fences to excavate a nest or feed the larvae inside. Bumblebees visit flowers to collect pollen and nectar. They can make flowers release pollen through the fast vibration of their wings. Carpenter bee aggression: Male carpenter bees are known for "buzzing" the heads of humans in an aggressive manner, but they cannot sting. Female carpenter bees can sting if the nest is threatened. A female bumblebee can sting, and will do so repeatedly without losing its stinger. Bumblebees are typically not aggressive unless the nest is threatened. For many, the costly structural damage caused by carpenter bees outweighs any beneficial aspect of their presence, which is why we created the TrapStik® for Carpenter Bees. Just hang the TrapStik® horizontally near untreated wood or wherever you notice carpenter bees hovering, and you can catch them before the damage is done. Spring is the best time to catch carpenter bees before they mate, but TrapStik® can be used throughout the summer and early fall to get rid of carpenter bees.
What is Cholesterol? Cholesterol is a waxy substance that is produced naturally in all parts of the body, and your body needs cholesterol to function normally. Cholesterol is found in cell walls or membranes, including the brain, nerves, muscle, skin, liver, intestines, the heart, and everywhere else in the body. Your body uses cholesterol to produce many hormones, vitamin D, and the bile acids that help to digest fat. It takes only a small amount of cholesterol in the blood to meet these needs. What Happens if You Have Too Much Cholesterol? However, if you have too much cholesterol in your bloodstream, it can lead to atherosclerosis, a condition in which fat and cholesterol are deposited in the walls of the arteries in many parts of the body, including the coronary arteries feeding the heart. In time, narrowing of the coronary arteries by atherosclerosis can produce the signs and symptoms of heart disease, including angina and heart attack. Most people are actually unaware of the relationship between cholesterol levels and health. Others simply find it hard to get rid of bad habits that lead to high cholesterol, or they ignore the importance of lowering cholesterol. Most people continue with their unhealthy cholesterol levels until they have to bear the consequences. Cholesterol itself is not unimportant, but it is essential that we maintain a lower cholesterol level, since high amounts can lead to heart disease. Types of Cholesterol We need cholesterol. The body needs cholesterol for its various processes. But we have to know that there are two types of cholesterol in the bloodstream: high-density lipoprotein (HDL) and low-density lipoprotein (LDL). HDL and LDL perform opposing functions in the body. HDL cholesterol decreases the risk of heart disease by clearing the fat and cholesterol deposits in the arterial walls. In contrast, LDL cholesterol deposits fats in the walls of the arteries. Hence, when we are told that we have to lower cholesterol, it refers to LDL cholesterol. Certainly, problems start to occur when levels of cholesterol tip in favor of LDL cholesterol. How does LDL cholesterol increase then? Diets high in saturated fats are the main sources of LDL cholesterol. Saturated fats can be obtained from fatty foods such as beef or lamb. Fast foods are also high in saturated fats. Knowing that the diet is the main source of LDL cholesterol, it is important that we keep our food intake in check. Avoiding fatty food sources can be one step toward the goal of maintaining lower cholesterol. Having high cholesterol can also lead to persistently high blood pressure. This can lead to serious problems like heart attack, stroke, heart failure or kidney disease. The medical name for persistently high blood pressure is 'hypertension'. Natural Remedies For High Cholesterol Hawthorn is a shrub that has leaves, berries, and flowers. Hawthorn has been used for centuries to aid in heart disorders. Hawthorn has been used to aid in congestive heart failure, high cholesterol, and high blood pressure. In fact, research has shown that Hawthorn has antioxidant properties that are helpful to the body on a cellular level. Hawthorn is also beneficial to the mitochondria of the cell. Mitochondria are the driving force of the heart's cells. This herb has anti-plaque properties. This is because Hawthorn prevents platelets from sticking together.
Research has shown that Hawthorn prevents cholesterol from being made in the liver and prevents its absorption in the intestine. Hawthorn is usually standardized to its content of flavonoids (2.2 percent) or oligomeric procyanidins (18.75 percent). The daily dosage as reflected in the literature ranges from 160 to 1,800 mg, but most physicians believe there is greater therapeutic effectiveness with higher dosages (600 to 1,800 mg in two or three divided doses daily). Side effects associated with Hawthorn therapy are digestive discomfort and dizziness. If you take high blood pressure medications or medications that decrease your heart rate, be careful, because Hawthorn does decrease blood pressure and heart rate. Omega 3 Fatty Acids The benefits of Omega-3 fatty acids are well known. In fact, the American Heart Association recommends that people who have heart disease take EPA and DHA. These are Omega-3 fatty acids. In fact, did you know there are two prescription Omega-3 products available in the United States? They are named Lovaza and Vascepa. Omega-3 fatty acids are not made in the body; therefore, these fatty acids must be consumed in the diet. Omega-3 protects the heart by: - Reducing triglycerides - Reducing arterial plaques - Providing anti-inflammatory effects - Lowering systolic and diastolic blood pressure Garlic has been shown to help reduce cholesterol. In clinical trials, aged garlic performed best. Garlic is also anti-inflammatory and anti-bacterial. High cholesterol levels are often linked to inflammation. There are, of course, medicines to treat high cholesterol; however, these drugs have unpleasant side effects for some people. Listed above are some natural options for high cholesterol.
Science KS2: Life cycle of an ant A leafcutter ant colony from Trinidad has been rescued and re-housed in a giant man-made nest in the UK, allowing an in-depth study into their normally hidden world. It’s the first time a man-made colony has been built on this scale and Professor Adam Hart gives four primary school scientists a tour. The young scientists are not able to see the queen as she is hidden deep within the nest. But, in the lab, Adam is able to show them a similar queen from another leafcutter colony. They learn that the queen is much larger than all the other ants, with the smaller ants that surround her tending to her every need. The life cycle of ants is described; the queen lays the eggs which hatch into larvae and then change to become the ants in the colony. When the eggs are laid they are all the same, but what and how much they are fed results in different kinds of ants, such as soldier ants and minima. The ant colony is very clever; if it comes under attack it produces more soldier ants, and if they need more leaves they will grow more ants to become foragers. This clip is from the series Life on Planet Ant. Students could role-play how an ant colony adapts to outside forces. One student could be selected as the ‘queen’ and two others as worker ants. Everyone else could stand in a ‘holding zone’. The ‘queen’ could tap students in the holding zone on the shoulder and they would then become ‘eggs’. Once the eggs are created, the worker ants should hand out different coloured bands, depending on whether the ant in the egg will become a soldier or a worker ant. Once a few ants have been ‘born’, a caller could describe an outside force which is affecting the colony. For example, ‘another colony is attacking’. The workers should then hand out more ‘soldier’ bands. Outside forces should lead to killing a number of ants at once so that students can return to the holding zone, begin their life again as ‘eggs’ and continue the flow of the game. Examples include humans stepping on the colony, or doing battle with other ants. Students could draw the different stages of the ant life cycle in the correct order. This clip will be relevant for teaching Science/Biology at KS2 in England, Wales and Northern Ireland and 2nd Level in Scotland.
Human Gut Bacteria Took Root Before We Were Human Microbes in two bacterial families likely colonized the guts of our early ancestors around 15 million years ago. The relationship between humans and the bacteria in our guts extends far back into the past - to the time before modern humans even existed, a new study finds. Microbes in two bacterial families - Bacteroidaceae and Bifidobacteriaceae, which are present in humans and African apes - likely colonized the guts of a shared ancestor of both groups around 15 million years ago, the researchers discovered. Since then, the bacteria have inhabited the digestive systems of humans and apes for thousands of generations. The researchers' genetic data also tell a story of parallel evolution - in the microbes, and in the primate hosts they inhabited. "Just like we share a common ancestor with chimpanzees about 6 million years ago, a lot of our gut bacteria share a common ancestor with chimpanzee gut bacteria, which diverged around the same time," said study co-author Andrew Moeller, a postdoctoral researcher at the University of California at Berkeley. "And the same is true for human and gorilla gut bacteria. We share a common ancestor maybe about 15 million years ago, and we found that some of our gut bacteria shared common ancestry about the same time," Moeller said in a statement. Recent research has shown that humans' complex communities of gut microbes may influence our immune systems, and may be associated with certain moods and behaviors. This new study provides the first evidence of when in our evolutionary history those microbes may have colonized us, the researchers said. Previous findings enabled the researchers to identify an animal species purely from the groups of microbes in their gut, study co-author Howard Ochman, a professor of integrative biology at the University of Texas at Austin, said in a statement. "If you were to give me a sample that came from chimpanzees, I could easily distinguish them from those that had come from human populations," Ochman said. However, earlier analysis of gut microbes in humans and apes could only compare the overall diversity of their bacterial communities. In the new study, researchers identified individual types of bacteria. By comparing bacteria between humans and apes - chimpanzees, bonobos and two subspecies of gorillas - the researchers trace their lineages of bacteria through time. Numerous studies have shown that many factors affect the bacterial diversity in the human gut, including people's diet, geography and medical history. The new findings suggest that evolution may have played a more important role in establishing some of these microbial partners than previously thought. "Just like our genes are passed down every generation, some of our gut bacteria have been passed down in an unbroken line of descent for a really long time," Moeller said. The findings were published online today (July 21) in the journal Science.
Tree of life brought to scale by Yale scientists Examples of biological scaling are everywhere. The paw of a mouse is smaller than the human hand. Our own organs and limbs typically scale with our body size as we develop and grow. Scientists at Yale have shown that this same phenomenon exists at the subcellular level in the smallest bacteria, where the size of the nucleoid—the membrane-less region containing the cell's genes—also scales with the size of a bacterial cell. Absent a membrane or "envelope" associated with biological scaling in other life forms, scientists have remained unclear about the presence of scaling in bacteria for over a century. Published today in Cell, researchers at the Yale Microbial Sciences Institute have concluded that this scaling effect occurs across different species of bacteria at the single-cell level, with the nucleoid growing at the same rate as the cell independently of changes in DNA content. Led by Christine Jacobs-Wagner, William H. Fleming, MD Professor of Molecular, Cellular, and Developmental Biology, the research shows that the scaling trait was likely present billions of years ago, predating the development of intracellular membrane structures. Alongside first authors William Gray and Sander Govers, graduate student and postdoctoral associate in the Jacobs-Wagner Lab, the work establishes for the first time that biological scaling exists across all three taxonomic domains of life: the Archaea, Eukarya, and now the Bacteria. The findings identify general organizational principles and biophysical features of bacterial cells, expected to inform scientific advances looking into the constraints of how a cell is built.
According to American Weighing Scales, different types of platform scales are composed of different parts and use different terminology. However, there are two main types: spring scales and balance beam scales. The main function of both mechanisms is to provide an accurate weight measurement of an object. American Weighing Scales describes spring scales as containing a spring mechanism that stretches in direct proportion to the weight of an object placed on the bottom end, commonly called the "load receiving end." These types of scales are also known as "strain gauge scales." Modern versions use electronics to replace the actual spring or gauge, and are most likely found in laboratories because of their greater accuracy. Balance beam scales are traditional, more widely recognized scales. A balance beam scale uses a fulcrum and lever. The lever balances across the fulcrum with the object to be weighed at one end and a counterweight at the other. Alternatively, the lever can be suspended from the fulcrum with the pans or plates for the object and counterweight suspended from each end. Once the lever is horizontal to the fulcrum, the unknown weight and the counterweight are equal, and the weight of an object can be determined.
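Both mechanisms boil down to simple proportionality relations: a spring (strain gauge) scale reads weight from the deflection via Hooke's law, F = kx, while a balance beam balances torques about the fulcrum, so that m_object × d_object = m_counterweight × d_counterweight. The small Python sketch below illustrates both; the spring constant, deflection and arm lengths are made-up example numbers, not values from any particular scale.

```python
# Illustrative calculations for the two platform scale types described above.
# The spring constant, deflection and lever-arm values are made-up examples.

def spring_scale_mass(spring_constant_n_per_m: float, deflection_m: float,
                      g: float = 9.81) -> float:
    """Hooke's law: F = k * x, so the mass on the load-receiving end is k*x/g."""
    return spring_constant_n_per_m * deflection_m / g

def balance_beam_mass(counterweight_kg: float, counterweight_arm_m: float,
                      object_arm_m: float) -> float:
    """Lever balance about the fulcrum: m_obj * d_obj = m_cw * d_cw."""
    return counterweight_kg * counterweight_arm_m / object_arm_m

# A 2000 N/m spring stretched 4.9 mm corresponds to roughly a 1 kg load.
print(spring_scale_mass(2000.0, 0.0049))
# A 0.5 kg counterweight at 40 cm balances a 1 kg object at 20 cm from the fulcrum.
print(balance_beam_mass(0.5, 0.40, 0.20))
```

The second function also makes clear why the counterweight can be much lighter than the object being weighed: the lever arms, not just the masses, determine when the beam sits horizontal.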
Urtica is a genus of flowering plants in the family Urticaceae. Many species have stinging hairs and may be called nettles or stinging nettles, although the latter name applies particularly to Urtica dioica. Urtica species are food for the caterpillars of numerous Lepidoptera (butterflies and moths), such as the tortrix moth Syricoris lacunana and several Nymphalidae, such as Vanessa atalanta, one of the red admiral butterflies. Urtica species grow as annuals or perennial herbaceous plants, rarely shrubs. They can reach, depending on the type, location and nutrient status, a height of 10–300 cm. The perennial species have underground rhizomes. The green parts have stinging hairs. Their often quadrangular stems are unbranched or branched, erect, ascending or spreading. Most leaves and stalks are arranged across opposite sides of the stem. The leaf blades are elliptic, lanceolate, ovate or circular. The leaf blades usually have three to five, rarely up to seven veins. The leaf margin is usually serrate to more or less coarsely toothed. The often-lasting bracts are free or fused to each other. The cystoliths are extended to more or less rounded. Species in the genus Urtica, and their primary natural ranges, include: - Urtica andicola Webb - Urtica angustifolia Fisch. ex Hornem. China, Japan, Korea - Urtica ardens China - Urtica aspera Petrie South Island, New Zealand - Urtica atrichocaulis Himalaya, southwestern China - Urtica atrovirens western Mediterranean region - Urtica australis Hook.f. South Island, New Zealand and surrounding subantarctic islands - Urtica cannabina L., Western Asia from Siberia to Iran - Urtica chamaedryoides (heartleaf nettle), southeastern North America - Urtica dioica L. 1753 (stinging nettle or bull nettle), Europe, Asia, North America - Urtica dioica subsp. galeopsifolia Wierzb. ex Opiz (fen nettle or stingless nettle), Europe. (Sometimes treated as a separate species Urtica galeopsifolia.) - Urtica dubia (large-leaved nettle), Canada - Urtica ferox G.Forst. (ongaonga or tree nettle), New Zealand - Urtica fissa China - Urtica gracilenta (mountain nettle), Arizona, New Mexico, west Texas, northern Mexico - Urtica hyperborea Himalaya from Pakistan to Bhutan, Mongolia and Tibet, high altitudes - Urtica incisa Poir (scrub nettle), Australia, New Zealand - Urtica kioviensis Rogow. eastern Europe - Urtica laetivirens Maxim. Japan, Northeast China - Urtica linearifolia (Hook.f.) Cockayne (creeping or swamp nettle), New Zealand - Urtica mairei Himalaya, southwestern China, northeastern India, Myanmar - Urtica massaica Africa - Urtica membranacea Poir. ex Savigny Mediterranean region, Azores - Urtica morifolia Poir. Canary Islands (endemic) - Urtica parviflora Himalaya (lower altitudes) - Urtica peruviana D.Getltman Perú - Urtica pseudomagellanica D.Geltman Bolivia - Urtica pilulifera (Roman nettle), southern Europe - Urtica platyphylla Wedd. China, Japan - Urtica procera Mühlenberg (tall nettle), North America - Urtica pubescens Ledeb. Southwestern Russia east to central Asia - Urtica rupestris Sicily (endemic) - Urtica sondenii (Simmons) Avrorin ex Geltman northeastern Europe, northern Asia - Urtica taiwaniana Taiwan - Urtica thunbergiana Japan, Taiwan - Urtica triangularisa - Urtica urens L. 
(small nettle or annual nettle), Europe, North America Thanks to the stinging hairs, Urtica species are rarely eaten by herbivores, so they provide long-term shelter for insects, such as aphids, caterpillars, and moths. The insects, in turn, provide food for small birds, such as tits. There is historical evidence of Urtica species (or nettles in general) being used in medicine, folk remedies, cooking and fiber production. Urtica dioica is the main species used for these purposes, but a fair amount also refers to the use of Urtica urens, the small nettle. Arthritic joints were traditionally treated by whipping the joint with a branch of stinging nettles, a process called urtication. Nettles can also be used to make a herbal tea known as nettle tea. Use as food: see also nettle soup. Nettles have many folklore traditions associated with them. The folklore mainly relates to the stinging nettle (Urtica dioica), but the similar non-stinging Lamium may be involved in some traditions. Myths about health and wealth - Nettles in a pocket will keep a person safe from lightning and bestow courage. - Nettles kept in a room will protect anyone inside. - Nettles are reputed to enhance fertility in men, and fever could be dispelled by plucking a nettle up by its roots while reciting the names of the sick man and his family. Milarepa, the great Tibetan ascetic and saint, was reputed to have survived his decades of solitary meditation by subsisting on nothing but nettles; his hair and skin turned green and he lived to the age of 83. An old Scots rhyme about the nettle: - "Gin ye be for lang kail coo the nettle, stoo the nettle - Gin ye be for lang kail coo the nettle early - Coo it laich, coo it sune, coo it in the month o' June - Stoo it ere it's in the bloom, coo the nettle early - Coo it by the auld wa's, coo it where the sun ne'er fa's - Stoo it when the day daws, coo the nettle early." - (Old Wives Lore for Gardeners, M & B Boland) Coo, cow, and stoo are all Scottish for cut back or crop (although, curiously, another meaning of "stoo" is to throb or ache), while "laich" means short or low to the ground. Given the repetition of "early," presumably this is advice to harvest nettles first thing in the morning and to cut them back hard [which seems to contradict the advice of the Royal Horticultural Society]. A well-known English rhyme about the stinging nettle is: - Tender-handed, stroke a nettle, - And it stings you for your pains. - Grasp it like a man of mettle, - And it soft as silk remains.
In theory, the counterintuitive workings of quantum mechanics can guarantee that digital communications are utterly immune to prying eyes. That theory has advanced quickly, but the practice is now catching up, thanks to two developments by one of the field’s pioneers. University of Vienna physicist Anton Zeilinger and his team realized the first teleportation of photons in 1997. Not to be confused with the stock-in-trade of Star Trek’s Montgomery Scott, teleportation is the instantaneous transfer of the properties of one particle to another distant one; it’s key to perhaps the most unassailable version of quantum communications. In November, Zeilinger and his team reported that they’d taken the process two important steps further. First, they teleported not just a photon’s usual properties but also its strangest one: entanglement. What’s more, they did it over a record distance of 143 kilometers, linking the Canary Islands of La Palma and Tenerife. That distance is particularly significant, because it’s nearly as far as the boundary of low Earth orbit. Second, they pulled off a similar feat—although over a much shorter distance—using twisted light, the kind featuring photons having a property called orbital angular momentum. Photonics experts hope orbital angular momentum could hugely increase the bandwidth of optical telecommunications networks. Entanglement is a quantum phenomenon. When a pair of particles, such as photons, are created in a single physical process or interact with each other in a particular way, they become entangled—that is, they start behaving like a single particle, even when they become separated by any distance. Teleportation of entanglement, also known as entanglement swapping, makes use of another curious phenomenon: It’s also possible to entangle two photons by performing a joint measurement on them, known as a Bell-state measurement. Once these photons are linked, switching the polarization of one of them—say, from up to down—causes an instantaneous switch of the other photon’s polarization, from down to up. Here’s how the entanglement swap works: Assume you have two pairs of entangled photons, 0 and 1 in the receiving station and 2 and 3 in the transmitting station. Both entangled pairs are completely unaware of each other; in other words, no physical link exists between them. Now, assume you send photon 3 from the transmitter to the receiver and perform a Bell-state measurement simultaneously on photon 3 and on photon 1. As a result, 3 and 1 become entangled. But surprisingly, photon 2, which stayed home, is now entangled with photon 0, at the receiver. The entanglement between the two pairs has been swapped, and a quantum communication channel has been established between photons 0 and 2, although they’ve never been formally introduced [see illustration, “Entanglement Swapping”]. Entanglement swapping will be an important component of future secure quantum links with satellites, says Thomas Scheidl, a member of Zeilinger’s research group. The team is working with a group at the University of Science and Technology of China on a satellite project led by Zeilinger’s former Ph.D. student Jian Wei Pan. Next year, when the Chinese Academy of Sciences launches its Quantum Science Satellite—which will have an onboard source of entangled photons—the satellite and ground stations in Europe and China will form the first space–Earth quantum network. They will implement a quantum-key relay protocol, securely linking the ground stations in Europe and China, says Scheidl. 
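The swap described above can be checked with a few lines of linear algebra. The sketch below is only an illustration (it is not the Vienna group's code, and photon loss, detector efficiency and the other three Bell outcomes are ignored): it prepares two independent Bell pairs, 0–1 and 2–3, projects photons 1 and 3 onto a Bell state to stand in for the joint Bell-state measurement, and confirms that photons 0 and 2 emerge entangled even though they never interacted.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a 2x2 tensor B[i, j].
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)

# Four-photon state: pair (0,1) entangled, pair (2,3) entangled, pairs independent.
# Index order of psi is [photon0, photon1, photon2, photon3].
psi = np.einsum("ab,cd->abcd", bell, bell)

# Bell-state measurement on photons 1 and 3: project onto <Bell|_{13}.
# Contracting the projector over indices b (photon 1) and d (photon 3)
# leaves the (unnormalized) state of photons 0 and 2.
phi_02 = np.einsum("bd,abcd->ac", bell.conj(), psi)

prob = np.sum(np.abs(phi_02) ** 2)   # probability of this particular Bell outcome
phi_02 /= np.sqrt(prob)              # renormalize the post-measurement state

print("Outcome probability:", round(prob, 3))              # 0.25 for this outcome
print("State of photons 0 and 2:\n", phi_02.round(3))
print("Overlap with |Bell>_{02}:",
      round(abs(np.sum(bell.conj() * phi_02)), 3))          # 1.0: fully entangled
```

The other three possible outcomes of the Bell-state measurement would likewise leave photons 0 and 2 in a (different but known) Bell state, which is why the receiver only needs the classically communicated measurement result to make use of the swapped entanglement.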
The Quantum Science Satellite will be launched into a low Earth orbit and will communicate with one ground station at a time. “The satellite flies over a ground station in Europe and establishes a quantum link to the ground station, and you generate a key between the satellite and the ground station in Europe. Then, some hours later, the satellite will pass a ground station in China and establish a second quantum link and secure key with a ground station in China,” Scheidl explains. “The satellite then has both keys available, and you can combine both keys into one key,” he adds. “Then you send, via a classical channel, the key combination to both of the ground stations. This you can do publicly because no one can learn anything from this combined key. Because one ground station has an individual key, it can undo this combined key and learn about the key of the other ground station.” The future quantum Internet will need a network of satellites and ground stations, similar to that of the Global Positioning System, in order to exchange quantum keys instantaneously. Kilometer distances for transmitting twisted light—light having an orbital angular momentum, or a wave front that resembles fusilli pasta—were proved possible a year ago by the Vienna team. “Last year was a necessary step, and it was successful,” says Mario Krenn, a member of Zeilinger’s research group. They knew that entangling twisted photons was the next step that could increase the bandwidth of quantum-key generation. And that’s just what Krenn and his colleagues have done, entangling twisted photons and sending them 3 kilometers from each other: “We were able to show that on the single photon level, each photon can keep information in the form of orbital angular momentum over a large distance and can be entangled even after 3 kilometers.” The control of twisted quantum states is much more complicated than the control of polarization states, but the possibility of being able to entangle photons on multiple levels is worth the effort, says Krenn. Photons can exist only in two polarization states or levels, up and down. But the number of orbital angular momentum states is, in theory, unlimited, explains Krenn. “In the lab, we have shown that we can create a 100-dimensional entanglement—up to a hundred different levels of the photons can be entangled.” And that will mean a truly high-bandwidth, uncrackable global network.
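The key-relay step Scheidl describes can be made concrete with a short sketch. One detail is assumed here rather than stated in the article: that the satellite combines the two keys with a bitwise XOR, the standard way to make such a combination safe to publish, since the XOR alone reveals nothing about either key, while a station holding one of the keys can undo it and recover the other.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b, strict=True))

# Key A: generated between the satellite and the European ground station.
# Key B: generated hours later between the satellite and the Chinese ground station.
key_europe = secrets.token_bytes(32)
key_china = secrets.token_bytes(32)

# The satellite combines the keys and broadcasts the combination over a
# classical, public channel; on its own it leaks nothing about either key.
public_combination = xor_bytes(key_europe, key_china)

# Each ground station undoes the combination with its own key and thereby
# learns the other station's key, giving the two sites a shared secret.
recovered_by_europe = xor_bytes(public_combination, key_europe)   # == key_china
recovered_by_china = xor_bytes(public_combination, key_china)     # == key_europe

assert recovered_by_europe == key_china
assert recovered_by_china == key_europe
print("Ground stations now share a secret key of", len(key_china), "bytes")
```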
This article was published originally on 4/23/2008. The iris borer is the most destructive insect pest of iris. It directly damages leaves and rhizomes and introduces the bacterium that causes a foul-smelling soft rot. Damage first appears in May or June when the iris borer caterpillar feeds inside the foliage and causes dark-streaked, watery areas or ragged edges on the developing leaves. The caterpillars move downward in the plant and by July or early August may have caused extensive destruction inside the rhizomes. Rhizomes may be completely hollow from borer feeding or decayed by the soft rot bacteria. Iris borer caterpillars are usually found inside the rhizome at the time irises are dug and divided. Some may have tunneled a short distance away to pupate in the soil. The caterpillars are smooth and plump and resemble cutworms. They vary from light to deep pink with a brown head and are 1.5 to 2 inches in length. Iris borer moths emerge from the pupae in the soil by late summer and lay eggs on the dying iris leaves and surface debris. The eggs overwinter and hatch the following April or early May. There is only one generation per year. Iris borers and borer-damaged rhizomes uncovered during dividing should be discarded. If you find empty damaged rhizomes, search the nearby soil for shiny, dark chestnut-brown pupae and discard them, also. There is no benefit to treating the soil when replanting the irises. Control: Sanitation is an important part of managing iris borer. Discard all damaged and infested rhizomes. During the fall or very early spring, remove and discard all old plant material and debris from the iris bed (to eliminate eggs before they hatch). If iris borer has been a persistent problem, you might consider spraying the foliage with insecticide in the spring when leaves are 5 to 6 inches in length. Insecticide sprays are only effective if used against small larvae before they tunnel into the plants. Use a general garden insecticide such as acephate or spinosad according to label directions. Beneficial, insect-attacking nematodes may be a naturally occurring biological control. They can also be purchased and added to the soil, though the effectiveness of commercially available nematodes has not been documented in Iowa. Nematodes are perishable and need a consistently moist environment to survive. Year of Publication: IC-499(7) -- April 23, 2008
The purpose of a network is to gain access to a resource available on another system, such as data on a disk or a printer. For accessing data on the disk of another system on the network, actions have to be taken on both systems.

System allowing other systems to access data on its disk = Network Server: For security reasons, data is NOT accessible unless you give others permission to access it. This process is called "Sharing". You can give permission to access the complete hard disk or just a directory. First, decide which part of your disk is to be accessible ("is Shared") by selecting (clicking) it. Then call up the Context menu by clicking the RIGHT mouse button and select "Sharing". Give the permission by selecting the radio button "Shared As". By default, the Share Name will be the name of the selected item (in our case, the directory), but you can enter a new name under which it will be known on the network. You also define whether others are just allowed to read the data on your disk or whether they have "Full" access (allowing them to read, write and delete files). As an indication that a disk or directory is now accessible via the network, it is displayed with the "holding hand" icon.

System accessing data on the disk of another system = Network Client: A Windows 95 (and NT4) system usually gains access to a network resource via the "Network Neighborhood". First select a system on the network. To view the available resources, "open" it by double-clicking on it. It now shows the list of shared resources (disks, directories and/or printers). To view (and to work with) the data, open the network resource. To gain permanent access, the network resource is defined as a "virtual disk": it is listed like a hard disk in your system and can be used like the hard disk in your system, but the disk is simulated; in reality the data is accessed via the network and resides on the disk of another system. This process of defining the simulated drive (the "network drive") is called "Mapping a Network Drive". When selecting a system on the network, you CANNOT map a full system (the menu item in the Context menu is grayed out). You can ONLY map a disk resource, so first display the available resources by "opening" the network computer (double-click on it). Then select the network disk resource, call up the Context menu (right mouse click) and select "Map Network Drive". (On a Microsoft network, you can ONLY map to the shared resource itself, NOT to any directory defined inside the resource. That is only possible when mapping to Novell servers.) Select the drive letter which should be used for this connection. If you would like the system to re-establish this mapping automatically on restarting the system, make sure to check "Reconnect at logon". The path is listed as "\\P120_home\cserve". This method of naming is called UNC: Universal Naming Convention. The first part, starting with the two backslashes, is the name of the computer on the network, followed by the name of the resource (the shared disk area). The network drive can now be accessed like the local hard disk, but it is marked with a special icon. (Icons compared: the disk of the system "sharing" the data versus the "network drive" on the system accessing the data.)
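The same share can also be reached from a script rather than through the Network Neighborhood dialogs. The sketch below assumes a Windows machine with access rights on the share; the UNC name \\P120_home\cserve comes from the example above, while the drive letter and the listing function are hypothetical illustrations. The `net use` command is the command-line counterpart of "Map Network Drive".

```python
import subprocess
from pathlib import Path

UNC_SHARE = r"\\P120_home\cserve"   # share name taken from the example above

def map_network_drive(drive: str = "Z:", share: str = UNC_SHARE) -> None:
    """Map a shared resource to a drive letter, persisting across logons."""
    # 'net use' performs the same mapping as the "Map Network Drive" dialog.
    subprocess.run(["net", "use", drive, share, "/persistent:yes"], check=True)

def list_share(share: str = UNC_SHARE) -> list[str]:
    """Access the share directly through its UNC path, without mapping it."""
    return [entry.name for entry in Path(share).iterdir()]

if __name__ == "__main__":
    map_network_drive()
    print(list_share())
```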
Wartime Poetry: Working With Similes
Grades: 3–5
Lesson Plan Type: Standard Lesson
Estimated Time: Four 45-minute sessions
London, United Kingdom

MATERIALS AND TECHNOLOGY
- Rose Blanche by Roberto Innocenti (Harcourt, 1996)
- A World War II Anthology selected by Wendy Body (Pearson Schools, 1999)
- War Boy by Michael Foreman (Trafalgar Square, 2000)
- What They Don't Tell You About the Blitz by Bob Fowke (Hodder & Stoughton Children's Division, 2002)
- Plain paper and paints/colored pencils
- Thesauri (at least one for every two students)
- Photograph of evacuated children
- Evacuated child giving a first-hand account of the experience
- Wartime Evacuation of Children: Making Labels

1. This series of lessons can start or enhance a unit of study about World War II. Students do not need to have any prior knowledge about World War II to participate in the lessons.
2. Read background material on the subject matter to prepare for the lesson. To summarize the events related to the evacuation of British children during World War II: during September 1939, nearly 800,000 children were evacuated from the cities to the countryside, as the British government feared German bomb attacks and wanted to keep children, pregnant mothers, and elderly or disabled citizens safe. Children and parents often did not know to what location they would be evacuated, and were usually just asked to report to their school with a suitcase packed. When arriving in the country, children would be assigned to the homes of members of the local community. Although some children were treated badly (e.g., beaten, overworked, or underfed), many children enjoyed their evacuation and returned to the countryside to live after the war was over.
3. Consult other informational books about World War II to help you further prepare for the lesson.
4. Create a transparency of the photograph of evacuated children for use with an overhead projector, or enlarge the photograph to use during whole-class work. Also make handout-size copies of the photograph to use during individual student work.
5. Make one copy of the Shared Poem Structure sheet and the Independent Poem Structure sheet for each student to use during Session 3.
Using nanorobots to build circuits is so last year’s fantasy. The latest technology of tomorrow uses viruses to construct everything from transistors to tiny batteries to solar cells. Researchers at MIT published a paper in the Proceedings of the National Academy of Sciences this week describing how they’ve successfully created tiny batteries, just four- to eight-millionths of a meter in diameter, using specially designed viruses. The hope is that these tiny batteries — which could be used in embedded medical sensors — and eventually other electronics, could be printed easily and cheaply onto surfaces and woven into fabrics. Viruses are very orderly little critters and in high concentrations organize themselves into patterns, without high heat, toxic solvents or expensive equipment. By tweaking their DNA, the viruses, called M13, can be programmed to bind to inorganic materials, like metals and semiconductors. So far, the researchers have been able to use viruses to assemble the anode and electrolyte, two of the three main components of a battery. Eventually the work could also be used to make tiny electronics made up of silicon-covered viruses. Gross and cool. “It’s not really analogous to anything that’s done now,” lead researcher Angela Belcher told MIT Technology Review late last year when describing her work. “It’s about giving totally new kinds of functionalities to fibers.” The idea of thread-like electronics has gotten the interest of the Army, which has been funding Belcher’s research through the Army Research Office Institute of Collaborative Biotechnologies and the Army Research Office Institute of Soldier Nanotechnologies. Theoretically, these fibers could be woven into soldiers’ uniforms allowing clothing to sense biological or chemical agents as well as collect and store energy from the sun to power any number of devices. The team still has to create a cathode for the battery, but so far, so good; the researchers note that when a platinum cathode is attached, “the resulting electrode arrays exhibit full electrochemical functionality.” Belcher has also successfully created fibers that glow under UV light, tiny cobalt oxide wires and has even developed viruses that bind to gold. We’re still waiting to see some viral bling.
Three-minute illustrated videos that will teach children and youth about the importance of physical and health literacy in a fun and engaging way. The first video in the three-part series is intended for children aged 4-9. In this video, children will be introduced to the concepts of physical and health literacy. A general overview of both terms will be introduced and defined in order to help children and youth build the ground level knowledge needed to lead a healthy and active life every day. The second video is for children and youth aged 8-13 and explores the concepts as related to the world around them. Children and youth will gain a deeper understanding of these concepts which lead to a healthy and active life every day. Part 3, Applying Physical and Health Literacy is recommended for youth aged 12-18. It allows youth the opportunities to apply the concepts of physical and health literacy in their own world.
Artificial brain (or artificial mind) is a term commonly used in the media to describe research that aims to develop software and hardware with cognitive abilities similar to those of the animal or human brain. Research investigating "artificial brains" plays three important roles in science: - An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience. - A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being. - A serious long term project to create machines exhibiting behavior comparable to those of animals with complex central nervous system such as mammals and most particularly humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI. An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, Motor Neurone and Parkinson's Disease. The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus' critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence". The third objective is generally called artificial general intelligence by researchers. However Kurzweil prefers the more memorable term Strong AI. In his book The Singularity is Near he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on grounds of computer power continuing an exponential growth trend) that this could be done by 2025. Henry Markram, director of the Blue Brain project (which is attempting brain emulation), made a similar claim (2020) at the Oxford TED conference in 2009. Approaches to brain simulation Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm which has strong similarities to the QM wave equation. In November 2008, IBM received a US$4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons "in the computer" by placing thirty million synapses in their proper three-dimensional position. Some proponents of strong AI speculate that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050. 
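Timelines like these rest on extrapolating an exponential growth trend in computing power, the basis Kurzweil cites for his estimate. The sketch below only shows the arithmetic of such an extrapolation; the starting capability, the brain-scale target and the doubling period are placeholder assumptions, not figures from this article, so the printed year is an illustration of how the argument works rather than a prediction.

```python
import math

# Placeholder assumptions, for illustration only.
CURRENT_FLOPS = 1e15          # assumed capability of a large machine at the reference year
BRAIN_SCALE_FLOPS = 1e18      # assumed compute needed for whole-brain emulation
DOUBLING_PERIOD_YEARS = 1.5   # assumed doubling time of available compute
START_YEAR = 2010             # assumed reference year for CURRENT_FLOPS

def years_until(target: float, current: float, doubling: float) -> float:
    """Years needed for exponential growth to carry `current` up to `target`."""
    return doubling * math.log2(target / current)

if __name__ == "__main__":
    wait = years_until(BRAIN_SCALE_FLOPS, CURRENT_FLOPS, DOUBLING_PERIOD_YEARS)
    print(f"~{wait:.0f} years, i.e. around {START_YEAR + wait:.0f}")
```

With these particular placeholders the extrapolation lands in the mid-2020s, which shows why the conclusion is so sensitive to the assumed target and doubling time.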
While Blue Brain is able to represent complex neural connections on the large scale, the project does not achieve the link between brain activity and behaviors executed by the brain. In 2012, project "Spaun" (Semantic Pointer Architecture Unified Network) attempted to model the human brain through large-scale representations of neural connections that generate complex behaviors in addition to mapping. Spaun's design recreates elements of human brain anatomy. The model, consisting of approximately 2.5 million neurons, includes features of the visual and motor cortices, GABAergic and dopaminergic connections, the ventral tegmental area (VTA), substantia nigra, and others. The design allows for several functions in response to eight tasks, using visual inputs of typed or handwritten characters and outputs carried out by a mechanical arm. Spaun's functions include copying a drawing, recognizing images, and counting.

There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic. In particular, brains (including the human brain) and cognition are not currently well understood, and the scale of computation required is unknown. Another near-term limitation is that all current approaches for brain simulation require orders of magnitude more power than a human brain. The human brain consumes about 20 W of power, whereas current supercomputers may use as much as 1 MW, roughly 50,000 times more.

Artificial brain thought experiment

Some critics of brain simulation believe that it is simpler to create general intelligent action directly without imitating nature. Some commentators have used the analogy that early attempts to construct flying machines modeled them after birds, but that modern aircraft do not look like birds. A computational argument holds that, if we had a formal definition of general AI, the corresponding program could be found by enumerating all possible programs and then testing each of them to see whether it matches the definition. No appropriate definition currently exists.

See also
- Artificial intelligence
- Intelligent system
- Artificial Intelligence System
- Artificial life
- Biological neural networks
- Blue Brain
- Cognitive architecture
- Human Brain Project
- Multi-agent system
- Simulated reality

Notes and references
- Artificial brain '10 years away', BBC News, 2009
- Aston University's news report about the project
- The critics:
- Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417–457, doi:10.1017/S0140525X00005756, retrieved May 13, 2009
- Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 0-06-090613-8
- Penrose, Roger (1989), The Emperor's New Mind: Concerning Computers, Minds, and The Laws of Physics, Oxford University Press, ISBN 0-14-014534-6
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423, retrieved 2008-08-18
- Voss, Peter (2006), Goertzel, Ben; Pennachin, Cassio, eds., "Essentials of General Intelligence", Artificial General Intelligence, Springer, ISBN 3-540-23733-X
- see Artificial Intelligence System, CAM brain machine and cat brain for examples
- Jung, Sung Young, "A Topographical Development Method of Neural Networks for Artificial Brain Evolution", Artificial Life, The MIT Press, vol. 11, issue 3, summer 2005, pp.
293-316 - Blue Brain in BBC News - (English) Jaap Bloem, Menno van Doorn, Sander Duivestein, Me the media: rise of the conversation society, VINT research Institute of Sogeti, 2009, p.273. - , A Large-Scale Model of the Functioning Brain. - Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. doi:10.1016/j.artint.2007.10.011. Retrieved April 1, 2009. - Fox and Hayes quoted in Nilsson, Nils (1998), Artificial Intelligence: A New Synthesis, p581 Morgan Kaufmann Publishers, ISBN 978-1-55860-467-4
Activities for Teaching Statistics and Research Methods: For individuals in the U.S. & U.S. territories A solid understanding of statistics and research methods is essential for all psychologists, and these topics are core components of both Advanced Placement and undergraduate psychology curricula. Yet, these courses are often difficult for many students, some of whom may burn out and even give up on psychology altogether. To address this problem, Stowell and Addison offer a comprehensive collection of original, pedagogically sound, classroom-tested activities that engage students, teach principles, and inspire teachers. Each chapter contains classroom exercises in a particular topic area that are practical and easily implemented and that help students learn core principles in ways that are fun and interesting. Whether illustrating basic concepts like variance and standard deviation, correlation, p values and effect sizes, or teaching strategies for identifying confounding factors, recognizing bias, constructing surveys, and understanding the ethics of behavioral research, each chapter offers clear and compelling tools for engaging students on conceptual and practical levels. A handy table organizes activities by topic area, class level, and length of time to complete, so instructors can quickly pinpoint the content they need. Jeffrey R. Stowell and William E. Addison - Reducing Anxiety in the Statistics Classroom - How to Lie With the Y-Axis Thomas E. Heinzen - Summarizing Data Using Measures of Central Tendency: A Group Activity Thomson J. Ling - How Fast Is Your Internet? An Activity for Teaching Variance and Standard Deviation Bonnie A. Green and Jeffrey R. Stowell - Getting Dicey: Thinking About Normal Distributions and Descriptive Statistics Robert McEntarffer and Maria Vita - A Low-Anxiety Introduction to the Standard Normal Distribution and Measures of Relative Standing Laura Brandt and William E. Addison - Using the Heat Hypothesis to Explore the Statistical Methods of Correlation and Regression George Y. Bizer - Active Learning for Understanding Sampling Distributions David S. Kreiner - Testing Students for ESP: Demonstrating the Role of Probability in Hypothesis Testing William E. Addison - Using a TV Game Show Format to Demonstrate Confidence Intervals - Real-Life Application of Type I and Type II Decision Errors Bernard C. Beins - Factors That Influence Statistical Power Michael J. Tagler and Christopher L. Thomas - An Interdisciplinary Activity for p Values, Effect Sizes, and the Law of Small Numbers Andrew N. Christopher II. Research Methods - An Activity for Teaching the Scientific Method R. Eric Landrum - Linking Identification of Independent and Dependent Variables to the Goals of Science Mary E. Kite - Everything Is Awesome: Building Operational Definitions With Play-Doh and LEGOs Stephanie E. Afful and Karen Wilson - A Demonstration of Random Assignment That Is Guaranteed to Work (95% of the Time) Thomas P. Pusateri - Identifying Confounding Factors in Psychology Research - Demonstrating Experimenter and Participant Bias Caridad F. Brito - The Most Unethical Researcher: An Activity for Demonstrating Research Ethics in Psychology - The Ethics of Behavioral Research Using Animals: A Classroom Exercise (PDF, 909KB) - Demonstrating Interobserver Reliability in Naturalistic Settings Janie H. Wilson and Shauna W. Joye - Using a Classic Model of Stress to Teach Survey Construction and Analysis Joseph A. 
Wister - Using Childhood Memories to Demonstrate Principles of Qualitative Research Steven A. Meyers - Using a Peer-Writing Workshop to Help Students Learn American Psychological Association Style Dana S. Dunn About the Editors Jeffrey R. Stowell, PhD, earned his doctoral degree in psychobiology from The Ohio State University. He is a professor and the assistant chair of the psychology department at Eastern Illinois University (EIU), where he teaches courses in biological psychology, sensation and perception, learning, and introductory psychology. He has published articles in Teaching of Psychology, Scholarship of Teaching and Learning in Psychology, and other teaching-related journals on the use of technology in teaching. He presents regularly at regional psychology conferences and mentors undergraduate and graduate student research. He participated in the 2008 National Conference on Undergraduate Education in Psychology: A Blueprint for the Future of the Discipline. He received the Society for Teaching of Psychology's Early Career Teaching Award and served as the society's Internet editor for 8 years. At EIU, Dr. Stowell has earned the honors of Professor Laureate and Distinguished Honors Faculty Award; he is a three-time winner of the Psi Chi Chapter Faculty of the Year Award and has received the College of Sciences' highest awards in three different areas (teaching, research, and service). William E. Addison, PhD, is a professor in the psychology department at EIU, where he has regularly taught courses in statistics and research methods. He is a Fellow and former president of APA Division 2 (Society for the Teaching of Psychology), and he is a charter Fellow of the Midwestern Psychological Association. He has served as a consulting editor and reviewer for the journal Teaching of Psychology, as a member of the GRE Psychology Test Development Committee, and as a faculty consultant for the annual Advanced Placement Exam in Psychology. He participated in the 1999 National Forum on Psychology Partnerships and the 2008 National Conference on Undergraduate Education in Psychology: A Blueprint for the Future of the Discipline. Dr. Addison presents regularly at annual meetings of APA and the Midwestern Psychological Association and at the Midwest Institute for Students and Teachers of Psychology. His publications include teaching-oriented articles in Teaching of Psychology and the College Student Journal. He has received a number of awards for his teaching, including the EIU Distinguished Faculty Award and the EIU Distinguished Honors Faculty Award. This highly useful guide includes contributions from a talented roster of experienced teachers from both high school and college ranks. Anyone who teaches statistics or research methods will find something they can immediately put to use in their class. Introductory psychology instructors who are looking for effective ways to communicate these often-confusing concepts will find a wealth of ideas in this volume. —Suzanne C. Baker, PhD Department of Psychology, James Madison University, Harrisonburg, VA You'll increase your active-learning repertoire with these engaging activities from an all-star group of statistics and research methods instructors. Each classroom-ready activity has step-by-step instructions and suggestions for assessment, too! —Susan A. Nolan, PhD Professor, Department of Psychology, Seton Hall University, South Orange, NJ
“Coyote” is a corruption of the ancient Aztec word for ‘barking dog’, coyotl. The curious and playful animal has a strong role in Native American myths as the Trickster who brings fire to the people, but often gets mixed up in pranks that go awry. Coyotes are found in nearly every type of habitat in California from deserts to mountains, and from wild lands to urban areas. It is estimated that there are between 250,000-750,000 coyotes living throughout California. They are extremely adaptable and can survive on whatever food is available. While they typically forage for birds, mice, insects, fruits, and rabbits, they have been known to pick through garbage cans and attack pets. - Physical Description: The coyote (Canis latrans) is a member of the dog family and is native to California. It resembles a small shepherd dog in size and shape, with large erect ears, a long snout and bushy black-tipped tail. Their long hair varies in color with geography and season from pale gray to rich reddish-brown. - Behavior: Coyotes typically mate for life and breed once a year, producing a litter of about six pups. Dens are located in steep banks, rock crevices or underbrush. Sometimes a coyote will take over holes dug by badgers or foxes. Both parents share in rearing the young, and non-breeding yearlings often stay with adult parents to help care for younger pups. Coyotes are active day or night, but are most often seen at dusk and dawn. - Lifespan: Coyotes in the wild live to be about 14 years. How to distinguish between a coyote and a gray fox - Coyotes have a bushy tail with a dark tip. Their light colored front legs have dark vertical lines between knee and paw. They weigh between 20 and 40 pounds. - Gray foxes have a black line of fur from the base of the spine to the tip of the tail. They weigh between 8 and 15 pounds. Never feed a coyote Deliberately feeding a coyote puts people and pets at risk. Reducing conflicts between coyotes and humans depends in part on coyotes retaining their naturally cautious attitude around humans. Coyotes can lose their natural fear of people and become bold, and even aggressive. Feeding coyotes is a cite-able offense on District lands. If you see a person feeding coyotes, please notify our office with a description of the persons and vehicles involved. What to do if you encounter a coyote Although coyotes are curious by nature, and may be interested in watching human activity, they are also shy and will not approach unless they have become too accustomed to humans. - Keep your distance and do not approach the animal. - Keep your pets on leash. - If a coyote approaches you or your pet, throw rocks or sticks to frighten it. - Use a loud authoritative voice to frighten the animal. Report threats or attacks immediately If you see a coyote behaving aggressively toward people or pets, contact a ranger or the District office as soon as possible. The threat to public safety will be assessed and appropriate action will be taken.
When first removed from a well, natural gas is laden with water vapor. If the gas were transmitted for use with large quantities of this water vapor, degradation of the infrastructure would occur in the form of corrosion as well as blockage from natural gas hydrates. To ensure the longevity of the infrastructure and the quality of the final product, natural gas processors run the raw product from the wellhead through various separation steps, one of them being dehydration. Much of the free liquid water contained in a natural gas stream can be removed using a series of pipeline drips installed near the wellhead and at other strategic locations along gathering lines. However, dehydration methods must be applied to remove water vapor held in solution with the gas. These methods may use the principle of absorption (the taking up of a substance throughout the entire volume of a dehydrating material) or adsorption (the collecting of a substance adhered to a surface). Absorption is used in glycol dehydration, which capitalizes on a chemical called glycol that has a natural affinity for bonding with water. Natural gas moves up through a tower, contacting a downward-dripping glycol solution that absorbs water from the gas and collects at the base of the tower for removal, while the dried gas rises out of the tower. Solid-desiccant dehydration, by contrast, uses the principle of adsorption, with gas passing from the top to the bottom of a tower containing desiccant material. Water molecules in the gas adhere to the desiccant as the natural gas passes through, and the gas ultimately leaves dry at the bottom. Finally, aside from absorption and adsorption, natural gas can also be expanded through a valve or plug using the Joule–Thomson effect, allowing partial condensation of water vapor, which can then be removed by a demister. Any of these three methods may be employed in natural gas production, depending on a producer's energy demands and efficiency needs.
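To give a feel for the quantities involved in the glycol route, the sketch below works through a simple water mass balance. Every number in it is an assumed, illustrative figure; the outlet specification and the circulation ratio are common rules of thumb in the dehydration literature, not values from this article.

```python
# Illustrative water-removal mass balance for a TEG (triethylene glycol) contactor.
# All numbers below are assumed, round figures for demonstration only.

GAS_RATE_MMSCFD = 10.0          # assumed gas throughput, million std. cubic feet per day
WATER_IN_LB_PER_MMSCF = 60.0    # assumed saturated water content entering the tower
WATER_OUT_LB_PER_MMSCF = 7.0    # assumed outlet spec (a typical US pipeline target)
GLYCOL_RATIO_GAL_PER_LB = 3.0   # assumed circulation rule of thumb, gal TEG per lb water

def water_removed_per_day(rate_mmscfd: float, w_in: float, w_out: float) -> float:
    """Pounds of water the glycol must absorb each day."""
    return rate_mmscfd * (w_in - w_out)

def glycol_circulation_gpd(water_lb_per_day: float, ratio: float) -> float:
    """Gallons of lean glycol to circulate per day for that duty."""
    return water_lb_per_day * ratio

if __name__ == "__main__":
    duty = water_removed_per_day(GAS_RATE_MMSCFD, WATER_IN_LB_PER_MMSCF, WATER_OUT_LB_PER_MMSCF)
    print(f"Water removed: {duty:.0f} lb/day")
    print(f"Glycol circulation: {glycol_circulation_gpd(duty, GLYCOL_RATIO_GAL_PER_LB):.0f} gal/day")
```

With these placeholder figures, a 10 MMscf/day stream needs roughly 530 lb of water removed per day and on the order of 1,600 gallons of glycol circulated.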
The Vienna Convention on Diplomatic Relations of 1961 is an international treaty, accepted by 189 states till date, that defines a guideline for diplomatic relations between numerous independent countries. It specifies the privileges of a diplomatic mission that enable the diplomats to perform their diplomatic functions without the fear of any legal trouble or harassment from the host country. This forms the legal basis for the diplomatic immunity. The articles of the Vienna Convention are considered as a cornerstone for modern international relations. According to the Vienna Convention on Diplomatic Relations of 1961 (VCDR), diplomatic immunity is granted to only certain individuals depending on their rank and the amount of immunity they require to carry out their official duties without legal harassment from the host nation. Diplomatic immunity allows foreign representatives to work in host countries without fully understanding the customs of that country. However, diplomats are expected to respect and follow the laws and regulations of their host countries. Article 31 of the Convention exempts diplomatic agents from the civil and criminal jurisdictions of host states, except for cases in which a diplomatic agent (1) is involved in a dispute over personal real property, (2) has an action involving private estate matters or (3) is in a dispute arising from commercial or professional business outside the scope of official functions. The Vienna Convention on Consular Relations of 1963 (VCCR) is an international treaty that defines the guidelines for consular relations between the independent countries. A consul normally operates out of an embassy in a different country, and performs two functions: (1) protecting the interests of the country and the countrymen of the consul, and (2) furthering the commercial and economic relations between the two countries. While a consul is not a diplomat, they work out of the same premises, and under this treaty they are afforded most of the same privileges, including a variation of diplomatic immunity called consular immunity. This treaty has been accepted by 176 countries. Consular immunity offers protections similar to the diplomatic immunity, but these protections are not as extensive, given the functional differences between consular and diplomatic officers. For example, consular officers are not given absolute immunity from a host country’s criminal jurisdiction (they may be tried for certain local crimes upon action by a local court) and are immune from local jurisdiction only in cases directly relating to their consular functions.
Genes are instructions for how the cells of the body should function and grow. When changes called mutations happen in certain genes, cells can grow out of control and become cancerous. Some gene changes are inherited, meaning they are passed from parent to child. When cancer develops because of an inherited gene change, it is hereditary cancer. People with an inherited gene mutation may have a much higher risk of developing a cancer than most people. These people are said to have a hereditary cancer syndrome. Cancer is a common disease, but less than 10% of cases are hereditary. Family members may wonder if they could be at higher risk of cancer, especially if several family members have been diagnosed. If you are concerned about the history of cancer for you and/or your family, you may benefit from the Hereditary Cancer Clinic. Some of the features seen in families appropriate for referral include: For breast cancer families: For colon and rectal cancer families: Other tumors/cancers that may warrant referral: If you are concerned about your risk of cancer, genetic counseling can help you understand your risks and options. By understanding your personal cancer risk, you may be able to: Genetic testing is available for several genes known to cause hereditary cancer syndromes. Genetic testing can: Genetic testing is usually done with a small blood sample. Sometimes, a mouth rinse sample can be used instead of blood. If possible, genetic testing should be started in a person who has had cancer. This is because a family member with cancer has the highest chance of having a gene mutation. If all family members that have had cancer have passed away, genetic testing can be started in a person without cancer first.
Coronary microvascular disease (MVD) affects the heart's smallest coronary arteries. Coronary MVD is a new concept. It's different from traditional coronary artery disease (CAD). In CAD, plaque builds up in the heart's large arteries. This buildup can lead to blockages that limit or prevent oxygen-rich blood from reaching the heart muscle. Coronary MVD occurs in the heart's tiny arteries when: Plaque forms in the arteries. Plaque is made up of fat, cholesterol, calcium, and other substances found in the blood. It narrows the coronary arteries and reduces blood flow to the heart muscle. As a result, the heart doesn't get the oxygen it needs. This is known as ischemic heart disease, or heart disease. In coronary MVD, plaque can scatter, spread out evenly, or build up into blockages in the tiny coronary arteries. Arteries spasm (tighten). Spasms of the small coronary arteries also can prevent enough oxygen-rich blood from moving through the arteries. This too can cause ischemic heart disease. Walls of the arteries are damaged or diseased. Changes in the arteries' cells and the surrounding muscle tissues may, over time, damage the arteries' walls. Signs & Symptoms Pressure or squeezing in the chest Shortness of breath Arm or shoulder pain Fatigue (tiredness) and lack of energy Death rates from heart disease have dropped quite a bit in the last 30 years. This is due to improved treatments for conditions such as blocked coronary arteries, heart attack, and heart failure. However, death rates haven't improved as much in women as they have in men. Diagnosing coronary MVD has been a challenge for doctors. Most of the research on heart disease has been done on men. Standard tests used to diagnose heart disease have been useful in finding blockages in the coronary arteries. However, these same tests used in women with symptoms of heart disease—such as chest pain—often show that they have "clear" arteries. Standard tests look for blockages that affect blood flow in the large coronary arteries. However, these tests can't detect plaque that forms, scatters, or builds up in the smallest coronary arteries. The standard tests also can't detect when the arteries spasm (tighten) or when the walls of the arteries are damaged or diseased. As a result, women are often thought to be at low risk for heart disease. Coronary MVD is thought to affect up to 3 million women with heart disease in the United States. Most of the information known about coronary MVD comes from the National Heart, Lung, and Blood Institute's WISE study (Women's Ischemia Syndrome Evaluation). The WISE study began in 1996. Its goal was to learn more about how heart disease develops in women. The role of hormones in heart disease has been studied, as well as how to improve the diagnosis of coronary MVD. Further studies are under way to learn more about the disease, how to treat it, and its outcomes. Source: "Heart and Vascular Diseases." Diseases and Conditions Index. The National Heart, Lung, and Blood Institute. The National Institutes of Health.
Hundreds of students in Sacramento and Yolo counties have come down with norovirus, a highly contagious gastrointestinal illness, in recent days. Yolo County health officials report 2,091 cases of norovirus, while Sacramento County has seen suspected cases in six school districts. Here’s some more information on the virus: 1. Noroviruses are the most common cause of acute stomach and intestinal infections in the United States, reports the U.S. Food and Drug Administration. It’s also sometimes called stomach flu, viral gastroenteritis or the winter vomiting bug. The U.S. reports 19 million to 21 million cases a year. Humans are the only hosts of the virus. The virus was formerly known as the Norwalk virus, because the first known outbreak took place at an elementary school in Norwalk, Ohio, according to norovirus.com. Scientists identified the virus in 1972 from stool samples stored after the outbreak. It was officially renamed norovirus by the International Committee of Taxanomy of Viruses in 2002. 2. It’s extremely contagious. The Centers for Disease Control cautions that norovirus can be transmitted by infected people, contaminated food or water, or just by touching contaminated surfaces. People with norovirus are most contagious during the illness and for a few days afterward, and the virus can remain in stool for up to two weeks after the illness. The virus can survive temperature extremes, too. Also, catching norovirus doesn’t help you fight it off later, in part because there are many different types of noroviruses – catching one doesn’t protect you from the others. 3. Diarrhea, cramps and vomiting usually start within 12 to 48 hours of exposure to the virus, says the Mayo Clinic. Norovirus symptoms normally last one to three days, and most people recover without treatment. But infants, older adults and people with chronic illnesses may require medical attention for dehydration. Since it’s a virus, antibiotics aren’t any help, and there are no antiviral drugs for noroviruses. The Mayo Clinic advises that people with norovirus take special care to replace fluids lost by vomiting or diarrhea to prevent dehydration. Drinks like Pedialyte are good for young children, while sports drinks and broths are suggested for adults. But sugary drinks, like sodas and fruit juices, can make diarrhea worse, while alcohol or caffeinated drinks can speed dehydration. Soup, bananas, yogurt and broiled vegetables are good choices to help reduce vomiting. 4. Good hygiene is the key to avoiding noroviruses, suggests WebMD.com. Wash your hands frequently with soap and water, particularly after using the bathroom and before preparing food. Alcohol-based cleaners are not as effective. The site also advises carefully throwing away contaminated items, such as dirty diapers. Wash raw fruits and vegetables, and cook oysters and other shellfish thoroughly. Clean and disinfect surfaces with a mixture of detergent and chlorine bleach after someone’s sick, WebMD says. And if you have norovirus, don’t prepare food for at least two to three days after you feel better. 5. Cruise ships, nursing homes, daycare centers and, of course, schools are common breeding grounds for norovirus – anywhere large numbers of people are packed in close quarters, basically, reports the CDC. Outbreaks on cruise ships frequently make the news – 90 passengers reportedly fell ill on the Sun Princess in Australia in February – and there are countless travel websites dedicated to tracking cruise lines with the worst records for the illness.
[image:12740 align=left hspace=1] [image:13513 align=left hspace=1]The leaves and fruit of Taxus baccata, Linné. COMMON NAME: Yew tree. ILLUSTRATION: Bentley and Trimen, Med. Plants, 253. Botanical Source and History.—The common yew is a large evergreen tree, native throughout Europe, and very commonly planted as an ornamental tree in church yards, cemeteries, etc., especially in England. It is of very slow growth, and has a hard, close-grained wood that is much used by cabinet-makers. The branches are long and horizontal, the lowest ones proceeding from the trunk at only a few feet from the ground. The bark is of a dark-brown color, and does not split longitudinally, like the bark of most trees, but scales off in thin plates. The flowers appear in early spring, and the male and female are borne on separate trees. The male are in axillary aments, and consist, each, of a thickened axis, having anther-bearing bracts on the upper half, surrounded at the base by imbricated scales. The female flowers are, each, a single sessile, naked, ovule, without either style or stigma, surrounded at the base by a circular disk, which becomes fleshy in the fruit. The fruit is a single oval seed, covered, excepting at the apex, by a thick, fleshy, red cup, and resembles an abortive acorn surrounded by its cup. The cups of the yew fruit are sweetish, though unpleasant to the taste, often, however, being eaten by children. The leaves are very numerous, linear, about 1/2 inch long, and 1/12 inch wide, sharp, dark-green above, lighter beneath, alternate, and are curved outwardly and upwardly. An American variety (var. Canadensis, Gray) of the Taxus baccata, is a small shrub, found growing in shady, moist places in Canada and the northern parts of the United States. History.—Excepting the pulp of the fruit, all parts of the yew tree are poisonous. Pliny, Dioscorides, and other ancient writers, mention the poisonous properties of the leaves and seed, and it has been recorded that wine, preserved in casks made of its wood, has occasioned the death of those who drank it. Strabon states that the Gauls used the juice of the leaves as a poison for their arrows. More recent observations have confirmed the statements as to its toxic character and we frequently read of animals and birds that have been poisoned by the leaves and berries. It is likewise stated that the exhalation emanating from the tree may occasion vertigo, lethargy, and a kind of drunkenness, and that even death may ensue to those who carelessly sleep beneath its branches. Prof. Redwood read a paper before the Pharmaceutical Society of Great Britain (1877), citing the fatal result from drinking a decoction of the leaves. Chemical Composition.—The poisonous principle of yew resides in an alkaloid taxine, first isolated from the leaves by H. Lucas (Archiv der Pharm., 1856, Vol. CXXXV, p. 145), and later from the leaves and fruits by Marmé (Amer. Jour. Pharm., 1876, p. 353), and by A. Hilger and Fr. Brande (ibid., 1890, p. 297). According to Marmé, it is a white crystalline powder, little soluble in water, soluble in ether, alcohol, chloroform, etc.; insoluble in petroleum-ether. It melts at 80° C. (176° F.), and produces a red color with concentrated sulphuric acid. Amato and Caparelli (ibid., 1881, p. 56) obtained from the leaves a probably similar alkaloid, volatile oil, and a non-nitrogenous, crystalline substance, which they called milouin. 
Action, Medical Uses, and Dosage.—The symptoms occasioned by the juice or extract of the leaves, vary according to the quantity that has been taken. In large doses, there is pallor, vertigo, spasm, and symptoms of collapse, with gastric and enteric irritation, enfeebled and deranged cardiac action, coma, and death. Not unfrequently, very large doses are followed by a prompt diminution of the vital forces, or by positive syncope, in either case speedily terminating in death, without any of the severe symptoms of irritation being manifested. Post-mortem investigations have found some indications of inflammation of the stomach and bowels, of active renal congestion, and of diminished heart-power, with a greater or lesser deprivation of the coagulable quality of the blood. It has been used in the attempt to produce abortion, but always fruitlessly, and in some instances has proved a fatal experiment. In cases of poisoning by the ingestion of this article, it should be promptly removed from the stomach by emesis, after which milk or other bland drinks may be administered, at the same time sustaining the strength, if necessary, by the prudent exhibition of stimulants. The red berries are not injurious. Percy, in former times, prepared a jelly and a syrup from them, which he used in cough, chronic bronchitis, and to relieve the pain in calculous nephritis. The leaves have likewise been recommended in certain maladies, which it is unnecessary to name, as we have no satisfactory evidence of their efficacy. The leaves have been given in doses of 1 to 5 grains; or the infusion may be given corresponding in dosage. King's American Dispensatory, 1898, was written by Harvey Wickes Felter, M.D., and John Uri Lloyd, Phr. M., Ph. D.
In what ways does Wright portray Bigger’s day-to-day existence as a prison, even before his arrest and trial? The crowded, rat-infested apartment Bigger shares with his brother, sister, and mother is, in a sense, a prison cell. Bigger is imprisoned in the urban ghetto by racist rental policies. Likewise, his own consciousness is a prison, as a sense of failure, inadequacy, and unrelenting fear pervades his entire life. Racist white society, Bigger’s mother, and even Bigger himself all believe that he is destined to meet a bad end. Bigger’s relentless conviction that he faces an inevitably disastrous fate indicates his feeling that he has absolutely no control over his life. Society permits him access only to menial jobs, poor housing, and little or no opportunity for education—on the whole, he has no choice but a substandard life. Describe the real estate practices that were applied to black families in Chicago’s South Side in the 1930s. With these practices in mind, why is Mr. Dalton—an avowed philanthropist toward blacks—a hypocrite? Although ample housing was available in most sections of 1930s Chicago, white property owners imposed agreements that enabled blacks to rent apartments only on the city’s South Side. These limitations created an artificial housing shortage, allowing landlords to increase rents on the South Side despite the deplorable conditions of many of their buildings. Mr. Dalton has earned much of his fortune from such racist rental policies, which he considers customary and does not even think to consider unethical. In this manner, Mr. Dalton contributes significantly toward the social disparities that terrify, oppress, and enrage blacks such as Bigger. Given his actions, Mr. Dalton’s charitable donations to the black community are merely meaningless tokens—condescending and patronizing gestures. Mr. Dalton expresses his so-called benevolence by giving Bigger a menial job, but, as Max says, Dalton does so only in an attempt to erase the guilt he feels for his role in oppressing blacks in the first place. Describe Jan and Mary’s attitude toward race relations. In what ways does their more subtle racism resemble the more overt prejudice of other whites? To Jan and Mary, breaking social taboos is a thrill. They derive an odd satisfaction from eating in a black restaurant with Bigger. They clearly want to experience “blackness,” yet come nowhere near an understanding of the frustration and hopelessness that constitutes blackness for Bigger. Mary and Jan are, in effect, merely entertaining themselves by slumming in the ghetto with Bigger. Like the Daltons, then, they are blind to the social reality of blackness. Moreover, Mary uses the same language that racists such as Peggy use to describe black Americans. When talking to Bigger, Mary uses the phrase “your people” and refers to black Americans as “they” and “them.” Her language implies that there is an alien, foreign aura to black Americans, that they are somehow a separate, essentially different class of human beings. Mary’s remark about “our country” is also telling, as it indicates that she assigns ownership of America to white people in her mind. In the act of claiming that “[t]hey’re human,” Mary still maintains a psychological division between white and black Americans. Although she briefly seems to recognize Bigger’s feelings, she still has not reached the point at which she can say, “We’re human.” How does Bigger’s desperate flight from the police symbolize his existence as a whole? 
The manhunt, which is conducted entirely by whites, literally corrals Bigger in an shrinking cross-section of Chicago. “Whiteness” pursues Bigger through an intense building-by-building search of the entire South Side. Like a cornered rat, Bigger desperately moves within this ever shrinking square, trying to evade the “whiteness” that has, in a sense, cornered and corralled him his entire life. This “whiteness” has always pursued Bigger, policed him, and stood ready to punish him if he crosses the “line.” The snowstorm that rages during the manhunt is a literal symbol of this metaphorical “whiteness,” surrounding and crippling Bigger by preventing him from leaving the city. Like the waves of white men searching for him, the snow falls relentlessly around Bigger, locking him in place. Literally and symbolically, “whiteness” falls on Bigger’s head with the power of a natural disaster. As Wright portrays it, how does the psychology of racial prejudice contribute to Bigger’s transformation into a murderer and a criminal? In killing a white woman, Bigger does what the white American majority has always feared he might do. The whites are convinced that he raped Mary first—a violation of the ultimate social taboo that separates black men from white women. In an effort to keep Bigger from doing what they have feared, the empowered majority of whites have narrowed the boundaries of his existence and kept him in constant fear. Instead of ensuring his submission, however, this confinement has caused Bigger to respond to his overwhelming fear of “whiteness” by doing exactly what the empowered majority always feared he would do. In response to his crime, the white-dominated press and authorities incite mob hatred against him. They portray Bigger as bestial, inhuman rapist and killer of white women. This viciously racist portrayal of Bigger—and the white mob fury it engenders—gives the whites a justification to terrorize all of the South Side in an attempt to frighten the entire black community. In this chain of events, Wright depicts the irrational logic of racism, effectively a vicious cycle that reproduces itself over and over again. Is Bigger’s trial a fair one? In Wright’s portrayal, how does racism affect the American judicial process? What role does the media play in determining popular conceptions of justice? Bigger’s trial is unfair from the start, and it is clear that the proceedings are merely a spectacle. Bigger’s guilt and punishment are decided before his trial ever begins, perhaps even before he is arrested. The newspapers do not refer to him as the suspect or the accused, but rather as the “Negro Rapist Murderer.” There is no question that Bigger will be sentenced to death. Nonetheless, the public still feels the need to go through the motions of justice. The public may desire to build a wall of hysteria surrounding Bigger in order to justify its racist stereotypes, yet it also attempts to deny its racism by creating the illusion of equal treatment under the law. As Max argues later, there is a component of guilt in this hateful hysteria, as it represents an attempt on the part of the empowered majority to deny its responsibility in Bigger’s crimes. The illusion of equality under the law disguises the economic inequality that has condemned Bigger to a hopeless, impoverished urban ghetto and a series of menial low-wage jobs. Edward Robertson, an editor of the Jackson Daily Star, states that keeping the black population in constant fear ensures its submission. 
However, as Bigger’s life demonstrates, this constant fear actually causes violence. In this sense, the empowered majority sows the seeds of minority violence in the very act of trying to quell it.

1. Describe the psychological and behavioral change that overcomes Bigger during the interview with Mr. Dalton. Why does he change in the presence of Mr. Dalton? In what way is it significant that Bigger goes to the movies before going to the Daltons’?

2. What are some of the real historical events that occur or are mirrored in Native Son? How does Wright weave these events into his fictional narrative, and how does this technique affect the novel as a whole?

3. What role does imagery of vision and sight play in Native Son? Think especially of Mrs. Dalton’s blindness and Bigger’s murder of Mary.

4. How does popular culture serve as a form of indoctrination throughout Native Son?
Originally posted by Sinter Klaas The Sahara has not always been a desert you know. There is evidence that shows it was a green heaven during man's existence. This postulates that there is no single "cradle", but several independent developments of civilization, of which the Near Eastern Neolithic was the first. The climate of modern Antarctica is extreme. Located over the South Pole and in total darkness for six months of the year, the continent is covered by glacial ice to depths in excess of 3 km in places. Yet this has not always been the case. 50 Ma ago, even though Antarctica was in more or less the same position over the pole, the climate was much more temperate – there were no glaciers and the continent was covered with lush vegetation and forests. So how did this extreme change come about? The modern climate of Antarctica depends upon its complete isolation from the rest of the planet as a consequence of the Antarctic Circumpolar Current that completely encircles Antarctica and gives rise to the stormy region of the Southern Ocean known as the roaring forties. The onset of this current is related to the opening of seaways between obstructing continents. Antarctica and South America were once joined together as part of Gondwana and were the last parts of this original supercontinent to separate. By reconstructing continental positions from magnetic and other features of the sea floor in this region, geologists have shown that the Drake Passage opened in three phases between 50 Ma and 20 Ma, as illustrated in Figure 32. At 50 Ma there was possibly a shallow seaway between Antarctica and South America, but both continents were moving together. At 34 Ma the seaway was still narrow, but differential movement between the Antarctic and South American Plates created a deeper channel between the two continents that began to allow deep ocean water to circulate around the continent. Finally, at 20 Ma there was a major shift in local plate boundaries that allowed the rapid development of a deep-water channel between the two continental masses. Today, the Antarctic Circumpolar Current is the strongest deep ocean current and its strength is responsible for the ‘icehouse’ climate that grips the planet. The opening of the Drake Passage had both a local and a global effect, initially cooling the climate of Antarctica from temperate to cold and ultimately playing an important role in the change from global ‘greenhouse’ conditions 50 Ma ago to the global ‘icehouse’ of today. Sea-temperatures vary from about −2 to 10 °C (28 to 50 °F). Cyclonic storms travel eastward around the continent and frequently become intense because of the temperature-contrast between ice and open ocean. The ocean-area from about latitude 40 south to the Antarctic Circle has the strongest average winds found anywhere on Earth. In winter the ocean freezes outward to 65 degrees south latitude in the Pacific sector and 55 degrees south latitude in the Atlantic sector, lowering surface temperatures well below 0 degrees Celsius; at some coastal points intense persistent drainage winds from the interior keep the shoreline ice-free throughout the winter. 
huge icebergs with drafts up to several hundred meters; smaller bergs and iceberg fragments; sea ice (generally 0.5 to 1 meter thick) with sometimes dynamic short-term variations and with large annual and interannual variations; deep continental shelf floored by glacial deposits varying widely over short distances; high winds and large waves much of the year; ship icing, especially May-October; most of region is remote from sources of search and rescue There is also evidence this was the case 250.000 years ago and according to the sources I found even as recent as 2000 to 4000 years ago. How do we know what would cause the Ocean Conveyor Belt to stop? The Younger Dryas period of colder temperatures, which occurred about 12 thousand years ago, occurred because of the shutdown of the Ocean Conveyor Belt (Joyce 2007). In An Inconvenient Truth, Al Gore states that the ocean overturning ceased because of the melted glaciers in modern-day Canada and the United States (now the Great Lakes), which spilled over into the North Atlantic, which is a critical region of overturning The analyses of ocean-floor sediments deposited recently by melting Antarctic ice sheets reveal that these ice sheets are only about 2,000 years old. Ocean-floor sediments drilled from Antarctic regions recently covered by ice shelves suggest that those shelves were only 2,000 years old. This finding could compel scientists to reassess whether the current destruction of polar ice is due primarily to human-caused global warming. Their analysis suggests that from about 2,000 to 5,000 years ago, much of the channel was seasonally open water. Sediment cores have been collected from the ocean bottom in an area, just north of Larsen B, exposed by ice-shelf disintegrations in the early 1990s. The cores indicate that the shelf there was only about 2,000 years old (SN: 9/8/01, p. 150: www.sciencenews.org...). However, a preliminary analysis of sediment layers in cores taken in December 2001 from seafloor near Larsen B suggests that this shelf has been in place for more than 12,000 years, says Eugene W. Domack, a marine geologist at Hamilton College in Clinton, N.Y. He'll present those results at an international workshop on Antarctic climate variability hosted by his college next week. It appears that before and after this period, the channel remained closed. The period when the channel was open coincides with a period of local warming supported by data gathered from land-based studies of lake sediments and ancient, abandoned penguin rookeries. With the return of colder conditions about 1900 years ago, the Prince Gustav ice shelf reformed until its recent retreat.
|KEY WORD : architecture / storehouses| |A form of storehouse which provided a fireproof place to keep valuables which were otherwise always at risk in highly combustible Japanese timber houses. It had a wooden structural frame externally covered by mud daub *tsuchikabe 土壁 about 20-30cm thick and usually finished with a smooth coat of white or black plaster *shikkui 漆喰. Internally the timber framework was exposed. Generally, in 17c urban centers, dozou had tiled roofs, but in districts where tile was difficult to obtain, the plasterwork covered the top of the structure to make it fireproof. It also had a second, independent outer structure roofed with thatch or shingles erected over it to protect the plaster from rain. Parts of walls exposed to rain were also tiled to protect plaster in certain districts. Openings were kept to a minimum, and thickly plastered swing doors and shutters with interlocking flanged reveals *kannonbiraki tobira 観音開扉, were used both for entrances and for the few small windows. Miniature roofs *kiriyokebisashi 霧除廂 were often provided to protect window and door surrounds from the elements. To prevent fire, the doors were closed and the joints sealed with mud that was kept in buckets nearby, rendering the dozou fireproof. Dozou became a symbol of wealth not merely because they were used to store valuables, but also because they were expensive and time consuming to construct. The low thermal conductivity of the thick walls kept the internal temperature of the dozou remarkably stable, also making it a suitable location for the manufacture of food products such as rice wine, sake 酒, and soya sauce, shouyu 醤油. It thus became an important form of proto-industrial architecture. The appearance of the word dozou in a donation letter of 1294 by the priest Shinsei 信聖, and on an illustration in the 14th roll of the early 14c "Kasuga Avatar Miracle Scroll" Kasuga Gongen genki-e 春日権現験記絵 confirm the existence of this building type by the end of the Kamakura period. Although dozou were used for storage purposes in the residences of the ruling classes, they are particularly associated with merchants, shounin 商人. Dozou appear within the plots of the larger merchant establishments in the early 17c screens *rakuchuu rakugai-zu byoubu 洛中洛外図屏風, but were still a relative rarity. They spread from major urban centers to provincial towns and farming villages during the Edo period and thereafter. Most dozou were 2 story, but 3 story dozou were a status symbol among wealthy merchants from the 17c and an example is preserved in the old quarter of Tondabayashi 富田林, near Osaka.| (C)2001 Japanese Architecture and Art Net Users System. No reproduction or republication without written permission.
Like all cetaceans, humpbacks are endangered. Their status varies somewhere between critical and imminent extinction depending on the list and who is making the prediction. The total world population may be as high as 50,000, which would represent a few percent of their numbers before whaling.

Humpbacks are so named because of the arch of their backs under their dorsal fins as they sound. The first part of the scientific name, “Megaptera,” describes their “large winged” appearance as they fly through the water (or even through the air) on white flippers over a third of the length of their bodies. The second part, “novaeangliae,” means “from New England” and refers to whaling history and where the ships that commonly killed them were based. Despite the geographic specificity implied by their scientific name, they are one of the most widely dispersed animals, ranging through almost all of the world’s seas. Where they feed in the summer defines three major populations: one in the southern hemisphere and Antarctic, another in the north Atlantic, and the third in the north Pacific. Each group migrates to the equatorial tropics to mate and bear their young, and all three send pods to Costa Rica.

They trap schools of small shrimp using a hunting technique unique in the sea. Bubble net feeding is an incredible display of cooperation in which one or two whales swim circles around the target while releasing a continuous stream of air from their blowholes. They come up from below, encircling the fish or krill in a net of bubbles and trapping them between walls of air and the surface. The other members of the pod crash through the concentrated food source, feeding at will.

In addition to giving them their name, humpbacks’ physical features are used to identify them. They’re usually dark gray except for their flippers, undersides, and tail flukes, and in photographs these lighter markings are used by scientists to identify and track the whales over their migrations. Humpbacks have warty protuberances called tubercles around their heads and on the edges of their flippers that aren’t found on other whales, but these are not used for identification because they are more difficult to photograph and categorize than the fins and tail. Southern hemisphere humpbacks are larger and can exceed 18 meters (60 feet) in length and fifty tons; their cousins in the north are rarely longer than 15 meters (50 feet) and usually less than 40 tons.

Humpbacks are the authors of the “whale song” that may travel thousands of miles at frequencies beyond human detection through deep water channeling. The songs are sung by males poised nearly motionless head down in the water. Whales and other cetaceans do not have vocal cords, and the sounds they make are generated by compressed air pushed through their nasal cavities. The twenty-minute songs repeat with minor variations and may communicate challenges, dominance, and overtures to potential mates.
Long before the dinosaurs, hefty herbivores called pareiasaurs ruled the Earth. Now, for the first time, a detailed investigation of all Chinese specimens of these creatures - often described as the 'ugliest fossil reptiles' - has been published by a University of Bristol, UK palaeontologist. Pareiasaurs have been reported from South Africa, Europe (Russia, Scotland, Germany), Asia (China), and South America, but it is not known whether there were distinct groups on each of these continents. In a new study published today in the Zoological Journal of the Linnean Society, Professor Mike Benton of Bristol's School of Earth Sciences shows there are close similarities between Chinese fossils and those found in Russia and South Africa, indicating that the huge herbivores were able to travel around the world despite their lumbering movement. Professor Benton said: "Up to now, six species of pareiasaurs had been described from China, mainly from Permian rocks along the banks of the Yellow River between Shaanxi and Shanxi provinces. I was able to study all of these specimens in museums in Beijing, and then visit the original localities. It seems clear there were three species and these lived over a span of one to two million years." Pareiasaurs were hefty animals, two to three metres long, with massive, barrel-shaped bodies, short, stocky arms and legs, and tiny head with small teeth. Their faces and bodies were covered with bony knobs. It is likely the pareiasaurs lived in damp, lowland areas, feeding on huge amounts of low-nutrition vegetation. No stomach contents or fossilized faeces from pareiasaurs are known to exist, but in Russia, pareiasaurs have been found with evidence they had made wallows in the soft mud probably to cool off or coat themselves in mud to ward off parasites. The new study confirms that the three Chinese pareiasaur species differed from each other in body size and in the shapes of their teeth. Professor Benton added: "My study of the evolution of pareiasaurs shows that the Chinese species are closely related to relatives from Russia and South Africa. Despite their size and probably slow-moving habits, they could walk all over the world. We see the same sequence of two or three forms worldwide, and there is no evidence that China, or any other region, was isolated at that time." Pareiasaurs were the first truly large herbivores on Earth, and yet their tenure was short. As in other parts of the world, the species in China were wiped out as part of the devastation of the end-Permian mass extinction 252 million years ago, when 90 per cent of species were killed by the acid rain and global warming caused by massive volcanic eruptions in Russia. Without forests, landscapes were denuded of soils which washed into the seas. Shock heating of the atmosphere and oceans as a result of the massive release of carbon dioxide and methane also killed much of life. The end-Permian mass extinction killed off the pareiasaurs after they had been on Earth for only 10 million years. 'The Chinese pareiasaurs' by Michael J. Benton in Zoological Journal of the Linnean Society
The Dark Ages

The Dark Ages – Defining the Darkness

The Dark Ages as a term has undergone many evolutions; its definition depends on who is defining it. Indeed, modern historians no longer use the term because of its negative connotation. Generally, the Dark Ages referred to the period of time ushered in by the fall of the Western Roman Empire. This took place in AD 476, when the last Western emperor, Romulus Augustulus, was deposed by Odoacer, a barbarian. Initially, later onlookers applied the term “dark” to this era because of the backward ways and practices that seemed to prevail during this time. Later historians used the term “dark” simply to denote the fact that little was known about this period; there was a paucity of written history. Recent discoveries have apparently altered this perception, as many new facts about this time have been uncovered. The Italian scholar Francesco Petrarca, known as Petrarch, was the first to coin the phrase. He used it to denounce the Latin literature of that time; others expanded on this idea to express frustration with the lack of Latin literature and other cultural achievements during this period. While the term Dark Ages is no longer widely used, the period may best be described as the Early Middle Ages -- the period following the decline of Rome in the Western world, loosely considered to extend from 400 to 1000 AD.

The Dark Ages – The State of the Church

The Dark Ages was a period of religious struggle. Orthodox Christians and Catholics viewed the era from opposing perspectives. Orthodox Christians regarded this time as a period of Catholic corruption; they repudiated the ways of the Catholic Church with its papal doctrines and hierarchy. Orthodox Christians strove to recreate a pure Christianity, void of these “dark” Catholic ways. Catholics did not view this era as “dark.” Catholics viewed this period as a harmonious, productive religious era. The Dark Ages were also the years of vast Muslim conquests. Along with other nomads and horse and camel warriors, the Muslims rode through the fallen empire, wreaking havoc and seeding intellectual and social heresy in their wake. Muslim conquests prevailed until the time of the Crusades. This age-old conflict between Christianity and Islam remains until this day.

The Dark Ages – Faith vs. Enlightenment

The Dark Ages were a tumultuous time. Roving horse-bound invaders charged through the countryside. Religious conflicts arose; Muslims conquered lands. Scarcity of sound literature and cultural achievements marked these years; barbarous practices prevailed. Despite the religious conflicts, the period of the Dark Ages was seen as an age of faith. Men and women sought after God; some through the staid rituals of the Catholic Church, others in more Orthodox forms of worship. Intellectuals view religion in any form as, itself, a type of “darkness.” These thinkers assert that those who followed religious beliefs lied to themselves, creating a false reality. They were dominated by emotions, not fact. Religion was seen as contrary to rationality and reason, thus the move towards enlightenment -- a move away from “darkness.” Science and reason gained ascendancy, progressing steadily during and after the Reformation and Age of Enlightenment. To some extent, the period of the Dark Ages remains obscure to modern onlookers. The tumult of the era, its religious conflict and denigration, and debatable time period all work together to shroud the period in diminished light.
The irony of this is that our 21st Century world is no less dark. It is an individual darkness, which multiplies and grows as those who reject God walk together and dominate politics, education, and society. Our age is characterized by every intellectual and technological advance but our morals have turned backwards. “But mark this: There will be terrible times in the last days. People will be lovers of themselves, lovers of money, boastful, proud, abusive, disobedient to their parents, ungrateful, unholy, without love, unforgiving, slanderous, without self-control, brutal, not lovers of the good, treacherous, rash, conceited, lovers of pleasure rather than lovers of God -- having a form of godliness but denying its power. Have nothing to do with them” (2 Timothy 3:1-5). These are the characteristics of true darkness.
A fun and interactive way to practice digraphs! Students use a pencil and a paper clip to make a homemade spinner. They spin a digraph and write it on the provided lines to make a word. After making the words, students sort them! Are the words they made real? Or are they nonsense? This activity is provided for both beginning and ending digraphs! Also included is a digraph center! The center requires students to help the turkey find its feathers. Students have to look at the digraphs on the feathers and match them to the correct digraph turkey :) I hope you enjoy it!!
...refers to the relationship between living organisms which depend on each other for resources or survival. What happens to one organism will affect what happens to other organisms.

...between members of the same or different species when they both need the same resource to survive. Plants need: space for leaves to absorb sunlight for photosynthesis; roots need soil space to absorb minerals and water. Animals need: space to breed, e.g. a nest; herbivores need plants; carnivores need animals.

Special features or types of behaviour which make an organism suited to its environment. Adaptations develop through evolution and increase chances of survival. Organisms living in extreme habitats need to have very special adaptations to survive...
- Hydrothermal vents - discharge very hot water
- The Antarctic - temperatures are very low
- High altitudes - conditions are very windy and cold - shortage of oxygen

Air pollution consists of hydrocarbons, carbon dioxide, sulphur dioxide and carbon monoxide - released from factories and vehicles. Water pollution consists of sewage (human waste), nitrates (fertilisers) and phosphates (waste water from fields).
- Land use
- Use of raw materials
All these have the effect of changing natural habitats and reducing populations.

Lichen populations are very sensitive to sulphur dioxide air pollution and even low levels can kill them... Few lichens indicate a high concentration of sulphur dioxide in the air, and vice versa.

...keeps ecosystems stable as environmental conditions change. These conditions include biotic factors (living things) and abiotic factors (temperature, humidity, etc...).

Conservation can lead to greater biodiversity by:
- preventing species from becoming extinct
- maintaining variation within a species
- preserving habitats

Reforestation (planting new forests), coppicing (cutting young trees down to encourage the growth of side shoots), replacement planting.

...is now seen as a very important part of sustainable development.

Ozone is found in a thin layer in the Earth's upper atmosphere. It filters out harmful UV radiation which can cause skin cancer. Chlorofluorocarbons have destroyed some of the ozone layer.

Terrestrial - on dry land
Why is seed source important Many British native trees have been growing in the UK for over 8,000 years. During that time they have become well adapted to their local growing conditions, i.e. the climate, altitude and soil fertility. Seed collected from these trees will therefore produce the most suitable progeny for planting back into that locality when compared to using plants from continental Europe for example. For many years, plant specifiers have not asked enough questions regarding the source of the nursery stock they are planting. Much of the stock used to plant native woodlands has come from unsuitable sources outside the UK. Local provenance - What does it mean? Why is it important? The Forestry Commission has divided the UK into four Regions of Provenance; 10, 20, 30 and 40. These are then subdivided into ‘local seed zone provenances’ as shown on the Forestry Commission seed zone map. The local provenance boundaries are mainly based on natural boundaries formed by geological and climatic features of the UK. Maelor Forest Nurseries Ltd has been a key player in promoting the use of British seed sources, and endeavours to supply planting stock from as wide a range of UK provenances as possible each year. We recommend the use of the Forestry Commission's map of Seed Collection Areas (see below) and the associated table of native species to be encouraged in each zone. These are published in Forestry Commission Practice Note no. 8 and are helpful when choosing the right species for the right location. At Maelor, our seed collections are batched according to these areas. Plant specifiers should use the map as a tool for describing to nurserymen the provenance from which they wish to buy stock. The seed zone identity numbers shown on the map are listed in our catalogue against each batch of native plants. There are a number of reasons for using local provenance stock for native woodland creation purposes: Improved survival rates and productivity Trees grown from British seed sources are able to withstand British weather conditions. Generations of trees living and reproducing in one location results in the trees adapting to expect, and thrive on, the local prevailing amount of rain, warmth, light etc. Trees subjected to a differing climate than their genetic origin has programmed them for will probably be less productive than local provenance trees, if indeed they even survive. Protection of Environment and Wildlife If a tree from Italy, France or even the south of England is planted in Scotland it is likely to flush (open buds) much earlier than the existing local trees, as its genetics require a much shorter cold period to convince it that winter is over and spring has begun than the local trees do. Differences like this will have knock on effects, for example the wrong amounts or types of food or shelter may be available at critical times of year. Leaves, flowers or fruits may not develop in time for when local insects or animals require them, or may develop too early and be damaged e.g. by late frosts. Conservation of genetic material Importing of European tree sources has resulted in hybridisation of tree genetics which could result in significant changes to the aesthetics and ecology of our landscape, for example there are very few native black poplar left in the UK - virtually all poplar seeds contain either hybrid or entirely introduced genetics. As the UK’s climate alters the trees that have adapted well to the previous weather patterns may begin to struggle! 
Woodland management approaches are altering; creating productive woodlands for the future will probably be best achieved by planting a carefully considered variety of species and provenances. Effects of climate change on tree planting Global climate change is widely accepted - wetter, warmer winters are predicted in the UK coupled with hotter drier summers, in the near future. Although this pattern hasn’t been clearly seen in recent summers or winters, it is apparent that the UK weather is changing. Concerns over the productivity of British woodlands in the future British climate have led to much debate and consultation regarding the sourcing of planting stock, and research is currently underway regarding this topic. The overriding conclusion seems to be that we cannot predict the future weather, and the only prediction we can make is that we expect more frequent and more extreme weather events. The recurring ideas regarding insurance against a changing climate seem to include: - Plant species mixtures - not all species will be affected to the same extent and some will be much more tolerant to varying conditions than others. The range of tolerance of any given species is important to be aware of, as extreme weather events (e.g. sharp winter frosts / storms or summer heatwaves/ droughts / floods) may end the survival of some species that previously survived in Britain. - Provenance mixtures will provide some insurance, however tree species should be well matched to site conditions, e.g. trees from southern provenances may suffer frost damage in spring if planted on a more northerly site. The actual characteristics of the planting site must be considered, i.e. altitude, climate and even aspect when selecting planting stock. A damp north-westerly facing slope may suit an entirely different provenance or species than a warm dry south-easterly facing slope in a nearby location. Provenance is everything when selecting planting stock, whether it is for native woodland creation or commercial timber production. Plant buyers should be fully aware of the characteristics of their site and their requirements, and request the most suitably sourced stock from nurseries to achieve maximum productivity in the long run. Forestry Commission bulletin 124 An Ecological Site Classification for Forestry in Great Britain (2001), by G. Pyatt, D. Ray and J. Fletcher is applicable to all types of woodland. It matches key site factors including soil classification with the ecological requirements of different tree species / woodland communities, and is a useful forest planning tool for a wide range of management objectives. Where are our trees sourced from? We produce the majority of our broadleaf stock from our own seed collections in all corners of the UK; we are therefore always on the look out for new seed collectors and new sites to collect seed from. If you are interested in collecting seed, or are a landowner or manager with a native woodland or large hedgerow that may be suitable for seed collections, then please contact Kirsty Brown by emailing [email protected] or telephoning 01948 710606. Rates of pay for seed collections / landownership royalties on asking. Our coniferous stock for productive commercial forestry is propagated or raised from the very best genetic material. Seed is obtained from seed orchards or specific stands with desirable characteristics. 
We are keeping up to date with current thinking and staying aware of plans for forestry in the future, and selecting our seed sources accordingly to produce stock that will achieve the maximum possible productivity throughout its lifetime, even in the face of a changing climate. Our seed sourcing extends beyond the UK and we have links with companies across Europe and North America; this network allows our seed to be sourced from the best the world has to offer for a given species.

Seed Testing Station

Maelor Forest Nurseries has been appointed as an official facility to carry out statutory seed testing in Great Britain. This allows us to carry out official tests on seed for sale. View Our Seed Testing Authorisation Certificate (Authorisation extended until December 2016, pending recertification)
More Details About Larks

Larks are passerine birds of the family Alaudidae. All species occur in the Old World, including northern and eastern Australia; only one, the Shore Lark, has spread to North America, where it is called the Horned Lark. Habitats vary widely, but many species live in dry regions.

A group of larks is called an exaltation. A group of eagles is called a convocation. A group of unicorns is called a blessing. A group of frogs is called an army. A group of rhinos is called a crash. A group of kangaroos is called a mob. A group of whales is called a pod. A group of geese is called a gaggle. A group of ravens is called a murder. A group of officers is called a mess.

Other definitions of Lark
lark – meadowlark: North American songbirds having a yellow breast
lark – pipit: a songbird that lives mainly on the ground in open country; has streaky brown plumage
lark – any of numerous predominantly Old World birds noted for their singing

Hope it is useful for you.
Image in the public domain Ellis Island is the symbol of our immigrant nation. For millions of Americans, it was where their immigrant ancestors entered the United States. Immigration built our nation, immigrants peopled it, and their descendants remember their immigrant past with pride. But few Americans know what their ancestors went through. Navigating Ellis Island was the final hurdle immigrants faced before becoming American and starting their next big journey. Listen to learn about the first immigrant to enter the United States through Ellis Island. Story Length: 4:55 Socrative users can import these questions using the following code: SOC-1234 Fact, Question, Response Language Identification Organizer Deeper Meaning Chart Throughout time, the American dairy industry has been in desperate need of workers and this attracts immigrants from all over the world. This story begins in the home of an immigrant family as they start their workday. Listen to learn about the experiences of new immigrants to the United States, from Guatemala, who work on dairy farms in northern New York and Vermont. The United States is a nation of immigrants. European immigrants in the late 1800s populated our nation and were granted citizenship upon entry. The immigration system has changed dramatically since, and America’s borders are no longer open to all. Hostility towards immigrants has led to a crackdown on illegal immigration in various states. Arizona’s “Support Our Law Enforcement and Safe Neighborhood Acts” commonly known as SB 1070 was passed in 2010 and became the strictest anti-immigration measure in recent history. Listen to learn how this law has impacted Arizona and its immigrants. After Japan attacked Pearl Harbor on December 7, 1941, Americans of Japanese descent were taken away to internment camps. The terrible conditions they lived in during internment were only surpassed by the shock and humiliation the people suffered as they saw themselves changed overnight from loyal Americans, often American citizens, to “enemy aliens.” In this audio story you will hear first person accounts from people who lived in the internment camps when they were children. Children born in the U.S. to poor, undocumented immigrants face many problems. The children are American citizens, but their parents are not. Without a passport or proof of residency, those parents can’t apply for benefits for their children, and those children go without food, shelter, and other necessities. Listen to learn about the challenges facing the children of immigrants today. These levels of listening complexity can help teachers choose stories for their students. The levels do not relate to the content of the story, but to the complexity of the vocabulary, sentence structure and language in the audio story. NOTE: Listenwise stories are intended for students in grades 5-12 and for English learners with intermediate language skills or higher. These stories are easier to understand and are a good starting point for everyone. These stories have an average language challenge for students and can be scaffolded for English learners. These stories have challenging vocabulary and complex language structure.
The symbols on an ESI map are color coded and prioritized for clean up by how sensitive they are to oil. An environmental sensitivity index (ESI) map compiles information for coastal shoreline sensitivity, biological resources, and human resources. This information is used to create cleanup strategies before an accident occurs so that authorities are prepared to take action in the event of such a spill. Advance planning reduces the harmful consequences of oil spills and cleanup. ESI maps have many features that make them great tools for spill response teams. The maps use geographic information system techniques in order to combine regional maps with data on biological and human resources in an area, as well as information on sensitive shorelines. The resources are given ranks and color coded based on their sensitivity to oiling. Organizations can use the synthesized data to create efficient and effective cleanup strategies. Researchers in the Office of Response and Restoration work with state, federal, and industrial agencies to create ESI maps. For more information: Environmental Sensitivity Index Mapping (pdf, 1.6Mb)
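To make the ranking-and-color-coding idea concrete, here is a minimal sketch. The rank scale, color categories, and example shoreline segments are hypothetical placeholders invented for illustration (the actual ESI ranks and symbology are defined in NOAA's published guidelines), but the sketch shows how a ranked sensitivity layer can drive cleanup priorities:

```python
# Hypothetical sketch only: assumes a 1-10 sensitivity rank; the colors and
# example segments below are illustrative, not NOAA's official ESI symbology.

def color_category(rank: int) -> str:
    """Map an assumed sensitivity rank to a display category."""
    if 1 <= rank <= 3:
        return "cool color (least sensitive)"
    if 4 <= rank <= 7:
        return "intermediate color"
    if 8 <= rank <= 10:
        return "warm color (most sensitive)"
    raise ValueError(f"rank out of range: {rank}")

segments = [            # (shoreline segment, assumed sensitivity rank)
    ("exposed rocky headland", 1),
    ("gravel beach", 5),
    ("sheltered tidal marsh", 10),
]

# Listing the most sensitive segments first suggests where protection
# and cleanup resources should be directed first.
for name, rank in sorted(segments, key=lambda s: s[1], reverse=True):
    print(f"{name}: rank {rank}, {color_category(rank)}")
```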
Let’s go to the circus! Kids learn to love math when they understand how math can be used in their lives. Using word problems to connect math concepts to real experiences, this circus math printable worksheet will help connect addition to fun, real-life examples. Word problems are important because they: - Inspire higher-order thinking - Connect math concepts to real-life - Encourage creativity When literacy and numeracy skills combine, kids will strengthen multiple skills, while understanding how math can come alive in their own world. Ignite their passion for word problems with this fun circus-themed worksheet!
The idea of the atom — at one time a theory, but now directly observable — is the basic concept that unites all aspects of Chemistry, so this is where we begin. This lesson introduces you to these building-blocks of matter, and explains how they are characterized.

The parallel concepts of the element and the atom constitute the very foundations of chemical science. An element is an actual physical substance that cannot be broken down into a simpler form capable of an independent existence as observable matter. As such, the concept of the element is a macroscopic one that relates to the world that we can observe with our senses. The atom is the microscopic realization of this concept; that is, it is the actual physical particle that is unique to each chemical element. Their very small size has long prevented atoms from being observable by direct means, so their existence was not universally accepted until the late 19th Century. The fact that we still hear mention of the "atomic theory of matter" should not imply that there is now any doubt about the existence of atoms; few theories in the history of science have been as thoroughly validated and are as well understood.

Although the word atom usually refers to a specific kind of particle (an "atom of magnesium", for example), our everyday use of "element" tends to be more general, referring not only to a substance composed of a particular type of atom ("bromine is one of the few elements that are liquids at room temperature"), but also to atoms in a collective sense ("magnesium is one of the elements having two electrons in its outer shell").

The underlying concept of atoms as the basic building blocks of matter has been around for a long time. As early as 600 BCE, the Gujarati (Indian) philosopher Acharya Kanad wrote that "Every object of creation is made of atoms which in turn connect with each other to form molecules". A couple of centuries later, in 460 BCE, the Greek philosopher Democritus reasoned that if you keep breaking a piece of matter into smaller and smaller fragments, there will be some point at which the pieces cannot be made any smaller. He called these "basic matter particles"— in other words, atoms. But this was just philosophy; it would not become science until 1800 when John Dalton showed how the atomic concept followed naturally from the results of quantitative experiments based on weight measurements.

The element is the fundamental unit of chemical identity. The concept of the element is an ancient one which developed in many different civilizations in an attempt to rationalize the variety of the world and to understand the nature of change, such as that which occurs when a piece of wood rots, or is burnt to produce charcoal or ash. Most well known to us are the four elements "earth, air, fire and water" that were popularized by Greek philosophers (principally Empedocles and Aristotle) in the period 500-400 BCE; in Western alchemy these four elements were imagined to combine in various pairs to produce the "qualities" of hot, cold, wetness and dryness. To these, Vedic (Hindu) philosophers of India added space, while the ancient Chinese concept of Wu Xing regarded earth, metal, wood, fire and water as fundamental.
These basic elements were not generally considered to exist as the actual materials we know as "earth", "water", etc., but rather represented the "principles" or essences that these elements contributed to the various kinds of matter we encounter in the world.

Eventually, practical experience (largely connected with the extraction of metals from ores) and the beginnings of scientific experimentation in the 18th Century (most notably by Antoine Lavoisier, 1743-1794) led to our modern concept of the chemical element. "Simplest", in the context of experimentation at the time, was defined in terms of weight; cinnabar (mercuric sulfide) can be broken down into two substances, mercury and sulfur, which themselves cannot be reduced to any lighter forms. Although old Antoine got many of these right, he did manage to include a few things that don't quite fit into our modern idea of what constitutes a chemical element. There are two such mistakes in the top section of his table that you should be able to identify even if your French is less than tip-top— can you find them? Lav's other mis-assignment of the elements in the bottom section was not really his fault. Chalk, magnesia, barytes, alumina and silica are highly stable oxygen-containing compounds; the high temperatures required to break them down could not be achieved in Lavoisier's time. (Magnesia, after all, is what fire brick is made of!) The proper classification of these substances was delayed until further experimentation revealed their true nature.

Ten of the chemical elements have been known since ancient times. Five more were discovered through the 17th Century. Ninety-two elements have been found in nature. Around 25 more have been made artificially, but all of these decay into lighter elements, with some of them disappearing in minutes or even seconds. The present belief is that helium and a few other very light elements were formed within about three minutes of the "big bang", and that the next 23 elements (up through iron) are formed mostly by nuclear fusion processes within stars, in which lighter nuclei combine into successively heavier elements. Elements heavier than iron cannot be formed in this way, and are produced only during the catastrophic collapse of massive stars (supernovae explosions).

Elemental abundances vary quite markedly, and very differently in different bodies in the cosmos. Most of the atoms in the universe still consist of hydrogen, with helium being a distant second. On Earth, oxygen, silicon, and aluminum are most abundant. These profiles serve as useful guides for constructing models for the formation of the earth and other planetary bodies. Plots of elemental abundances in the lithosphere (Earth's crust) and in the universe are usually drawn with a logarithmic vertical axis, which has the effect of greatly reducing the visual impression of the differences between the various elements.

The naming of the elements is too big a subject to cover here in detail, especially since most elements have different names in different languages. The system of element symbols we use today was established by the Swedish chemist Jöns Jacob Berzelius in 1814. Prior to that time, graphic alchemical symbols were used, which were later modified and popularized by John Dalton. Fortunately for English speakers, the symbols of most of the elements serve as mnemonics for their names, but this is not true for the seven metals known from antiquity, whose symbols are derived from their Latin names.
The other exception is tungsten (a name derived from Swedish), whose symbol W reflects the German name, Wolfram, which is more widely used.

Two general organizing principles developed in the 19th Century: one was based on the increasing relative weights (atomic weights) of the elements, yielding a list that begins this way: H He Li Be B C N O F Ne Na Mg Al Si P S Cl Ar K Ca... The other principle took note of the similarities of the properties of the elements, organizing them into groups with similar properties (Döbereiner's "triads", 1829). It was later noted that groups of elements with changing properties tended to repeat themselves within the atomic weight sequence (Chancourtois' "telluric helix" and Newlands' "octaves", 1864, and Meyer, 1869), giving rise to the idea of "periodic" sequences of properties. These concepts were finally integrated into the periodic table published by Mendeleev in 1869, which evolved into the various forms of the periodic table in use today.

Throughout most of history the idea that matter is composed of minute particles had languished as a philosophical abstraction known as atomism, and no clear relation between these "atoms" and the chemical "elements" had been established. This began to change in the early 1800's when the development of balances that permitted reasonably precise measurements of the weight changes associated with chemical reactions ushered in a new and fruitful era of experimental chemistry. This resulted in the recognition of several laws of chemical change that laid the groundwork for the atomic theory of matter: conservation of mass, definite proportions, and multiple proportions. Recall that a "law", in the context of science, is just a relationship, discovered through experimentation, that is sufficiently well established to be regarded as beyond question for most practical purposes. Because it is the nature of scientists to question the "unquestionable", it occasionally happens that exceptions do arise, in which case the law must undergo appropriate modification.

Conservation of mass is usually considered the most fundamental law of nature. It is also a good example of a law that had to be modified; it was known simply as Conservation of Mass until Einstein showed that energy and mass are interchangeable. However, the older term is perfectly acceptable within the field of ordinary chemistry in which energy changes are too small to have a measurable effect on mass relations. Within the context of chemistry, conservation of mass can be thought of as "conservation of atoms". Chemical change just shuffles them around into new arrangements.

Mass conservation had special significance in understanding chemical changes involving gases, which were for some time not always regarded as real matter at all. (Owing to their very small densities, carrying out actual weight measurements on gases is quite difficult to do, and was far beyond the capabilities of the early experimenters.) Thus when magnesium metal is burned in air, the weight of the solid product always exceeds that of the original metal, implying that the process is one in which the metal combines with what might have been thought to be a "weightless" component of the air, which we now know to be oxygen. More importantly, as we will see later, this experimental result tells us something very important about the mass of the oxygen atom relative to that of the magnesium atom.

The law of definite proportions is also known as the law of constant composition.
It states that the proportions by weight of the elements present in any pure substance are always the same. This enables us to generalize the relationship we illustrated above.

Problem Example 1

How many kilograms of metallic magnesium could theoretically be obtained by decomposing 0.400 kg of magnesium oxide into its elements?

Solution: The mass of magnesium oxide is 1.66 times the mass of the magnesium it contains, so the mass fraction of Mg in the oxide is 1/1.66 = 0.602, and 0.400 kg of the oxide contains (0.400 kg) x 0.602 = 0.241 kg of Mg. The fact that we are concerned with the reverse of the reaction cited above is irrelevant.

The laws of definite and of multiple proportions, along with conservation of mass, are known collectively as the laws of chemical composition. The law of multiple proportions states that when two elements form more than one compound, the masses of one element that combine with a fixed mass of the other stand in the ratio of small whole numbers. It's easy to say this, but please make sure that you understand how it works. Nitrogen, for example, forms a very large number of oxidation products.

The idea that matter is composed of tiny "atoms" of some kind had been around for at least 2000 years. Dalton's accomplishment was to identify atoms with actual chemical elements. If Nobel prizes had existed in the early 1800's, the English schoolteacher/meteorologist/chemist John Dalton (1766-1844) would certainly have won one for showing how the experimental information available at that time, as embodied in the laws of chemical change that we have just described, is fully consistent with the hypothesis that atoms are the smallest units of chemical identity. The central points of Dalton's atomic theory (that each element consists of its own kind of atom with a characteristic mass, that atoms are neither created nor destroyed in chemical change, and that compounds contain atoms of different elements combined in small whole-number ratios) provided satisfactory explanations of all the laws of chemical change noted above:

Conservation of mass is really a consequence of "conservation of atoms", which are presumed to be indestructible by chemical means. In chemical reactions, the atoms are simply rearranged, but never destroyed.

Definite proportions follow because, if compounds are made up of definite numbers of atoms, each of which has its own characteristic mass, then the relative mass of each element in a compound must always be the same.

Multiple proportions follow because a given set of elements can usually form two or more compounds in which the numbers of atoms of some of the elements are different. Because these numbers must be integers (you can't have "half" an atom!), the mass of one element combined with a fixed mass of any other elements in any two such compounds can differ only by integer numbers. Thus, for a series of nitrogen-hydrogen compounds, the ratio of each compound's H:N weight ratio to the reference value 0.0714 (which is just 1/14, the weight of hydrogen per unit weight of nitrogen when the atoms are combined one-to-one) always comes out as a ratio of small whole numbers.
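The whole-number behavior required by the law of multiple proportions can be made concrete with a short calculation. The two nitrogen-hydrogen compounds used below (ammonia, NH3, and hydrazine, N2H4) are simply familiar examples chosen here for illustration, and the rounded relative weights H = 1 and N = 14 are the same ones behind the reference value 0.0714 = 1/14:

```python
# Illustrative check of the law of multiple proportions using two familiar
# nitrogen-hydrogen compounds (chosen as examples; rounded atomic weights H = 1, N = 14).

H, N = 1.0, 14.0
reference = H / N            # 0.0714... = weight of H per unit weight of N for a 1:1 atom ratio

compounds = {                # formula -> (atoms of N, atoms of H)
    "NH3 (ammonia)":    (1, 3),
    "N2H4 (hydrazine)": (2, 4),
}

for name, (n_N, n_H) in compounds.items():
    h_per_n = (n_H * H) / (n_N * N)          # weight of H combined with unit weight of N
    print(f"{name}: H:N weight ratio = {h_per_n:.4f}, "
          f"ratio to 0.0714 = {h_per_n / reference:.2f}")

# Output: the final figure is 3.00 for NH3 and 2.00 for N2H4, the small whole
# numbers that the law of multiple proportions requires.
```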
Although Dalton's atomic theory was immediately found to be a useful tool for organizing chemical knowledge, it was some time before it became accepted as a true representation of the world. Thus, as late as 1887, one commentator observed "Atoms are round bits of wood invented by Mr. Dalton." These wooden balls have evolved into computer-generated images derived from the atomic force microscope (AFM), an exquisitely sensitive electromechanical device in which the distance between the tip of a submicroscopic wire probe and the surface directly below it is recorded as the probe moves along a surface to which atoms are adsorbed.

Dalton's atomic theory immediately led to the realization that although atoms are far too small to be studied directly, their relative masses can be estimated by observing the weights of elements that combine to form similar compounds. These weights are sometimes referred to as combining weights. There is one difficulty, however: we need to know the formulas of the compounds we are considering in order to make valid comparisons. For example, we can find the relative masses of two atoms X and Y that combine with oxygen only if we assume that the values of n in the two formulas XOn and YOn are the same. But the very relative masses we are trying to find must be known in order to determine these formulas. The way to work around this was to focus on binary (two-element) compounds that were assumed to have mostly simple atom ratios such as 1:1, 1:2, etc., and to hope that enough 1:1 compounds would be found to provide a starting point for comparing the various pairs of combining weights.

Compounds of oxygen, known as oxides, played an especially important role here, partly because almost all of the chemical elements form compounds with oxygen, and most of them do have very simple formulas. Of these oxygen compounds, the one with hydrogen— ordinary water— had been extensively studied. The first proof that water is composed of hydrogen and oxygen was the discovery, in 1800, that an electric current could decompose water into these elements; in the classic demonstration, the two gases displace the water at the tops of the collection tubes in a 2:1 volume ratio (hydrogen to oxygen). Although the significance of this volume ratio was not recognized at the time, it would later prove the formula we now know for water.

Earlier experiments had given the composition of water as 87.4 percent oxygen and 12.6 percent hydrogen by weight. This means that if the formula of water is assumed to be HO (as it was at the time — incorrectly), then the mass ratio of the two kinds of atoms must be O:H = 87.4/12.6 = 6.9. Later work corrected this figure to 8, but the wrong assumption about the formula of water would remain to plague chemistry for almost fifty years until studies on gas volumes (Avogadro's law) proved that water is H2O.

Dalton fully acknowledged the tentative nature of weight ratios based on assumed simple formulas such as HO for water, but was nevertheless able to compile in 1810 a list of the relative weights of the atoms of some of the elements he investigated by observing weight changes in chemical reactions. Because hydrogen is the lightest element, it was assigned a relative weight of unity. Once the correct chemical formulas of more compounds became known, more precise combining-weight studies eventually led to the relative masses of the atoms we know today as the atomic weights, which we discuss farther on.
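The arithmetic behind these formula-dependent estimates is worth seeing once. The sketch below uses only the numbers quoted above (the early analysis giving 87.4% O and 12.6% H, the corrected O:H mass ratio of 8, and hydrogen assigned a relative weight of 1); the function name is just an illustrative label:

```python
# How the assumed formula of water changes the relative atomic mass deduced for oxygen.
# Data are the figures quoted in the text; hydrogen is assigned a relative weight of 1.

def oxygen_relative_mass(o_to_h_mass_ratio: float, n_H: int, n_O: int) -> float:
    """Relative mass of one O atom, given the O:H mass ratio in water and an assumed formula."""
    # o_to_h_mass_ratio = (n_O * m_O) / (n_H * m_H), with m_H = 1, so m_O = ratio * n_H / n_O
    return o_to_h_mass_ratio * n_H / n_O

early_ratio = 87.4 / 12.6       # ~6.9, from the early (inaccurate) analysis
corrected_ratio = 8.0           # the later, corrected O:H mass ratio

print(oxygen_relative_mass(early_ratio, n_H=1, n_O=1))       # ~6.9  -- HO formula, old data
print(oxygen_relative_mass(corrected_ratio, n_H=1, n_O=1))    # 8.0  -- HO formula, better data
print(oxygen_relative_mass(corrected_ratio, n_H=2, n_O=1))    # 16.0 -- the correct formula, H2O
```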
The precise physical nature of atoms finally emerged from a series of elegant experiments carried out between 1895 and 1915. The most notable of these achievements was Ernest Rutherford's famous 1911 alpha-ray scattering experiment, which established that:

• Almost all of the mass of an atom is contained within a tiny (and therefore extremely dense) nucleus which carries a positive electric charge whose value identifies each element and is known as the atomic number of the element.

• Almost all of the volume of an atom consists of empty space in which electrons, the fundamental carriers of negative electric charge, reside. The extremely small mass of the electron (1/1840 the mass of the hydrogen nucleus) causes it to behave as a quantum particle, which means that its location at any moment cannot be specified; the best we can do is describe its behavior in terms of the probability of its manifesting itself at any point in space.

It is common (but somewhat misleading) to describe the volume of space in which the electrons of an atom have a significant probability of being found as the electron cloud. The latter has no definite outer boundary, so neither does the atom. The radius of an atom must be defined arbitrarily, such as the boundary in which the electron can be found with 95% probability. Atomic radii are typically 30-300 pm.

The nucleus is itself composed of two kinds of particles. Protons are the carriers of positive electric charge in the nucleus; the proton charge is exactly the same as the electron charge, but of opposite sign. This means that in any [electrically neutral] atom, the number of protons in the nucleus (often referred to as the nuclear charge) is balanced by the same number of electrons outside the nucleus. Because the electrons of an atom are in contact with the outside world, it is possible for one or more electrons to be lost, or some new ones to be added. The resulting electrically-charged atom is called an ion.

The other nuclear particle is the neutron. As its name implies, this particle carries no electrical charge. Its mass is almost the same as that of the proton. Most nuclei contain roughly equal numbers of neutrons and protons, so we can say that these two particles together account for almost all the mass of the atom.

What single parameter uniquely characterizes the atom of a given element? It is not the atom's relative mass, as we will see in the section on isotopes below. It is, rather, the number of protons in the nucleus, which we call the atomic number and denote by the symbol Z. Each proton carries an electric charge of +1, so the atomic number also specifies the electric charge of the nucleus. In the neutral atom, the Z protons within the nucleus are balanced by Z electrons outside it.

Atomic numbers were first worked out in 1913 by Henry Moseley, a young member of Rutherford's research group in Manchester. Moseley searched for a measurable property of each element that increases linearly with atomic number. He found this in a class of X-rays emitted by an element when it is bombarded with electrons. The frequencies of these X-rays are unique to each element, and they increase uniformly in successive elements. Moseley found that the square roots of these frequencies give a straight line when plotted against Z; this enabled him to sort the elements in order of increasing atomic number. You can think of the atomic number as a kind of serial number of an element, commencing at 1 for hydrogen and increasing by one for each successive element. The chemical name of the element and its symbol are uniquely tied to the atomic number; thus the symbol "Sr" stands for strontium, whose atoms all have Z = 38.

The mass number is just the sum of the numbers of protons and neutrons in the nucleus. It is sometimes represented by the symbol A, so A = Z + N, in which Z is the atomic number and N is the neutron number.

The term nuclide simply refers to any particular kind of nucleus. For example, a nucleus of atomic number 7 is a nuclide of nitrogen. Any nuclide is characterized by the pair of numbers (Z, A). The element symbol depends on Z alone, so the symbol 26Mg is used to specify the mass-26 nuclide of magnesium, whose name implies Z = 12. A more explicit way of denoting a particular kind of nucleus is to add the atomic number as a subscript.
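The bookkeeping in A = Z + N is easy to express in a few lines of code. This is only an illustrative sketch; the atomic numbers are standard values and the nuclides are the ones mentioned above:

```python
# Minimal nuclide bookkeeping: since A = Z + N, the neutron number is N = A - Z.

ATOMIC_NUMBERS = {"H": 1, "N": 7, "Mg": 12, "Sr": 38}   # Z for a few elements used as examples

def neutron_count(symbol: str, mass_number: int) -> int:
    """Neutrons in the nuclide written (element symbol, mass number A)."""
    Z = ATOMIC_NUMBERS[symbol]
    return mass_number - Z

print(neutron_count("Mg", 26))   # 26Mg: 26 - 12 = 14 neutrons
print(neutron_count("N", 14))    # 14N:  14 - 7  =  7 neutrons
```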
Because it is not always easy to display a subscript directly beneath a superscript, it is not uncommon to use constructions such as 12Mg26, which will often be our practice in this document when it is necessary to show both Z and A explicitly.

Two nuclides having the same atomic number but different mass numbers are known as isotopes. Most elements occur in nature as mixtures of isotopes, but twenty-six of them (beryllium and fluorine among them) are monoisotopic. For example, there are three natural isotopes of magnesium: 24Mg (79% of all Mg atoms), 25Mg (10%), and 26Mg (11%); all three are present in all compounds of magnesium in about these same proportions. Approximately 290 isotopes occur in nature.

The two heavy isotopes of hydrogen are especially important — so much so that they have names and symbols of their own: deuterium (2H, often written D) and tritium (3H, or T). Deuterium accounts for only about 150 out of every one million atoms of hydrogen. Tritium, which is radioactive, is even less abundant; all the tritium on the earth is a by-product of the decay of other radioactive elements.

For historical reasons, the term atomic weight has a special meaning in Chemistry; it does not refer to the actual "weight" of an atom, which would be expressed in grams or kg. Atomic weights, sometimes called relative weights, are more properly known as relative atomic masses, and being ratios, are dimensionless. Please note that although the terms mass and weight have different meanings, the differences between their values are so small as to be insignificant for most practical purposes, so the terms atomic weight and atomic mass can be used interchangeably.

Atoms are of course far too small to be weighed directly; weight measurements can only be made on the massive (but unknown) numbers of atoms that are observed in chemical reactions. The early combining-weight experiments of Dalton and others established that hydrogen is the lightest of the atoms, but the crude nature of the measurements and uncertainties about the formulas of many compounds made it difficult to develop a reliable scale of the relative weights of atoms. Even the most exacting weight measurements we can make today are subject to experimental uncertainties that limit the precision to four significant figures at best.

In the earlier discussion of relative weights of atoms, we explained how Dalton assigned a relative weight of unity to hydrogen, the lightest element, and used combining weights to estimate the relative weights of the others he studied. Later on, when it was recognized that more elements form simple compounds with oxygen, this element was used to define the atomic weight scale. Selecting O = 16 made it possible to retain values very close to those already assigned on the H = 1 scale.

Finally, in 1961, carbon became the defining element of the atomic weight scale. But because, by this time, the existence of isotopes was known, it was decided to base the scale on one particular isotope of carbon, C-12, whose relative mass is defined as exactly 12.000. Because almost 99% of all carbon atoms on the earth consist of 6C12, atomic weights of elements on the current scale are almost identical to those on the older O = 16 scale. Most elements possess more than one stable isotope in proportions that are unique to each particular element. For this reason, atomic weights are really weighted averages of the relative masses of the isotopes of each element as they occur on earth.
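To see how such a weighted average works in practice, here is a small Python sketch using the magnesium abundances quoted above; for simplicity it uses the integer mass numbers rather than the exact isotopic masses, so the result is only approximate.

```python
# Approximate atomic weight of magnesium from the abundances quoted above.
# Integer mass numbers (24, 25, 26) stand in for the exact isotopic masses,
# so the answer is close to, but not exactly, the tabulated value of 24.305.
isotopes = {24: 0.79, 25: 0.10, 26: 0.11}   # mass number -> fractional abundance

atomic_weight = sum(mass * fraction for mass, fraction in isotopes.items())
print(round(atomic_weight, 2))               # 24.32
```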
Atomic weights are the ratios of the average mass of the atoms of an element to one-twelfth of the mass of an atom of 6C12. You can visualize the atomic weight scale as a long line of numbers that runs from 1 to around 280.

For many elements, one particular isotope so dominates the natural mixture that the others have little effect on the average mass. For example, 99.99 percent of hydrogen atoms consist of 1H1, whereas 1H2, the other stable isotope, amounts to only 0.01 percent. Similarly, oxygen is dominated by 8O16 (over 99.7 percent) to the near exclusion of its two other isotopes.

Atomic weights are listed in tables found in every chemistry textbook; you can't do much quantitative chemistry without them! The "standard" values are updated every few years as better data become available. You will notice that the precisions of these atomic weights, as indicated by the number of significant figures, vary considerably.

A major breakthrough in Chemistry occurred in 1913 when J.J. Thomson directed a beam of ionized neon atoms through both a magnetic field and an electrostatic field. Using a photographic plate as a detector, he found that the beam split into two parts, and suggested that these showed the existence of two isotopes of neon, now known to be Ne-20 and Ne-22. This, combined with the earlier finding by Wilhelm Wien that the degree of deflection of a particle in these fields is proportional to the ratio of its electric charge to its mass, opened the way to characterizing these otherwise invisible particles.

Thomson's student F.W. Aston improved the apparatus, developing the first functional mass spectrometer, and he went on to identify 220 of the 287 isotopes found in nature; this won him a Nobel Prize in 1922. His work revealed that the masses of all isotopes are very nearly integers (that is, integer multiples of the masses of the protons and neutrons that make up the nucleus).

Mass spectrometry begins with the injection of a vaporized sample into an ionization chamber, where an electrical discharge causes it to become ionized. An accelerating voltage propels the ions through an electrostatic field that allows only those ions having a fixed velocity to pass between the poles of a magnet. The magnetic field deflects the ions by an amount proportional to their charge-to-mass ratios. The separated ion beams are detected and their relative strengths are analyzed by a computer that displays the resulting mass spectrum. In modern devices, a computer also controls the accelerating voltage and electromagnet current so as to bring successive ion beams into focus on the detector.

Neutral atoms, having no charge, cannot be accelerated along a path so as to form a beam, nor can they be deflected. They can, however, be made to acquire electric charges by directing an electron beam at them, and this was one of the major innovations by Aston that made mass spectrometry practical.

The mass spectrometer has become one of the most widely used laboratory instruments. Mass spectrometry is now mostly used to identify molecules. Ionization usually breaks a molecule up into fragments having different charge-to-mass ratios, each molecule resulting in a unique "fingerprint" of particles whose origin can be deduced by a jigsaw puzzle-like reconstruction.
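The deflection step can be made concrete with a little physics: an ion of mass m, charge q and speed v moving at right angles to a magnetic field B follows an arc of radius r = mv/(qB), so for ions of equal charge and speed the radius grows in proportion to the mass. The sketch below is purely illustrative; the ion speed and field strength are assumed values, not the parameters of any particular instrument.

```python
# Radius of the circular path of an ion in a magnetic field: r = m*v / (q*B).
# The ion speed and field strength below are assumed, illustrative values.
U = 1.66054e-27      # kg per unified atomic mass unit
Q = 1.60218e-19      # charge of a singly ionized atom, in coulombs
v = 1.0e5            # assumed ion speed, m/s
B = 0.50             # assumed magnetic field, tesla

for mass_number in (24, 25, 26):        # the three natural magnesium isotopes
    m = mass_number * U                 # approximate ion mass in kg
    r = m * v / (Q * B)                 # radius of curvature in metres
    print(f"{mass_number}Mg+  r = {r * 100:.2f} cm")
# The heavier isotopes sweep out slightly wider arcs, so the beams separate.
```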
For many years, "mass-spec" had been limited to small molecules, but with the development of novel ways of creating ions from molecules, it has now become a major tool for analyzing materials and large biomolecules, including proteins.

Only 26 of the elements that occur on the Earth exist as a single isotope; these are said to be monoisotopic. The remaining elements consist of mixtures of between two and ten isotopes. The total number of natural isotopes is 339; of these, 254 are stable, while the remainder are radioactive, meaning that they decay into stable isotopes.

Recalling that a given isotope (also known as a nuclide) is composed of protons and neutrons, each having a mass number of unity, it should be apparent that the mass number of a given nuclide will be an integer. It also follows that the relative atomic masses ("atomic weights") of monoisotopic elements will be very close to integers, while those of other elements, being weighted averages, can have any value. When there are only two significantly abundant isotopes, you can estimate their relative abundances from the mass numbers and the average atomic weight; working this out is a favorite exam problem, and a short computational sketch of the idea appears at the end of this section.

The chemical behavior of an element is governed by the number and arrangement of its electrons in relation to its nuclear charge (atomic number). Because these quantities are identical for all isotopes of a given element, the isotopes are generally considered to exhibit identical chemical properties. However, it turns out that the mass differences between different isotopes can give rise to very slight differences in their physical behavior that can, in turn, affect their chemical behavior as well. These isotope effects are most evident in the lighter elements, in which small differences in neutron number lead to proportionally greater differences in atomic mass.

Thus no element is more subject to isotope effects than hydrogen: an atom of "heavy hydrogen" 1H2 (also known as deuterium and often given the symbol D) has twice the mass of an atom of ordinary hydrogen, 1H1. When this isotope is combined with oxygen, the resulting "heavy water" D2O exhibits noticeably different physical and chemical properties: it melts at 3.8° C and boils at 101.4° C. D2O apparently interferes with cell division in organisms; mammals given only heavy water typically die in about a week.

When two or more elements whose atoms contain multiple isotopes are present in a molecule, numerous isotopic modifications become possible. For example, the two stable isotopes of hydrogen and the two most abundant isotopes of oxygen (O16 and O18) give rise to combinations such as H2O18, HDO16, etc., all of which are readily identifiable in the infrared spectra of water vapor. The amount of the rare isotopes of oxygen and hydrogen in water varies enough from place to place that it is now possible to determine the age and source of a particular water sample with some precision. These differences are reflected in the H and O isotopic profiles of organisms. Thus the isotopic analysis of human hair can be a useful tool for crime investigations and anthropology research. See also this Microbe Forensics page, and this general resource on water isotopes.

Isotope effects manifest themselves in both physical and chemical changes. In general, the lighter isotope of an element evaporates or diffuses more readily than the heavier one, and it also reacts slightly faster. These two effects give rise to isotopic fractionation as chemical substances move through the environment — or on a much smaller scale, through the various metabolic processes that occur in organisms.
Over time, this leads to changes in the isotopic signatures of elements in different realms of the world that can reveal information that would otherwise be hidden. The degree of isotopic fractionation depends on the temperature. This fact has been put to practical use to estimate the global average temperature in past times by measuring the degree of enrichment of the heavier isotopes of oxygen in glacial ice cores and also in ancient sediments containing the shells of microorganisms.

Molecules are composed of atoms, so a molecular weight is just the sum of the atomic weights of all the atoms in its formula. Because some solids are not made up of discrete molecules (sodium chloride, NaCl, and silica, SiO2, are common examples), the term formula weight is often used in place of molecular weight. In general, the terms molecular weight and formula weight are interchangeable.

You understand by now that atomic weights are relative weights, based on a scale defined by 6C12 = 12. But what is the absolute weight of an atom, expressed in grams or kilograms? In other words, what actual mass does each unit on the atomic weight scale represent? The answer is 1.66053886 × 10–27 kg. This quantity (whose value you do not need to memorize) is known as the unified atomic mass unit, denoted by the abbreviation u. (Some older texts leave off the "unified" part, and call it the amu.) Why such a hard-to-remember number? Well, that's just how Nature sometimes does things. Fortunately, you don't need to memorize this value, because you can easily calculate its value from Avogadro's number, NA, which you are expected to know: 1 u = 1/NA gram = 1/(1000 NA) kg ... but more about that in the later lesson on moles.

Atoms are composed of protons, neutrons, and electrons, whose properties are shown below:

| particle | mass, g | mass, u | charge | symbol |
|---|---|---|---|---|
| electron | 9.1093897 × 10–28 | 5.48579903 × 10–4 | 1– | -1e0 |
| proton | 1.6726231 × 10–24 | 1.007276470 | 1+ | 1H1+ or 1p1 |
| neutron | 1.6749286 × 10–24 | 1.008664904 | 0 | 0n1 |

Two very important points you should note from this table: the proton and the neutron have almost exactly the same mass, very close to 1 u, while the electron is nearly 2000 times lighter, so virtually all of an atom's mass resides in its nucleus.

The mass of a nucleus is always slightly different from the sum of the masses of the nucleons (protons and neutrons) of which it is composed. The difference, known as the mass defect, is related to the energy associated with the formation of the nucleus through Einstein's famous formula E = mc2. This is the one instance in chemistry in which conservation of mass-energy, rather than of mass alone, must be taken into account. But there is no need for you to be concerned with this in this part of the course.
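Here is the promised sketch of the two-isotope abundance estimate, using chlorine (mass numbers 35 and 37, average atomic weight about 35.45) as a worked case. Because the integer mass numbers stand in for the exact isotopic masses, the percentages come out only approximately.

```python
# Estimate the abundances of an element's two major isotopes from their
# mass numbers and the average atomic weight, by solving
#   x*m1 + (1 - x)*m2 = average   for x, the fraction of the lighter isotope.

def two_isotope_abundances(m1, m2, average):
    x = (m2 - average) / (m2 - m1)
    return x, 1 - x

light, heavy = two_isotope_abundances(35, 37, 35.45)   # chlorine
print(f"Cl-35: {light:.1%}   Cl-37: {heavy:.1%}")
# About 77.5% and 22.5%; using the exact isotopic masses gives roughly 76%/24%.
```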
Wetlands serve as some of the most complex and important ecosystems on Earth, providing habitat for many plants and animals, collecting and filtering water, and reducing the amount of damage from floods and heavy rainfall. Wetlands go by many alternative names, including swamps, marshes and bogs. They vary slightly in physical composition; some wetlands contain primarily trees, while others contain brush and shrubs, but all perform equally important ecological roles.

In the United States, wetlands serve as homes and form critical habitats for nearly 50 percent of the nation's federally endangered plant and animal species. Fish, birds, amphibians and many types of plants require the resources found only in wetlands for survival. In addition to protecting threatened and endangered species, wetlands control flooding by acting as large sponges, filling with excess water when heavy rains fall. A single acre of wetlands might store over 1 million gallons of water, making them ideal natural sources of flood control. The trees and brush in wetlands anchor soil and vegetation to the ground, which ultimately reduces wind and soil erosion.

Wetlands provide humans with recreational opportunities and offer economic benefits as well. They supply ample amounts of shellfish along with cranberries, blueberries and wild rice. Waterfowl hunting within wetland boundaries proves lucrative, and some medicines derive from ingredients found only in wetlands.
Homo neanderthalensis is the first fossil hominid to be identified as such, and the best known, named after remains found in the Neander Valley in western Germany in 1856. Homo neanderthalensis is found throughout Europe, the Near East, and parts of western and central Asia. Neanderthals existed in variant forms during the late Middle and Upper Pleistocene, circa 80,000 to 30,000 years ago. Within western Europe the remains are associated with the Middle Paleolithic Mousterian stone tool industries, which disappeared with the arrival of Cro-Magnon man (early modern humans).

A Neanderthal was a fully erect biped of stocky build, with a long low skull, prominent brow ridges and occiput, and a jutting face. Neanderthals were on average significantly more muscular than H. sapiens and lacked a chin. They were social beings living in small tribes. Like the H. sapiens of their time, Neanderthals also possessed the skull features commonly considered to be required for speech. The popular impression of the Neanderthal as a stooping brute is incorrect and derives from the original poor reconstruction of the Neander Valley remains. It has also been suggested that the first individual found suffered from vitamin D deficiency (rickets) or syphilis.

Neanderthals were long considered not to be direct ancestors of modern humans, but rather an evolutionary dead end. However, a recent genome sequencing study suggests that Neanderthals may have interbred somewhat with H. sapiens, with the effect that modern Eurasian and North African populations, but not sub-Saharan African populations, have between 1 and 4% Neanderthal genes. Whether Neanderthals did interbreed with H. sapiens, or if this was even possible, remains slightly speculative. So far the small amounts of Neanderthal DNA found may not suggest a recent genetic link between the species, according to some scientists, and the trend in current research appears to be continuing in this direction. Other scientists, of course, disagree.

H. neanderthalensis also had an average brain size of 1,450 cc, with a range from 1,125 cc to 1,750 cc. The average modern H. sapiens brain size today is 1,330 cc. Presumably Neanderthals needed this extra brain mass to control their large muscle mass. Many Neanderthal fossils have been recovered, showing massive amounts of wear on the teeth, which to many physical anthropologists suggests that the teeth were regularly used for gripping skins during stretching and working.

Neanderthals may have been about as intelligent as an average human. They were social beings living in small tribes, like the humans of their time, although studies suggest that Neanderthal tribes interacted less with each other than human tribes did.

Tool Use/technology
- Fire - Neanderthals had the ability to use fire, and for food they relied on hunting. They were predominantly carnivorous.
- Stone working - Neanderthals used the soft hammer percussion method for chipping stones. They did have one tool, a curved-bladed hand tool that was exceptionally complex to produce.
- Weapons - It is largely accepted that Neanderthals never invented projectile weapons, but relied on spears with limited range even when facing large animals. However, two recent finds have led to a suggestion that the choice of weapon was not due to lack of technology, but to the effectiveness of the weapon.
Their robust bodies enabled them to use this hunting style, which was considered far too dangerous by humans. A common hunting technique was to drive prey animals off a cliff, or to corner them and finish them off with spears, much as today's Pygmies hunt elephants.
- Clothing - Neanderthals had the technology to lace furs and skins together for clothing. However, there is no evidence they possessed needles.
- Medicine - Evidence of care for the elderly and the sick has been found. They also used medicinal plants.

There is evidence that Neanderthals performed burial rituals, suggesting some sort of religion. However, those rituals were nowhere near as elaborate as those of their modern human counterparts. Interestingly, archaeologists have never found Neanderthal cave drawings, although handcrafted art has been found.

Whether or not Neanderthals had language is debated. It is nearly impossible to prove one way or the other, because there would be no record of language until written languages came into use. Circumstantial support includes: the similarity of the Neanderthal FoxP2 gene to modern humans' FoxP2, the complexity of tools, which would require long training sessions to learn and refine the techniques, the existence of the hyoid bone, as well as an enlarged hypoglossal canal, which supports the nerve that controls the tongue.

Neanderthal man is thought to have developed from Homo erectus or Homo heidelbergensis, though the widespread distribution of intermediate forms hinders any attempt to identify a single geographical locality as the place of development. The fate of the Neanderthals is equally hard to determine. We know they went extinct between 28,000 and 24,000 years ago, but we don't know how or why. Many theories have been presented, of which the most common are:
- Climate changes.
- Competition with H. sapiens over resources.
- Their reliance on meat.
- Assimilation into the larger human population.
- An interspecies war between H. sapiens and Neanderthals.
It is likely that a combination of at least some of these factors led to the extinction of the Neanderthals.

Creationist beliefs
Creationists believe that Neanderthals were fully human beings (along with Homo erectus, Homo heidelbergensis, etc.). When Neanderthal fossils were first discovered in 1856, the creationists of the time replied that they were just ordinary human beings. For a time that was believed to be the case, with Neanderthals classed as a subspecies of H. sapiens, but in recent years they have been classed as their own species. At least one creationist believes the Neanderthals were the Nephilim described in Genesis 6:4. Like many other creationist claims, this relies on serious fudging of the data surrounding carbon dating.

Symbolic behavior
Until fairly recently, anthropologists and archaeologists dismissed any claims that Neanderthals may have possessed the capacity for symbolic behavior. However, Sarah Milliken used evidence that Neanderthals altered their living space, plus certain findings of Neanderthal art in Italy, to question such assumptions. Another strong supporter of claims to Neanderthal symbolism is João Zilhão, whose 2010 study of perforated shells in Iberia suggests a certain degree of modern behavior among Neanderthal groups. Recent findings of Neanderthal sleeping sites have also contributed towards this trend.

See also
- Homo neanderthalensis EvoWiki article
- Neanderthal large eyes 'caused their demise' (This is possible, but not necessarily likely.)
- ↑ Thieme, Hartmut "Lower to Middle Paleolithic Hunting Spears, and Lithic Tool Traditions." Archaeology 13, 2003 - ↑ Neanderthal Myths Neanderthal, Channel 4 - ↑ Richard E. Green et al. "A Draft Sequence of the Neandertal Genome" [sic]. Science journal, May 2010. - ↑ Odd man out: Neanderthals and modern Humans British Archaeology - ↑ Discovery News - ↑ http://www.nytimes.com/2010/05/07/science/07neanderthal.html - ↑ The Dawn of Human Culture Richard Klein, New York, John Wiley and Sons, 2002 - ↑ Not make, mind you. According to A Timetable of Inventions and Discoveries by Kevin Desmond, humans only found how to make fire 14 000 years ago. A more recent source, 1000 Inventions and Discoveries by Roger Bridgman and the Smithsonian, list the secret of making fire as being found a mere 9 000 years ago. - ↑ Living like a NeanderthalNeanderthal, Channel 4 - ↑ Hard bone on stone, rather than stone on stone techniques - ↑ This tool was discovered years ago, but the technique to form it has only been studied in the last 10 years, after the image of Neanderthal as an intelligent being became more accepted.http://www.pbs.org/wgbh/nova/evolution/defy-stereotypes.html - ↑ The first was a spear head, slightly smaller in size than the norm, embedded into a bone far more deeply than any others, and the second is a stash of what appear to be the shafts of projectiles found in Schoningen, Germany - ↑ http://www.pbs.org/wgbh/nova/evolution/defy-stereotypes.html Nova - ↑ Neanderthals on Trial University of Minnesota, Duluth - ↑ Early Man Andy Simmons - ↑ Neanderthal 'face' found in Loire BBC News - ↑ The importance of FoxP2 on language development as been intensively studied for over 15 years - ↑ Neanderthals' 'last rock refuge' BBC News - ↑ Climate Change Pushed Neanderthal Into Extinction In Iberian Peninsula GeneticArchaeology.com - ↑ Did Use of Free Trade Cause Neanderthal Extinction? Newswise - ↑ Meaty appetites may have caused Neanderthal extinction Science & Spirit - ↑ The assimilation model, modern human origins in Europe and the extinction of Neanderthals Fred H. Smith, Ivor Jankovic, Ivor Karavanic - ↑ Odd man out: Neanderthals and modern Humans British Archaelogy - ↑ Answers in Genesis - ↑ http://www.bric.uk.com/rp.no38.html - ↑ The Cryptid Zoo:Neanderthals and Neanderthaloids - ↑ Milliken, S. (2007) Neanderthals, anatomically modern humans, and modern human behaviour in Italy - ↑ Zilhao, J. (2010)Symbolic use of marine shells and mineral pigments by Iberian Neandertals - ↑ http://averyremoteperiodindeed.blogspot.com/2010/01/neanderthal-wooden-structures-sleeping.html - ↑ http://news.discovery.com/archaeology/neanderthal-bedroom-house.html
Blood Test: Immunoglobulin A (IgA) What It Is An IgA test measures the blood level of immunoglobulin A, one of the most common antibodies in the body. Antibodies are proteins made by the immune system to fight bacteria, viruses, and toxins. IgA is found in high concentrations in the body’s mucous membranes, particularly the respiratory passages and gastrointestinal tract, as well as in saliva and tears. IgA also plays a role in allergic reactions. IgA levels also may be high in autoimmune conditions, disorders in which the body mistakenly makes antibodies against healthy tissues. Why It’s Done An IgA test can help doctors diagnose problems with the immune system, intestines, and kidneys. It may be done in kids who have recurrent infections. It’s also used to evaluate autoimmune conditions, such as rheumatoid arthritis, lupus, and celiac disease. Kids born with low levels of IgA — or none at all — are at increased risk of developing an autoimmune condition, infections, asthma, and allergies. Your doctor will tell you if any special preparations are required before this test. On the day of the test, having your child wear a T-shirt or short-sleeved shirt can make things easier for your child and the technician who will be drawing the blood. A health professional will usually draw the blood from a vein. For an infant, the blood may be obtained by puncturing the heel with a small needle (lancet). If the blood is being drawn from a vein, the skin surface is cleaned with antiseptic, and an elastic band (tourniquet) is placed around the upper arm to apply pressure and cause the veins to swell with blood. A needle is inserted into a vein (usually in the arm inside of the elbow or on the back of the hand) and blood is withdrawn and collected in a vial or syringe. After the procedure, the elastic band is removed. Once the blood has been collected, the needle is removed and the area is covered with cotton or a bandage to stop the bleeding. Collecting blood for this test will only take a few minutes. What to Expect Either method (heel or vein withdrawal) of collecting a sample of blood is only temporarily uncomfortable and can feel like a quick pinprick. Afterward, there may be some mild bruising, which should go away in a few days. Getting the Results The blood sample will be processed by a machine. The results are commonly available within a day or two. If results suggest an abnormality, the doctor may perform further tests. This test is considered a safe procedure. However, as with many medical tests, some problems can occur with having blood drawn, like: - fainting or feeling lightheaded - hematoma (blood accumulating under the skin causing a lump or bruise) - pain associated with multiple punctures to locate a vein Helping Your Child Having a blood test is relatively painless. Still, many children are afraid of needles. Explaining the test in terms your child might understand can help ease some of the fear. Allow your child to ask the technician any questions he or she might have. Tell your child to try to relax during the procedure, as tense muscles can make it harder and more painful to draw blood. It also may help for your child to look away when the needle is being inserted into the skin. If You Have Questions If you have questions about the IgA test, speak with your doctor. Reviewed by: Yamini Durani, MD Date reviewed: July 2014
This week's element is xenon, a noble gas (or inert gas) with the symbol Xe and the atomic number 54. Xenon is a clear, colorless, odorless gas that is quite heavy: xenon gas is 4.5 times heavier than Earth's atmosphere (which consists of a mixture of a number of gaseous elements and compounds). This element's mass comes from its nucleus, which contains 54 protons and a varying (but similar) number of neutrons.

Xenon has 17 naturally-occurring isotopes, eight of which are stable, more than for any other element except tin, which has ten. Tiny amounts of two xenon isotopes, xenon-133 and xenon-135, leak from nuclear reprocessing and power plants, but are released in higher amounts after a nuclear explosion or accident, such as the one that occurred at Fukushima. Thus, monitoring xenon's isotopes can help verify compliance with international nuclear test-ban treaties and detect whether rogue nations are testing their own nuclear weapons.

Xenon was discovered in 1898 in England by the Scottish chemist William Ramsay and English chemist Morris Travers. By examining the spectra emitted by the residue left over after evaporating components of liquid air, they realised they'd discovered another new element. Xenon is rare on Earth, making up as little as 1 part in 20 million of Earth's atmosphere.

Xenon is used in a number of practical ways. It is probably most familiar because it is used in photographic flash bulbs, in high pressure short-arc lamps for IMAX film projectors (these lamps are explosive, so they require special care when being replaced), and in high pressure arc lamps to produce "safe" ultraviolet light for tanning beds and to sterilise things, such as benchtops in labs. Xenon is also used as a general anaesthetic and in medical imaging.

But in my opinion, xenon's most interesting use is in ion thrusters for space travel. NASA designed a Xenon Ion Drive engine that works by firing a beam of high-energy ions at very high speeds and with high efficiency. For example, the Deep Space 1 (DS1) probe shoots out ions at 146,000 kilometers per hour (more than 88,000 mph). DS1 is probably most memorable for its flyby encounter with Comet Borrelly in 2001.

Several interesting traits of solid xenon appear when it is subjected to pressures equivalent to 1.3 million times Earth's atmospheric pressure; it turns bright blue and takes on the chemical properties of a metal. Xenon is not toxic, but many of its compounds are, as the result of their strong oxidizing properties.

Waitaminnit, you say. GrrlScientist just said "chemical compounds". Xenon's inert, so what's she on about? It's true: xenon can, under unusual conditions, form compounds with a few other elements. In fact, xenon was the first of the noble gases to form a chemical compound under the guidance and observation of a human. This experiment, originally conceived of by chemist Neil Bartlett and performed in 1962, showed that xenon could be oxidized by another gas, platinum hexafluoride (PtF6), to form a solid yellow compound, xenon hexafluoroplatinate. This seminal experiment forever changed how chemists think about the noble gases and launched a new research field in chemistry.

Here's our favourite chemistry professor telling us more about Neil Bartlett, about this particular experiment, and about xenon in general:

.. .. .. .. .. .. .. .. .. .. .. ..

Video journalist Brady Haran is the man with the camera and the University of Nottingham is the place with the chemists.
You can follow Brady on twitter @periodicvideos and the University of Nottingham on twitter @UniNottingham
You've already met these elements:
Iodine: I, atomic number 53
Tellurium: Te, atomic number 52
Antimony: Sb, atomic number 51
Tin: Sn, atomic number 50
Indium: In, atomic number 49
Cadmium: Cd, atomic number 48
Silver: Ag, atomic number 47
Palladium: Pd, atomic number 46
Rhodium: Rh, atomic number 45
Ruthenium: Ru, atomic number 44
Technetium: Tc, atomic number 43
Molybdenum: Mo, atomic number 42
Niobium: Nb, atomic number 41
Zirconium: Zr, atomic number 40
Yttrium: Y, atomic number 39
Strontium: Sr, atomic number 38
Rubidium: Rb, atomic number 37
Krypton: Kr, atomic number 36
Bromine: Br, atomic number 35
Selenium: Se, atomic number 34
Arsenic: As, atomic number 33
Germanium: Ge, atomic number 32
Gallium: Ga, atomic number 31
Zinc: Zn, atomic number 30
Copper: Cu, atomic number 29
Nickel: Ni, atomic number 28
Cobalt: Co, atomic number 27
Iron: Fe, atomic number 26
Manganese: Mn, atomic number 25
Chromium: Cr, atomic number 24
Vanadium: V, atomic number 23
Titanium: Ti, atomic number 22
Scandium: Sc, atomic number 21
Calcium: Ca, atomic number 20
Potassium: K, atomic number 19
Argon: Ar, atomic number 18
Chlorine: Cl, atomic number 17
Sulfur: S, atomic number 16
Phosphorus: P, atomic number 15
Silicon: Si, atomic number 14
Aluminium: Al, atomic number 13
Magnesium: Mg, atomic number 12
Sodium: Na, atomic number 11
Neon: Ne, atomic number 10
Fluorine: F, atomic number 9
Oxygen: O, atomic number 8
Nitrogen: N, atomic number 7
Carbon: C, atomic number 6
Boron: B, atomic number 5
Beryllium: Be, atomic number 4
Lithium: Li, atomic number 3
Helium: He, atomic number 2
Hydrogen: H, atomic number 1
Here's the Royal Society of Chemistry's interactive Periodic Table of the Elements that is just really really fun to play with!
.. .. .. .. .. .. .. .. .. .. .. ..
Wikijunior:Languages/Mandarin Chinese

What writing system(s) does this language use?
All Sinitic languages and dialects, including Mandarin, are written with hànzì, a picture-like writing system. However, many English-speaking students learn to pronounce Chinese (or "zhōngwén") using a Romanization system called Pinyin. Read on for some examples.

So how do characters work? Does Chinese have an alphabet? No, Chinese does not have an alphabet. It does use radicals, however. Characters in Chinese are basically the "pictures" Chinese people use to read and write, and are written with strokes, or different lines. There are three main types of characters: pictographic, ideographic, and picto-phonetic.

The words "pictographic characters" mean just what they sound like: they are characters that try to represent a thing or action as a picture. For example, the character for sun (日, pronounced like "rurr") was, in ancient times, a circle with a dot in the center, an attempt to draw a sun. However, characters change over time. The modern character is a rectangle divided in half by a horizontal line, and takes 4 strokes to write.

Ideographic characters are used for things that are a bit more difficult to describe than with just a drawing. Love, hate, anger, happiness, goodness—all of these concepts are very hard to capture in a simple picture. Ideographic characters try to address this problem by combining different pictures to convey meaning. For example, the Chinese character for goodness, 好 (“hǎo”), is depicted using two separate characters, a woman (女) and a child (子), combined into one character.

Picto-phonetic characters combine a meaning radical (which hints at the character's meaning) with a sound radical (which hints at the character's pronunciation). "Grass" (草), for example, is written as the character for "early" (早, which sounds similar to the word for "grass" in Chinese) with a radical meaning "grass" (艹) above. The reader can look at the grass radical and guess or recall the meaning while looking at the sound radical and guessing or recalling how it is pronounced.

For people who speak Chinese, radicals are like an alphabet. Not all radicals are related to pronunciation, but radicals always show the meaning of a word. Radicals, like an alphabet, allow people to reuse pieces of Chinese. And since the language has some 10,000-plus characters in use, radicals become very useful for fast memorization of characters. Characters will get some of their meaning and/or sound from a radical (like picto-phonetic characters). You can imagine radicals as a foundation, or base, of the Chinese written language.

Radicals are kind of like the different symbols used in street signs. A "no smoking" sign is a cigarette that is crossed out; a "no dogs allowed" sign has a dog that is crossed out. We can reuse the meaning of the crossed-out symbol to create new signs and guess at the meaning of new signs we have never seen before. In the same way, Chinese characters that have to do with children may have the radical for "child" in them, and characters that have to do with actions or things done with the hand may have the radical for "hand", while the rest of the character hints at pronunciation.

Are there different ways of writing Chinese? Yes, there are two ways of writing Chinese, simplified and traditional. Simplified was invented by the government of mainland China to increase the number of people who can read in China—as you can guess, it's simpler. Traditional is the “old” way of writing Chinese.
It is still used in places like Taiwan, Hong Kong, and Macao. It is also used in ancient texts, paintings, genealogical charts, food packaging, and more! If you want to live in China, it is handy to know both simplified and traditional, because you are likely to run across both, but if you know one system, you can, with some effort, read the other.

How many people speak this language?
Mandarin Chinese is the most commonly spoken mother tongue in the world. In fact, over 800 million people speak dialects of this form of Chinese. That's more than one out of every seven people! The only thing is that most of them live in or near China; Chinese is not very widespread. Still, knowing Chinese will allow you to communicate with many people. There are also many other closely related languages, sometimes called dialects, such as Minnan (including Taiwanese), Wu (including Shanghainese), Hakka and Cantonese.

Where is this language spoken?
Mandarin Chinese is mostly spoken in the People's Republic of China (including Hong Kong and Macau) and Taiwan. It is also one of the four official languages of Singapore (together with English, Malay, and Tamil), and is also spoken among the people of Chinese ancestry in Malaysia.

What is the history of this language?
China has a history of five thousand years of continuous civilization, so it is probable that the Chinese language is at least as old as this. Archeologists have found Chinese pictographic writing on pottery, bones and turtle shells from as long ago as the Shang dynasty, over 3000 years ago. By the time of the Qin dynasty, 2000 years ago, Chinese writing had been standardized and it has changed very little since then.

Because Chinese is not an alphabetic language, it is hard to know exactly what the language sounded like in the distant past. Still, historians and linguists have worked hard to reconstruct what older forms of Chinese might have sounded like. There are some old books written to show people which characters rhymed or sounded alike. We can also look at all the different dialects of Chinese and see which things are similar and which are different, then guess at which words or sounds might be older.

There are now five main spoken dialect groups of Chinese, including Mandarin, Wu, Min, Hakka, and Yue. These are as different from each other as English and German and could be thought of as separate languages - but speakers of all the dialects use the same writing system.

Poets and Ci authors (in order of fame):
Authors (in chronological order of birth):
孔子Confucius (most influential philosopher in Korean, Chinese and Japanese societies)
陸機Lu, Ji (author of "On Literature," a piece of literary criticism)
劉勰Liu, Xie (author of "Carving of a Dragon by a Literary Mind," a piece on literary aesthetics)
陳獨秀Chen, Duxiu (one of the main promoters of the modern written Chinese language)
魯迅Lu, Xun (one of the most influential writers of the 20th century)
胡適Hu, Shi (one of the main promoters of the modern written Chinese language)

What are some basic words in this language that I can learn?
The order: traditional characters, then simplified, then the English translation.
- 你好!- Nǐ hǎo! - "Hello!"
- 再見!/ 再见!- Zàijiàn! - "Good bye!"
- 明天見!/ 明天见!- Míngtiān jiàn! - "See you tomorrow!"
- 我的名字是大卫。/ 我的名字是大卫。- Wǒ de míngzì shì dà wèi. - "My name is David."
- 我叫大卫。/ 我叫大卫。- Wǒ jiào dà wèi. - "I'm called David."
- 很高興認識你。/ 很高兴认识你。- Hěn gāoxìng rènshi nǐ. - Nice to meet you.
- 我可不可以 - Wǒ kěbù kěyǐ - "Can I..."
- 請您/请您 - Qǐng nín - "Please..."
or "Could you..."
- 謝謝/谢谢 - Xièxiè - "Thank you."
- 不客氣/不客气 - Bù kèqi - "You're welcome."
- 對不起/对不起 - Duìbuqǐ - "Sorry." or "Excuse me."
- 真對不起/真对不起 - Zhēn duìbuqǐ - "I'm very sorry."
- 沒關係/没关系 - Méiguānxi - "No problem." or "It doesn't matter." or "Never mind."

Listen to Chinese! Interested in hearing Chinese? Check out xuezhongwen.net; it has great aural coverage of the language along with examples of both Pinyin and simplified/traditional characters.

What is a simple song/poem/story that I can learn in this language?
Dà tóu dà tóu ("Big head, big head")

Lion-Eating Poet in the Stone Den
« Shī shì chī shī zǐ jì » Yǒuyī wèi zhù zài shíshì lǐ de shīrén jiào shī shì, ài chī shīzi, juéxīn yào chī shí zhǐ shīzi.
« Lion-Eating Poet in the Stone Den » In a stone den was a poet called Shi, who was a lion addict, and had resolved to eat ten lions.
10 Heads and 90 Tails
On a table in front of you there are a hundred coins, with 10 bearing heads and the other 90 showing tails. The light is turned off, putting you in pitch blackness, and the challenge is to arrange all of the coins into two groups so that both groups show the same number of heads when the light is turned on again. You are allowed to flip over any coins that you choose to, but you are unable to check their state by touch alone.

The trick to realise here is that the two groups don't have to be the same size. Instead, let's split them into a big pile of 90 coins and a small pile of 10 coins. We don't know how many heads are in either pile; it could be anywhere from 0 to 10. Let's say there are n heads in the large pile. That leaves 10-n heads in the small pile. If we turn over all of the coins in the small pile, we get 10-(10-n) heads, which simplifies to n. Therefore both piles will have the same number of heads as each other, even though we don't know how many (if any) that is.
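The argument can also be checked by brute force. The short Python sketch below simulates many random arrangements of the hundred coins and confirms that, after every coin in the 10-coin pile is flipped, the two piles always show the same number of heads.

```python
# Verify the 10-heads / 90-tails strategy by simulation.
import random

def trial():
    coins = [1] * 10 + [0] * 90            # 1 = heads, 0 = tails
    random.shuffle(coins)                  # in the dark we can't tell which is which
    small, large = coins[:10], coins[10:]  # split off any 10 coins
    small = [1 - c for c in small]         # flip every coin in the small pile
    return sum(small) == sum(large)        # do the piles now show equal heads?

print(all(trial() for _ in range(10_000)))   # prints True
```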
Alan Turing, the so-called ‘father of modern computing’, was born 97 years ago today. Turing, who hid his homosexuality for much of his life, is probably best known for his contribution to breaking German codes at Bletchley Park during the Second World War. Fellow codebreaker Jack Good said of him: ‘I won’t say that what Turing did made us win the war but I daresay we might have lost it without him.’

During Turing’s lifetime being gay was still illegal and officially considered a mental illness. For this reason Turing was forced to keep his homosexuality a secret, until he was publicly outed in 1952. Turing was charged with gross indecency under the Criminal Law Amendment Act (the same crime which saw Oscar Wilde imprisoned in 1895) in 1952. The mathematician was punished with chemical castration, through oestrogen injections aimed at lowering his libido. The ‘treatment’ lasted for a year, and Turing was found dead in 1954, the year after it ended.

Speculations over the cause of death abound. The coroner in the case recorded it as suicide, as Turing had died of cyanide poisoning. However, members of his family claimed that the poisoning could have been due to an accidental mixing of chemicals during an amateur experiment.

As well as his work with Hut 8 in breaking the German Enigma machine, Turing was also a pioneer in the field of algorithms, which formed the basis of modern computing. The Pilot ACE, or Automatic Computing Engine, which was recently named one of The Science Museum’s ‘Century Icons’, was the earliest postwar attempt to create an electronic computer in Britain that built upon Turing’s ‘Universal Turing Machine’. The Turing Machine was an early example of the computer model, with specific ‘inputs’ or algorithms being processed to produce a specific ‘output’. The concept was expanded with the Universal Turing Machine, a theoretical machine, also devised by Turing, that could carry out any task as long as the formula or algorithm was presented as a series of instructions.
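To give a flavour of the idea behind the Turing Machine, here is a minimal sketch of one in Python. The machine, its states and its rule table are invented purely for illustration (they are not Turing's own example): it simply scans a tape of binary digits, inverts each one, and halts at the first blank.

```python
# A minimal Turing-style machine: a rule table mapping (state, symbol)
# to (symbol to write, head movement, next state). This toy machine just
# inverts every binary digit on the tape and then halts.
RULES = {
    ("scan", "0"): ("1", 1, "scan"),
    ("scan", "1"): ("0", 1, "scan"),
    ("scan", " "): (" ", 0, "halt"),   # blank cell: stop
}

def run(tape_string):
    tape = list(tape_string) + [" "]          # the tape, with a blank at the end
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(run("1011001"))   # prints 0100110
```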
The 1947 Partition is undoubtedly one of the most momentous events in history, not only in terms of administrative dismantling, but it also resulted in large scale displacements and the mass killings of people. It was a holocaust and one of the largest forced movements of people. We cannot minimize Partition’s legacy. Wars were fought because of this decision. One land was split into two nations, and then three in 1971. Many of today’s problems in the South Asian subcontinent are rooted in Partition. The importance of recording witness stories of Partition cannot be overstated. These are sources of human experience in a time of dramatic crisis and great loss and instability. The Urdu short story writer Saadat Hasan Manto and many others have written about it. Individual memories of displacement, the experience of violence, and the loss of kith and kin are extremely important to document. It was a holocaust, yet it has not had a great deal of documentation. One would be surprised at the few memorials of Partition that exist today. It is critically important to bring the human dimension into Partition. People, depending on where they lived, their religious background, and numerous other conditions had to respond to this event in different ways. Many people found themselves on the wrong side of the border and were forced to migrate. As we know, the line of Partition was drawn in great haste, and thus suddenly, people of the minority religious community found themselves under attack by members of the majority. Families were divided. The events of Partition have permanently scarred the psyche of the subcontinent. There is no continuing problem that doesn’t hark back to Partition whether it is environmental, political, or communitarian. One has to know that Partition impacted people differently based on region, class, and also the urban/rural divide. Punjab and Bengal also experienced Partition differently. For Punjab, the movement of people was much more sudden and the changes were dramatic. For Bengal, the movement was more staggered, and change manifested over time. All of these experiences are extremely important to record. One is left pondering over the human tragedy that was incurred by this largely arbitrary decision. The impact of that decision, which was made on a whim by the leaders to divide the country, tends to get normalized or underplayed in the histories that are taught to us. These human stories show us that the impact of this decision cannot be overlooked. Through these individual stories, we learn of certain narratives that go against the idea of the usual communal divisions, such as stories of friends and neighbors who helped each other, despite their religious identities. I can’t say if these stories can tell us why Partition happened the way it did, or if they can give us answers, however, it is certain that the more you read the personal narratives, the more it becomes apparent that we need to hear these stories, as they are important to our understanding of Partition. We cannot consider the nationalist histories of post-colonial nation-states as our only sources of understanding. I hope that more narratives are exchanged across the great divide that will bring about empathy, rather than erecting walls of separation. These personal, apolitical narratives have the potential to initiate a healing process, which the subcontinent sorely needs. Dr. Ayesha Jalal is a Pakistani-American historian, and a professor of history at Tufts University.
In Texas, teachers are expected to teach about volcanic activity, as a destructive and constructive force for change, as part of a system, as a catastrophic event, and as a source for igneous rocks. 5.12 B The student is expected to describe processes responsible for the formation of coal, oil, gas, and minerals 6.6 C The student is expected to identify forces that shape features of the Earth including uplifting, movement of water, and volcanic activity 7.5 A The student is expected to describe how systems may reach an equilibrium such as when a volcano erupts 7.14 A The student is expected to describe and predict the impact of different catastrophic events on the Earth 8.14 B The student is expected to analyze how natural or human events may have contributed to the extinction of some species Students need to understand the different states of matter and that rock can exist as a liquid, before they can understand volcanism.
Welcome to Preschool Math, a series that will highlight articles on developing patterning, sorting and number sense with your preschooler! Read the patterning article and the sorting article—both important early math skills!

We all know counting is an important skill to work on with young children. But there is so much more to counting and number concepts than simply teaching your child to say, "1, 2, 3"! Counting means naming 1, 2, 3, and so on, for each object being counted. I've seen many preschoolers who get confused when we count objects and compare quantity because they've simply been taught to say numbers from memory, without connecting them to the counting of actual objects. However, that doesn't mean you shouldn't be counting with your children before they fully understand it! On the contrary–you can make counting and numbers a part of your life every day with meaning. Once you begin counting objects and comparing quantities, your child will quickly catch on and you'll realize that you don't have to have any special toys or training to help your child become a counting pro!

Start by building your home library with some fun counting books. Don't worry about complexity when looking for beginning counting books! One of our favorites at my house is Eric Carle's 1, 2, 3 To the Zoo. This book keeps each page simple, with only the printed number and that number of animals on the page. While reading it, I point to the number, naming it, then point to each animal as we count it.

Here are some simple ways to incorporate counting into everything you do:
- Count, count, count. Count everything you do.
- Count the stairs as you walk up and down them.
- Count fingers as you wash hands.
- Count crackers as you place them in front of your child for a snack.
- Count shoes as you put them on.
The key here is that you're counting items! When your child counts objects, have your child touch each item as it's counted, or even pick it up and place it in a line. This helps a child name only one number for each object and helps with counting in order.

Once your child begins counting and identifying numbers, there are some great ways to make learning fun. An important pre-kindergarten number concept skill is to be able to match a written number with the quantity represented by that number. For example, matching five balls with the number five. I've developed some DIY, very easy to make, games to practice this skill.

In this game, simply cut paper (I use card stock) and write numbers on the pieces. Then lay out groups of items in different quantities. Your child can then match the number on the card with the number of items in each group.

I also used card stock to create this number matching game. I wrote numbers on half the cards and drew a number of dots on the other half. You could easily draw something more fun such as stars or hearts, or even use stickers. I like to keep things simple, though! Play as you would play Memory.

Lastly, hide the cards with the dots around the house (or room). Have your child find them (that's the fun part—you'll be surprised at how much your preschooler will enjoy this!). Then, using small items such as coins, goldfish crackers or Legos, have your child place the items on top of the dots, counting them as they are matched. One-to-one correspondence as your child counts is crucial to counting correctly.
The Middle English word ure first appears in the 13th century, as a loanword from Old French ure, ore, from Latin hōra. Hora, in turn, derives from Greek ὥρα ("season, time of day, hour"). In terms of the Proto-Indo-European language, ὥρα is a cognate of English year and is derived from the Proto-Indo-European word *i̯ēro- ("year, summer"). The ure of Middle English and the Anglo-French houre gradually supplanted the Old English nouns tīd (which survives in Modern English as tide) and stund. Stund is the progenitor of stound, which remains an archaic synonym for hour. Stund is related to the Old High German stunta, from Germanic *stundō ("time, interval, while"). Ancient Egyptians used sundials that "divided a sunlit day into 10 parts plus two "twilight hours" in the morning and evening." The Greek astronomer, Andronicus of Cyrrhus, oversaw the construction of a horologion called the Tower of the Winds in Athens during the first century BCE. This structure tracked a 24-hour day using both sundials and mechanical hour indicators. Ancient Sumer and India also divided days into either one twelfth of the time between sunrise and sunset or one twenty-fourth of a full day. In either case the division reflected the widespread use of a duodecimal numbering system. The importance of 12 has been attributed to the number of lunar cycles in a year. In China, the whole day was divided into twelve parts. Astronomers in Egypt's Middle Kingdom (9th and 10th Dynasties) observed a set of 36 decan stars throughout the year. These star tables have been found on the lids of coffins of the period. The heliacal rising of the next decan star marked the start of a new civil week, which was then ten days. The period from sunset to sunrise was marked by 18 decan stars. Three of these were assigned to each of the two twilight periods, so the period of total darkness was marked by the remaining 12 decan stars, resulting in the 12 divisions of the night. The time between the appearance of each of these decan stars over the horizon during the night would have been about 40 modern minutes. During the New Kingdom, the system was simplified, using a set of 24 stars, 12 of which marked the passage of the night. Ancient Sinhalese in Sri Lanka divided a solar day into 60 Peya (now called Sinhala Peya). One Sinhala Peya was divided into 24 Vinadi. Since 60 (peya) x 24 (vinadi) = 24 (hours) x 60 (minutes), one Vinadi is equal to one present-day standard minute. Earlier definitions of the hour varied within these parameters: - One twelfth of the time from sunrise to sunset. As a consequence, hours on summer days were longer than on winter days, their length varying with latitude and even, to a small extent, with the local weather (since it affects the atmosphere's index of refraction). For this reason, these hours are sometimes called temporal, seasonal, or unequal hours. Romans, Greeks and Jews of the ancient world used this definition; as did the ancient Chinese and Japanese. The Romans and Greeks also divided the night into three or four night watches, but later the night (the time between sunset and sunrise) was also divided into twelve hours. When, in post-classical times, a clock showed these hours, its period had to be changed every morning and evening (for example by changing the length of its pendulum), or it had to keep to the position of the Sun on the ecliptic (see Prague Astronomical Clock). - One twenty-fourth of the apparent solar day (between one noon and the next, or between one sunset and the next). 
As a consequence hours varied a little, as the length of an apparent solar day varies throughout the year. When a clock showed these hours it had to be adjusted a few times in a month. These hours were sometimes referred to as equal or equinoctial hours. - One twenty-fourth of the mean solar day. See solar time for more information on the difference to the apparent solar day. When an accurate clock showed these hours it virtually never had to be adjusted. However, as the Earth's rotation slows down, this definition has been abandoned. See UTC. Many different ways of counting the hours have been used. Because sunrise, sunset, and, to a lesser extent, noon, are the conspicuous points in the day, starting to count at these times was, for most people in most early societies, much easier than starting at midnight. However, with accurate clocks and modern astronomical equipment (and the telegraph or similar means to transfer a time signal in a split-second), this issue is much less relevant. Counting from dawn In ancient and medieval cultures, the counting of hours generally started with sunrise. Before the widespread use of artificial light, societies were more concerned with the division between night and day, and daily routines often began when light was sufficient. Sunrise marked the beginning of the first hour (the zero hour), the middle of the day was at the end of the sixth hour and sunset at the end of the twelfth hour. This meant that the duration of hours varied with the season. In the Northern hemisphere, particularly in the more northerly latitudes, summer daytime hours were longer than winter daytime hours, each being one twelfth of the time between sunrise and sunset. These variable-length hours were variously known as temporal, unequal, or seasonal hours and were in use until the appearance of the mechanical clock, which furthered the adoption of equal length hours. This is also the system used in Jewish law and frequently called Talmudic hour ("Sha'a Zemanit") in a variety of texts. The talmudic hour is one twelfth of time elapsed from sunrise to sunset, day hours therefore being longer than night hours in the summer; in winter they reverse. The Indic day began at sunrise. The term "Hora" was used to indicate an hour. The time was measured based on the length of the shadow at day time. A "Hora" translated to 2.5 "Pe." There are 60 "Pe" per day, 60 minutes per "Pe" and 60 "Kshana" (snap of a finger or instant) per minute. "Pe" was measured with a bowl with a hole placed in still water. Time taken for this graduated bowl was one "Pe." Kings usually had an officer in charge of this clock. Counting from sunset In so-called Italian time, "Italian hours", or "Old Czech Time", the first hour started with the sunset Angelus bell (or at the end of dusk, i.e., half an hour after sunset, depending on local custom and geographical latitude). The hours were numbered from 1 to 24. For example, in Lugano, the sun rose in December during the 14th hour and noon was during the 19th hour; in June the Sun rose during the 7th hour and noon was in the 15th hour. Sunset was always at the end of the 24th hour. The clocks in church towers struck only from 1 to 12, thus only during night or early morning hours. This manner of counting hours had the advantage that everyone could easily know how much time they had to finish their day's work without artificial light. 
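As a rough illustration of how such sunset-reckoned hours relate to the modern clock, here is a small Python sketch. The starting time is an assumed example value (roughly half an hour after a December sunset, per the Angelus-bell convention described above), not a historical record.

```python
# Convert an "Italian hour" (counted 1-24 from the evening bell) into a
# modern clock time. The bell time below is an assumed, illustrative value:
# about half an hour after a December sunset.
from datetime import datetime, timedelta

def bell_stroke(italian_hour, count_start):
    """Return the clock time at which the given Italian hour is struck."""
    return count_start + timedelta(hours=italian_hour)

count_start = datetime(2023, 12, 1, 17, 15)    # assumed start of the count
for h in (1, 12, 19, 24):
    print(h, bell_stroke(h, count_start).strftime("%H:%M"))
# Noon falls between the strokes of 18 and 19, i.e. during the 19th hour,
# consistent with the December figures quoted above for Lugano.
```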
This manner of counting was already widely used in Italy by the 14th century and lasted until the mid-18th century; it was officially abolished in 1755, though in some regions it remained customary until the mid-19th century. The system of Italian hours can be seen on a number of clocks in Europe, where the dial is numbered from 1 to 24 in either Roman or Arabic numerals. The St Mark's Clock in Venice and the Orloj in Prague are famous examples. It was also used in Poland and Bohemia until the 17th century.
Counting from noon
For many centuries, up to 1925, astronomers counted the hours and days from noon, because it was the easiest solar event to measure accurately. An advantage of this method (used in the Julian Date system, in which a new Julian Day begins at noon) is that the date doesn't change during a single night's observing.
Counting from midnight
In the modern 12-hour clock, counting the hours starts at midnight and restarts at noon. Hours are numbered 12, 1, 2, ..., 11. Solar noon is always close to 12 noon, differing according to the equation of time by as much as fifteen minutes either way. At the equinoxes sunrise is around 6 A.M. (ante meridiem, before noon), and sunset around 6 P.M. (post meridiem, after noon).
In the modern 24-hour clock, counting the hours starts at midnight and hours are numbered from 0 to 23. Solar noon is always close to 12:00, again differing according to the equation of time. At the equinoxes sunrise is around 06:00 and sunset around 18:00.
Derived measures and applications
Although the SI unit for speed is metres per second, in everyday usage kilometres per hour or, in the USA and the UK, miles per hour are more practical. Occasionally the metre per hour is used for slow-moving objects like snails.
Worker compensation is commonly based on working time in terms of number of hours worked, referred to as an hourly wage. Worker schedules are categorized by number of work hours per day or number of work hours per week; these are regulated and distinguish part-time from full-time jobs. The man-hour describes the amount of work that a person can complete in one hour. Many professionals such as lawyers and therapists charge a fee per hour.
The credit hour measures the time commitment of an academic course (typically university level) in terms of "contact hours" between students and staff per week. For example, a class that meets for one hour three times a week is said to be a 3-hour class.
The kilowatt hour, the energy expended by a 1000 watt device in one hour, is commonly used as a billing unit for energy delivered to consumers by electric utilities. Conversely, the Btu per hour is a unit of power used in the power industry and heating/cooling applications. In the railroad industry, when sharing locomotives, the horsepower-hour may be used as a measure of energy.
A trace of the Italian-hours system survives in Verdi's operas: in Rigoletto and in Un ballo in maschera, midnight is announced by the bell striking six times, not twelve as we are accustomed to today. In his last opera, Falstaff, however, he abandoned that style, perhaps under the influence of contemporary trends at the end of the 19th century when he composed it, and the midnight bell strikes twelve times.
You have a scene in your book where the U-Boat is attacking from the surface. Why is that?
A German U-Boat wasn't a submarine as we think of them today. U-Boat is the abbreviation of Unterseeboot or "under sea boat", that is, a boat which could go underwater for a limited time. The boats could not remain submerged for a long period of time. Twenty-four hours was about all the men could take because the air became so foul and the CO2 levels became unhealthy. The record is 63 hours, I think, and most of the crew had passed out from CO2 poisoning. While submerged, a U-Boat, running at full speed on its electric motors, could only make six knots. A submerged boat making that speed would run completely out of battery power in four hours. On the surface, however, running at full speed on its diesel motors, most U-Boats could make seventeen knots or better and could do so for days if they needed to. Therefore, attacking at night from the surface was the preferred method of attack. Boats would typically approach a convoy up-moon, that is, on the dark side of the convoy so the boat would not be silhouetted against the moon. This firing position was critical because a U-Boat on the surface at night was very difficult for convoy lookouts to spot unless illuminated against the moon. In the first years of the war, when the great aces made their reputations, almost all of these attacks were made from the surface at night. By the summer of 1943, most Allied escort ships were equipped with radar, which robbed the U-Boat of the advantages of making night attacks from the surface.
In this classic and often reproduced photograph, the U-Boat kommandant is supposedly making a submerged attack. The photo is posed, of course, and the man at the scope is Kapitanleutnant Kurt Diggins, who served as Captain Langsdorff's flag lieutenant, or personal orderly officer as the Germans called it, aboard the Admiral Graf Spee. Diggins escaped from Argentina, returned to Germany, and later received command of U-458. In the photograph, Diggins is actually in the control room of the boat looking through the sky periscope which kommandants used to search the sea and sky before surfacing. The attack periscope with its special ranging lenses and marks, used when the boat made a submerged attack, is actually in the conning tower just above Diggins. The space is so small it was impossible to get a picture of a U-Boat kommandant sitting at the attack periscope. He did not excel as kommandant of U-458, sinking only two ships for a total of 8,000 tons before his boat was sunk on 22 August 1943. Most of the crew, including Diggins, escaped from the sinking U-Boat and were picked up by the Royal Navy and made prisoners of war. Diggins later served in the West German navy and only recently died, in 2007, at age 94.
A close up of Doppler Shift Doppler shift is a phenomenon which is commonly observed by the lay person, yet still confuses many amateur satellite operators. This page intends to de-mystify Doppler and presents typical Doppler scenarios for low Earth orbiting satellite operation. Many amateurs have been confused by the Doppler shift encountered during satellite operation. In essence, Doppler shift is simply the apparent change in frequency that is observed when an object moves towards or away from an observer. It is a property common to wave propagation. Almost everyone has experienced Doppler shift in everyday life. Here's a couple of common examples: 1. You are waiting at a level crossing for a train to pass. The train is an express train, and is moving at high speed. As the train approaches, the driver sounds the horn and keeps it on until the locomotive is well past the level crossing. Sitting in your car, you hear the pitch of the horn go lower as the train passes and moves away. At the same time, a passenger on the train hears the horn. However, the horn's pitch doesn't appear to vary, but the bells of the level crossing appear to become lower in pitch as the passenger's carriage passes the level crossing. Sound familiar? 2. You are walking along a main road when an ambulance passes with its siren on. As the ambulance goes past where you're walking, the pitch of the siren becomes lower. 3. You live near a small airport, where the aircraft are mainly propeller driven. A plane takes off on a course over your roof. As it passes, the note of the engine becomes deeper (like the sound effects in old war movies!). 4. Similarly, the "VRRROOOOOOOOOOOMMMM" sounds that kids make when imitating cars driving past at high speed was learnt from their (unconscious) observations of the effects of Doppler shift on the sound of the engines as cars drive past (anyone listened closely to Formula 1 telecasts on the TV?). :-) There's just a few everyday examples. Similarly, radio waves are affected by Doppler shift. However, because the speed of light (and therefore radio waves) is much higher than that of sound, everyday Doppler shifts are quite small, maybe a few Hz for a VHF station mobile in a car. The only terrestrial operators who would normally notice Doppler shift are those who attempt SSB mobile on the microwave bands, and those who work aurora scatter. Doppler shift is proportional to the frequency of operation, so it becomes more significant on the higher bands. Satellites travel at much higher speeds, typically 27,000 km/h for a low Earth orbiting satellite. At this speed, Doppler shift becomes very significant for SSB operators on all satellite bands (21 MHz and up), and is noticeable to FM operators on 145 MHz and must be compensated for on any higher band. I have mentioned techniques for Doppler compensation in other articles, so instead, I'll present information on typical Doppler effects encountered on LEO amateur satellites. The following information assumes a satellite in a circular orbit at an altitude of 800 km. This is a common orbit for amateur satellites (in fact, the raw data was obtained from pass predictions of UO-14, which is in such an orbit). The data was collected by using Winorbit to generate tables of Doppler shift against time every 5 seconds for various elevation passes. The raw data was then plotted in Excel to give a graphical presentation of Doppler shift as a pass progresses. 
Finally, the frequency shifts were scaled from 70cm (UO-14's downlink band) to several amateur bands, to show the effect of carrier frequency on Doppler. Firstly, the amount of Doppler shift for LEO at 800 km varies within these ranges:
|Max Doppler||+/- 477 Hz||+/- 659 Hz||+/- 3.27 kHz||+/- 9.76 kHz||+/- 28.5 kHz||+/- 53.8 kHz||+/- 230 kHz|
Table 1. Maximum Doppler Shift Vs Frequency for Popular Amateur Bands for an LEO at 800km Altitude.
The table above shows how Doppler shift increases with frequency. For SSB/CW, it should be obvious that Doppler will significantly impact operations on any of the bands given, and must be compensated for. However, for FM, the Doppler shift on 2 metres (3.27 kHz) is still small enough to be workable (with some distortion) on a fixed frequency receiver. On 70cm, even FM receivers must be retuned 3 or 4 times during a typical pass. By the time one gets to 10 GHz, only wideband modes and/or computer controlled stations would be able to cope with the severe Doppler shift one would encounter. However, 10 GHz isn't used on any current LEOs, but will be active on Phase 3D, where the higher orbit and slower satellite motion will mean the Doppler shift will be less in magnitude, and less variable over a given short time period. Just for comparison, here are typical Doppler shifts for a car travelling at 100 km/h. Hardly enough to keep you reaching for the VFO dial, unless you operate on bands over 23cm, but 10 GHz mobile SSB would be interesting indeed! :-)
|Max Doppler||+/- 1.76 Hz||+/- 2.44 Hz||+/- 12.1 Hz||+/- 36.2 Hz||+/- 105 Hz||+/- 199 Hz||+/- 849 Hz|
Table 2. Maximum Doppler Shift Vs Frequency for Popular Amateur Bands for a car travelling at 100 km/h.
The other issue with Doppler shift is how it varies during the satellite pass itself. This is highly dependent on the pass itself. As the graphs show, low elevation passes have a fairly linear variation of Doppler shift, spread out over the pass. On the other hand, a pass directly overhead has most of the Doppler shift variation concentrated around the middle of the pass, which means a relaxed start and finish, but lots of VFO twiddling around mid pass! :)
Below are 4 graphs of Doppler shift Vs time for 4 different UO-14 passes (11 degree, 20 deg, 47 deg and overhead). The downlink (on which the Doppler was calculated) is on 435.070 MHz, and the graphs show how far removed the actual frequency received on the ground would be from the nominal carrier frequency. Note the shape of the curve for each pass.
With the above graphs, notice how the Doppler shift starts positive early in the pass (i.e. when the satellite approaches), and ends up at the maximum negative value at the end of the pass. This is just like the everyday examples of Doppler shift with sounds given at the start of this article. On the overhead pass (the bottom graph), the Doppler shift takes over 5 minutes to change by 2 kHz. The next 2 kHz takes just under a minute, and by mid pass, the rate of change is around 5.8 kHz/minute. Compare this behaviour to the 11 degree pass in the top graph, which shows a much flatter curve spread across the whole pass.
Doppler shift is inherent in any mobile radio communications, but is only significant to amateurs at extremely high frequencies, or for satellite operation. All satellite operators (with the possible exception of SO-35 parrot users) need to take Doppler into account. Satellite Doppler shift doesn't vary in a simple linear fashion during a pass, but instead has the greatest rate of change at mid pass.
Overhead passes show more extreme variation in the rate of change of Doppler shift than low elevation passes. The implication is that Doppler compensation is best done either manually or by computer control, as simple linear frequency changes over time will not track the Doppler well.
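The behaviour described in this article follows from the first-order relation shift = carrier frequency x (radial velocity / speed of light). The short sketch below is only an illustration of that relation, not the code used to generate the tables or graphs above; the +/-6.7 km/s peak range rate and the 146 MHz "2 m" carrier are assumed round figures, while 435.070 MHz is the UO-14 downlink quoted in the text.

```python
# Minimal sketch of the first-order Doppler relation used above.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(carrier_hz: float, radial_velocity_ms: float) -> float:
    """Doppler shift in Hz; positive radial velocity means the satellite is approaching."""
    return carrier_hz * radial_velocity_ms / C

# Assumed peak range rate for an 800 km LEO seen near the horizon: about 6.7 km/s.
for carrier in (146.0e6, 435.070e6):
    shift = doppler_shift(carrier, 6_700.0)
    print(f"{carrier / 1e6:.3f} MHz -> about +/- {shift / 1e3:.1f} kHz at the pass extremes")
# Roughly +/-3.3 kHz on 2 m and +/-9.7 kHz on 70 cm, in line with Table 1.
```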
These games teach valuable skills and have a high fun and educational rating.
- Your child will develop basics in reading books, sight words and attention/listening skills by helping the Wonder Pets save a baby triceratops.
- Your child develops knowledge about animals and learns to identify similarities between parents and their children by watching these videos.
- Your child develops knowledge about elephants by watching this video.
- Your child develops an array of addition skills on Dora's first day back to school.
- Your child develops literacy and musical understanding as they learn about the work of musician Jacques Offenbach.
- Your child develops phonemic awareness by finding the long '-o' words in the picture.
- Your child will practice addition and understanding quantities while helping Dora and her friends get to their first day of school.
- Your child develops literacy/phonics skills and learns about the work of musician Wolfgang Amadeus Mozart by reading and/or listening to this book.
- Your child develops phonemic awareness by finding the '-or' words in the picture.
- Your child develops phonemic awareness by finding the '-Y' words in the picture.
In Australia, hot water systems are an essential part of daily life. It is commonly used in residential and commercial buildings to provide heated water for a variety of purposes including showering, bathing, dishwashing, and laundry. The type of hot water system used in Australia is determined by several factors, including building size, number of occupants, location, and energy efficiency requirements. However, did you know that hot water temperature regulation in Australia is regulated by law? And why should it be regulated? The authority that regulates hot water regulations in Melbourne is the Victorian Building Authority (VBA). According to VBA, every year, hot tap water scalds many children, the elderly, and the disabled across Australia. More than 90% of these scalds happen in the bathroom, where the water temperature from showers or taps is too hot and a person cannot react quickly enough to avoid scalding. This may not appear to be a significant temperature difference, but it can mean the difference between permanent scarring, agonising pain, hospitalisation and skin grafts, or a relatively minor injury. Severe scalding can even result in death in some cases. This is why plumbing codes require a maximum temperature of 50°C at each shower head or tap outlet. This temperature is suitable for a bath or shower but not hot enough to cause severe scalding. The maximum water settings listed above are not bathing temperatures; for baths and showers, the recommended maximum bathing temperature for young children is 37 to 38°C. You may need to mix cold and hot water. In addition, here are ways to reduce the risk of burns in the bathroom: To prevent bacteria growth (such as Legionella), hot water must be stored above 60°C in hot water systems. Legionella is a type of bacteria that can cause Legionnaires' disease, a severe form of pneumonia. The bacteria can be found in both natural and man-made water systems, such as hot tubs, cooling towers, and plumbing systems, and they can be spread through the air when contaminated water droplets are inhaled. People who are older, have weakened immune systems, or have underlying medical conditions are more likely to become ill from Legionella. Legionnaires' disease symptoms include fever, coughing, shortness of breath, muscle aches, and headaches. It can cause complications such as respiratory failure, septic shock, and even death in severe cases. The actual water in your hot water system, on the other hand, must be stored at a minimum of 60°C to prevent the growth of harmful bacteria such as Legionella. If you're looking for a specific number, here it is 50°C. That is, the water that comes out of your hot bath tap or shower head when only the hot water is turned on should be no hotter than that - hot enough for a relaxing bath but not hot enough to cause immediate or long-term damage. The main exception to the 50°C rule is for environments designed specifically for vulnerable Australians, such as nursing homes and schools. The general maximum in these cases is 45°C. If you are unsure about the hot water temperatures in your home, whether in your hot water system or in your taps, give Hot Water Melbourne a call and we will arrange for one of our plumbers to come out and check the temperature and install a tempering valve today. 247 Hot Water Melbourne is a leading hot water Melbourne company that has been providing residential and commercial clients with dependable, timely, and cost-effective services for many years.
As teenagers transition into adulthood, they face many challenges, including the persistent problem of bullying. Bullying is a widespread issue that affects young people from all backgrounds and locations. It takes various forms, such as verbal, physical, social, and cyberbullying, and has severe consequences on young minds. Victims often experience emotional distress, academic consequences, social isolation, and physical health problems. To combat bullying, everyone must work together. Schools must implement strict anti-bullying policies and offer resources for prevention and intervention. Parents and guardians should actively participate and communicate openly with their children to create a supportive home environment. Encouraging empathy and standing up against bullying can significantly impact adolescent relationships. Building empathy in teenagers is a crucial step in combating bullying, and educational programs and community initiatives can foster it. Creating safe spaces in communities, schools, and online platforms is essential. Young people should feel heard, valued, and protected. Initiatives that promote inclusivity, tolerance, and conflict resolution can contribute to these safe havens. It is crucial to acknowledge the existence of teen bullying and work together to build a world where adolescents can thrive without the threat of bullying. In this haven, young individuals can navigate their formative years with confidence, resilience, and the unwavering support of their communities. The teenage years are a time of self-discovery, growth, and transition. Yet, amidst the challenges and triumphs, a persistent issue continues to cast a shadow over the lives of many adolescents: bullying. This article explores the complex landscape of teen bullying, its profound impact, and how communities can come together to create a safe haven for their young members. The Reality of Teen Bullying Teen bullying is a deeply ingrained problem that transcends geographical boundaries and social backgrounds. It comes in various forms: The Impact on Adolescents Teen bullying has far-reaching consequences on young minds: The Role of Adults and Peers Addressing teen bullying necessitates a collective effort: The Importance of Empathy Building empathy in teenagers is a crucial step in combating bullying. When young individuals can understand and share the feelings of their peers, they are less likely to engage in hurtful behaviors. Educational programs and community initiatives can foster empathy, creating a more compassionate society. Creating Safe Spaces Communities, schools, and online platforms should strive to be safe spaces where teenagers feel heard, valued, and protected. Initiatives encouraging inclusivity, tolerance, and conflict resolution can contribute to these safe havens. Teen bullying is a persistent challenge that demands our unwavering attention. By acknowledging its existence, fostering empathy, and promoting open communication, we can work together to build a world where adolescents can flourish without the looming threat of bullying. In this haven, young individuals can navigate their formative years with confidence, resilience, and the unwavering support of their communities.
Solving limits, or any calculus limit, requires mastering the simple techniques given here. In this section, you will find everything you need to know about solving JEE limits questions and calculus problems involving limits. At Vedantu, the experts have prepared a list of all possible cases of problems which are an ultimate resource for solving limits. Using these techniques, you will be able to solve any kind of problem involving limits in calculus. You'll also find limits solved problems PDF and tips for every type of limit in calculus. However, if you still want to receive more lessons covering everything in calculus directly into your email, get on-board with Vedantu experts.
How To Solve Limits
Let's get started to learn the idea behind limits and problem-solving techniques. Following are the various techniques for evaluating the limit of a function:
- Evaluate limits using direct substitution
- Evaluate limits using factoring and cancelling
- Evaluate limits by expanding and simplifying
- Evaluate limits by combining fractions
- Evaluate limits by multiplying by the conjugate
Type 1: Limits by Direct Substitution
These are the simplest problems involving limits in calculus. In these problems, you are only required to substitute the value which the independent variable is approaching. For example, if 'f' is a polynomial and 'a' is in the domain of f, then we simply replace 'x' by 'a' to obtain:
lim (x → a) f(x) = f(a), for instance lim (x → a) x² = a²
The technique we use here is related to the concept of continuity. You can also solve these limits by continuity.
Type 2: Limits by Factoring
Now this one is an interesting way of solving limits. In these limits, if you try to substitute, you get an indetermination. For example:
lim (x → 1) (x² − 1)/(x − 1)
If you simply substitute x by 1 in the expression, you will get 0/0. So, what can be done? We can use our algebraic skills to simplify the expression. In the example given earlier, we can factor the numerator:
lim (x → 1) (x² − 1)/(x − 1) = lim (x → 1) (x − 1)(x + 1)/(x − 1) = lim (x → 1) (x + 1) = 2
You will spot these types of problems easily whenever you observe a quotient of two polynomials. You can attempt this technique whenever there is an indetermination.
Type 3: Limits by Rationalization
This type of technique involves limits with square roots. In these types of limits, we use an algebraic technique called rationalization to solve limits. For example:
lim (x → 1) (1 − √x)/(1 − x)
If we simply substitute, we obtain 0/0 and we cannot factor this. The strategy is to multiply and divide the fraction by an appropriate expression. (Keep in mind that if you multiply and divide a number by the same element you obtain the same number.) In this case, we use the identity given below:
(√a − √b)(√a + √b) = a − b
You need only perform the product on the left to verify it. So, whenever you spot the sum or difference of two square roots, you can apply the previous identity. The two factors on the left are what we call conjugate expressions.
Solved Examples on How To Solve Limits
You will find the following types of limits examples and solutions in the JEE limits question bank provided by Vedantu.
Example: Find the limit of the following expression:
lim (x → 5) (x² − 25)/(x² + x − 30)
Though the limit given is the ratio of two polynomials, substituting x = 5 makes both the numerator and the denominator equal to zero (0). We have to factor both numerator and denominator as given below:
lim (x → 5) (x − 5)(x + 5) / ((x − 5)(x + 6))
Simplify the expression to get:
lim (x → 5) (x + 5)/(x + 6) = 10/11
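To finish the rationalization case above, here is the worked limit, multiplying numerator and denominator by the conjugate 1 + √x. This completion is supplied here for illustration and is not part of the original example set:

$$
\lim_{x \to 1} \frac{1-\sqrt{x}}{1-x}
= \lim_{x \to 1} \frac{(1-\sqrt{x})(1+\sqrt{x})}{(1-x)(1+\sqrt{x})}
= \lim_{x \to 1} \frac{1-x}{(1-x)(1+\sqrt{x})}
= \lim_{x \to 1} \frac{1}{1+\sqrt{x}} = \frac{1}{2}
$$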
Granite colors range the spectrum from white to black to pink, but what makes a single rock type so variable? Here we will discover what makes each granite a different color, what that tells us about its mineralogy and origin. You may be an amateur geologist, just curious, or looking for your new granite countertops. Regardless of the reason, you'll be amazed at the vast varieties of different granites. Granite is one of the most commonly known types of rocks, used in everything from buildings to sculptures. It has been used for thousands of years and is regarded as a symbol of status, strength, and durability. What is Granite? Granite is an intrusive igneous rock with large grains (minerals) easily seen by the naked eye. Granite colors are most commonly pink, white, variations of grey and black. However, it's important to note that some stones marketed as black 'granite' are in fact likely gabbro as granite must contain at least 20% quartz within a rock to make it granite. Now, let's break down what exactly an intrusive igneous rock is: - An intrusive rock means that molten rock cooled within the crust and was never expelled as molten rock. The gradual cooling of molten rock is imperative to create the large crystals of a singular mineral that we see in granites. With time, there is differential lithification or solidifying of molten rock dependent on chemical makeup, this allows for different types of minerals to form at different periods of time and alter the final resulting granite. Therefore, the size of individual grains is proportional to how slowly the molten rock was cooled. Extrusive rocks cool during a volcanic eruption and allow no time for orientation of minerals, creating a homogenous looking rock with no discernible grains. - An igneous rock is a rock that has solidified from molten rock. This is in comparison to the two other major types of rock, sedimentary and metamorphic. What Determines Granite Colors? Granite is a conglomerate of minerals and rocks, primarily quartz, potassium feldspar, mica, amphiboles, and trace other minerals. Granite typically contains 20-60% quartz, 10-65% feldspar, and 5-15% micas (biotite or muscovite). The minerals that make up granite give it the unique colors we see in different types of granite. The relative proportion of different colored minerals in a granite is largely due to the original source of molten rock that cooled to form the granite. If the molten rock was abundant in potassium feldspar, the granite is more likely to take on a salmon pink color. On the other hand, if the molten rock is abundant in quartz and minerals that make up amphibole, you will likely get a black and white speckled granite commonly seen on countertops. - Quartz - typically milky white color - Feldspar - typically off-white color - Potassium Feldspar - typically salmon pink color - Biotite - typically black or dark brown color - Muscovite - typically metallic gold or yellow color - Amphibole - typically black or dark green color The combination of the minerals above make up most of the colors we typically see in a granite. Now, let's break down the distance types of granite and a quick overview of what gives them their color White granite is a granite that is composed primarily of quartz (milky white) and feldspar (opaque white) minerals. The small black specks in the granite above are likely small amphibole grains. This could be due to a lack of chemical components needed to form amphibole, or the cooling process was not amenable to formation of amphiboles. 
If you see a rock that is 100% white, it is not granite but more likely a man-made rock that is created to look like granite or a quartz (quartzite) countertop. "Black granite" is commonly seen in commercial rock, but it is not granite at all. As said above, granite must be at least 20% quartz, which means an all black rock is not a granite. Most commonly, black granite is in fact gabbro, a mafic intrusive igneous rock similar to basalt. Gabbro is primarily composed of minerals pyroxene, plagioclase, and small amounts of olivine (dark green) and amphibole. Pink colored granite is a result of an abundance of potassium feldspar within the granite. You can see small specs of milky semi-transparent quartz, dark brown/black amphibole, and opaque white feldspar. However, in a granite like the one above the primary mineral is potassium feldspar. Black And White Granite The above granite appears to have equal parts quartz, feldspar, and amphibole, making a speckled black and white granite. This is one of the most common types of granite and one that is most commonly seen used for granite countertops. Red granite is a variation of pink potassium feldspar abundant granite, where the k-feldspar takes on a redder than pinker color. Also, you can get red coloring from iron oxide in hematite grains or inclusion within feldspar, essentially the same process that makes rusted metal ruby red colored. You may find advertisements for blue granite countertops but this is also almost certainly not granite. One potential is that the rock is actually Larvikite, an igneous variety of monzonite and sometimes referred to as "blue granite" despite it not being granite. Another common alternative is Anorthosite, a rock that contains abundant blue labradorite and is sometimes sold as blue granite. When advertised as green granite, often times the stone is actually a green variety of marble, which gains its green coloration due to inclusions of serpentine. It could also be a green variation of soapstone, mislabeled as granite. Granites are not abundant in green colored minerals, but there are a variety of different rock types that do contain green minerals in abundance. One very unusual way to get a green coloration is the inclusion of amazonite, a green variety of feldspar. Common Types Of Granite Lastly, we'll go through some of the most common types of granite and what gives them the color they have inherited. - Santa (St.) Cecilia granite - This granite is known for its many garnets (deep red minerals) with tannish feldspar, quartz, and dark biotite. - Uba Tuba granite - Uba Tuba granite is a type of granite mined in Brazil that takes on a dark color due to an abundance of mica. - Kashmir White granite - Kashmir White granite is primarily composed of white feldspar and quartz, with distinctive red garnet crystals. This is not actually a granite, but a metamorphic rock. - (New) Venetian Gold granite - A mixture of tan and white feldspar and quartz minerals with amphibole, mica, and garnets to add dark black and red coloring. - Giallo Ornamental granite - Some versions of this granite appear to be partially metamorphosed, bringing it into the category of a gneiss. The metamorphosis, a result of heat and pressure, gives it the swirl texture. This granite has very little accessory minerals and is primarily white due to feldspar and quartz. - Tan Brown granite - The tan here refers to a variation of feldspar, with trace amounts of potassium to give it a very faint pink color. 
The brown and black flecks are likely abundant amphibole.
- Baltic Brown granite - This type of granite is very similar to the tan brown granite, but with larger feldspar grains.
- Black Pearl granite - This is not actually a granite, but a type of gabbro with pyroxene and amphibole.
- Bianco Antico granite - This granite is primarily quartz, with pink flecks of feldspar, sourced from Brazil.
- Black Galaxy granite - This granite is actually a type of fine- to medium-grained gabbro, black with golden flecks.
- Volga Blue granite - This is actually an anorthosite, an intrusive igneous rock that gets its iridescent blue color from labradorescence.
- Absolute Black granite - Again, this is a type of gabbro and not a granite, similar to the Black Pearl granite above.
That wraps up this guide to granite colors and hopefully taught you a lot about the different varieties of granites, from white granite to 'black granite.' Leave a comment below with your favorite.
What is a gastrostomy? A gastrostomy is a surgically created opening in your belly. A tube is placed through the opening in your belly and into the stomach to give you food and fluids. You may need a gastrostomy if you cannot swallow or you are not able to eat enough food for good nutrition. A balloon or plastic cap inside the stomach holds the tube in place and prevents leakage. There may be a few inches of the tube sticking out of the opening that can be closed with a clamp or plug, or there may be a button right next to the skin that can be opened to give food and fluids and then closed. The tube is also called a G-tube or feeding tube. How do I care for a gastrostomy at home? Your medical care team will teach you what you need to know to feel safe and comfortable taking care of the gastrostomy at home. Cleaning and caring for the gastrostomy site - Wash your hands with soap and water before and after you touch the area. - Use warm water and soap to clean around the gastrostomy site 2 to 3 times a day or as needed. - Make sure that you gently soak or scrub off all crusted areas on the skin around the tube and on the tube itself. You may need to use a diluted solution of hydrogen peroxide (1/2 peroxide and 1/2 water) and cotton tipped swabs to clean around the tube site. - After cleaning, rinse around the area with water and pat dry. - Ask your healthcare provider if you should use an antibiotic ointment on the area if it looks red or sore. - Secure the tube as instructed by your healthcare provider. Your healthcare provider will tell you when it is safe to start taking baths or showers again. When you are able to take a bath or shower, remember to: - Clamp the G-tube or close the valve on the gastrostomy button before bathing. - Make sure the water is not too warm, so that it does not irritate tender skin. - Use only mild soaps and soft washcloths. Your healthcare provider will tell you when you can go back to your normal activities. You may be told to avoid lifting for 6 weeks. Make sure that the G-tube is carefully secured under clothing. A G-tube should not keep you from returning to work and should not keep you from most activities. If you have questions, ask your healthcare provider. You can travel with a G-tube. Always take a travel kit of emergency supplies with you. The travel kit should include all of the items that you usually need to care for the G-tube. You may also need a replacement tube, supplies, and instructions for replacing your G-tube in case it accidentally comes out. What problems might I have with a gastrostomy tube? Possible problems with a gastrostomy include: - Blocked tube Food or medicine may build up in the tube or body fluids may crust around the opening and block the tube. Follow your healthcare provider’s instructions for flushing the tube to clear a blockage. If the tube still seems blocked, call your healthcare provider. - Drainage around the gastrostomy Some drainage around the gastrostomy is normal, especially soon after the gastrostomy is put in. Clean the skin around it often with mild soap and water. Make sure you remove all crusted areas from the tube itself. This helps prevent infection. Call your healthcare provider if leaking or drainage continues or if the site becomes painful. Vomiting may be caused by the tube moving forward into the stomach and blocking the stomach outlet. Follow your healthcare provider’s instructions for checking the placement of the tube. Excessive gas and overfeeding can cause bloating of the stomach and vomiting. 
Removing the clamp or the plug or opening the button at the end of the G-tube will allow air to escape and gradually relieve the problem. Diarrhea is a common problem for people with a gastrostomy tube. There are many possible causes of diarrhea, such as the type of liquid food, medicines, changes in the normal bacteria levels in the stomach and intestines, and how fast the liquid food is given. If you have diarrhea, talk to your healthcare provider about possible causes and treatment.
- Breakdown of the G-tube
Over time, the rubber tube will break down and get harder to use. Many times the end used to add the feeding formula will break off or split. These are signs that the tube needs to be replaced. Most tubes last for 3 to 6 months, if you need one for that long.
Women's Equality Day: Seen and Heard One of the most basic and powerful human needs is to be seen and heard. We all want to feel valued, important, and recognized for who we are. It’s how we connect with others, showing them that they matter to us and that we care about them. Today we’re celebrating Women’s Equality Day to commemorate the 100th anniversary of women’s suffrage. It’s an amazing true story of how women worked together to earn the right to be seen and heard on a national scale. Prior to the summer of 1920, women didn’t have the legal right to vote in the United States. Every four years on the first Tuesday in November, they had to sit at home while their fathers, brothers, and husbands got dressed up and went to the polls to cast votes for who would represent them in government. Women in America had to abide by the government’s rules, but they didn’t have a say in choosing who would make those rules. Congress eventually passed the 19th amendment to the U.S. Constitution in 1919 (giving women the right to vote), but it required ratification by at least 36 states to become law. That happened on August 26, 1920 – Happy 100th Anniversary, Women’s Right to Vote! Two of the central figures in the women’s suffrage movement were Alice Paul and Lucy Burns. They were best friends who worked together and played off each other’s strengths and weaknesses to become a powerful team leading the fight for the right to vote. Alice was a descendant of William Penn, raised with the Quaker conviction of service to others as her life’s purpose. She was modest and reserved, but with a steely determination that made her a formidable leader. In contrast, Lucy was vivacious and charming, a whip-smart Irish Catholic girl whose diplomatic skills were unsurpassed. Alice and Lucy met in a police station, after they had both been arrested for demonstrating for the right to vote. From all accounts, they couldn’t have been more different in their demeanor and personality traits, but they were both smart, driven, and passionate about improving the lives of women in America, and their friendship and partnership in the women’s suffrage movement garnered extraordinary results. In 1916, after several attempts to secure the right to vote had failed, Alice organized the National Women’s Party with Lucy’s help. Alice’s strengths were in planning, tactical moves, and using her family’s connections to raise money to support the cause. Lucy’s strengths were in communications and organizing; she tirelessly gave speeches and wrote news bulletins for the media to spread the word for women’s suffrage. These two women were fearless - they were jailed many times for their activism, and they suffered horrible conditions in prison. While in jail, they organized hunger strikes to protest their brutal treatment, and when the authorities were afraid the women might die, they were force-fed through a tube by prison guards before they were finally freed. Alice and Lucy understood their differences, strengths, and weaknesses. Lucy didn’t try to convince Alice to be a livelier speaker, and Alice didn’t try to convince Lucy to be a more militant negotiator. They recognized what was unique about the other and employed those attributes to further the cause of women’s suffrage. One could argue that if they hadn’t made each other feel seen or heard, the women’s suffrage movement might not have been as successful. Their friendship and partnership in fighting for something bigger than both of them is an example that all of us can follow. 
Alice and Lucy were living in an extraordinary time in American history, when their relatively new country was involved in the first global war (World War I) and the industrial revolution was changing the way people went about their daily lives. Despite those difficult times, Alice and Lucy were able to make each other feel seen and heard, while also helping all women in America to have the legal right to be seen and heard. We can do the same in our own extraordinary time in American history, following the example of Alice Paul and Lucy Burns. Celebrate Women’s Equality Day by making someone else feel seen and heard today!
Fir Tree Plant
Fir trees are closely related to other coniferous trees in the family Pinaceae, such as pines and cedars. Knowing the scientific name of true fir trees can help to identify them. Planting a Douglas fir tree: they grow best in areas with cold winters and hot summers in the U.S.
How to plant a fir tree in the garden step by step? Location is the most important factor in planting a healthy fir tree. An excellent specimen plant, or used en masse to create screening. Depending on the species, a fir tree can thrive in cool, moist climates in USDA growing zones 3 to 8.
Native geographic location and habitat: firs possess wide lower branches and develop into more of a downturned shape. It grows 40 to 80 feet high and 15 to 20 feet wide in landscape situations. It makes a great home for wildlife. Fill the hole with loose, moist soil while holding the tree at its proper depth. Some good choices for ground cover plants under trees include:
These trees belong to the genus Abies and are woody in nature. Except for hard-packed clay soil, these trees grow in most types of soil. They prefer cool, moist growing conditions. Native to Seattle, the grand fir is truly grand. Many fir trees have a shallow network of roots and need to be planted in an area that is protected from high winds. Do not plant a fir tree too close to a structure. When planting any type of fir tree, location is important to ensure a healthy tree. Some Abies varieties are perfect for adding height and structure, making a useful backdrop to more showy plants at the rear of a shrubbery. The fir tree has to be planted in an area which is protected from high winds. Cones can be purple, green, or blue, before changing to a golden brown.
Like most cats, I don’t love wet fur. I check a weather app every morning to see if I need an umbrella. But how rain happens was a mystery to me. So, I talked about rain with my friend Nathan Santo Domingo. He’s a field meteorologist with AgWeatherNet of Washington State University. That’s a weather tool for farmers, gardeners and other people in Washington. “The first thing to remember is that Earth’s surface is 71% water,” Santo Domingo said. “We also have a giant orb in the sky—the sun—that’s feeding energy into the atmosphere and reaching down to Earth’s surface.” The sun’s energy changes the water in the oceans, rivers and lakes. The water changes from a liquid to a gas called water vapor. That water vapor floats up into the bubble of gas that surrounds Earth—called the atmosphere. The higher the water vapor floats, the colder the air is. That changes the water vapor back into liquid water. Those drops of liquid water way up high in the atmosphere are incredibly tiny. They’re so light they float. A bunch of tiny drops all floating together is a cloud. Sometimes a cloud floats into a place with low air pressure. Or it bumps into a mountain. The tiny water drops move up and down. They bonk into each other. When two water drops bump together, they merge into a bigger water drop. “Eventually, a water droplet becomes so heavy the air can’t support it,” Santo Domingo said. “It starts to fall to the ground. It hits your head, jacket or umbrella in the form of a raindrop. Or a snowflake if it’s cold.” That rain flows back into the oceans, rivers and lakes. Someday, the sun’s energy will turn it back into water vapor. The journey a water drop makes from Earth’s surface up into the atmosphere and back is called the water cycle. Sometimes you can tell it’s going to rain by looking at the sky. But weather forecasts can tell us if it’s going to rain much farther out than our eyes can. Back in the day, weather scientists used tools like thermometers and barometers to predict rain. Thermometers measure changes in how hot it is. Barometers measure how much pressure there is. That’s how much air is above you, pushing down due to gravity. Weather scientists still use those tools. Now they also use supercomputers to track temperature and air pressure. They measure all the way up and down the atmosphere. They also use math equations about water, air, sunlight, plants and ocean temperatures to make predictions. That’s how weather scientists make accurate forecasts today. That way we can check a weather app and know if we need an umbrella to keep our fur dry.
Reading Workshop for the Secondary Classroom Unit 4 The complete fourth unit of Reading Workshop for the Secondary Classroom. Lessons, handouts, and activities provide you with everything you need to teach the unit. All lessons apply to any book your students may choose to read. All lessons are CCSS-aligned. Unit 4: Part of a Whole—Story Elements Nonfiction gets a bad rap. As we develop a love of reading in our students, let us not forget the importance and place of nonfiction. Extending beyond textbooks, manuals, and newspaper articles, nonfiction satisfies readers’ hunger to understand the world around them in a way that perhaps feels more credible than a fictitious story. Moreover, reading nonfiction develops a sound base of background knowledge in students and prepares them for more complex texts that become the foundation of reading in both higher education and the professional work force. This unit aims to formally introduce students to nonfiction text, equipping them with the skills necessary to do more than just skim, scan, and look for important ideas. By covering text features, text structure, content validity, and more, the unit lays the groundwork on which to build an understanding of nonfiction. - 1. Get the Lay of the Land—Identifying and Using Text Features - 2. Sneak Preview—Activating Prior Knowledge - 3. Digging for the Point—Identifying and Using Text Structure - 4. The Perfect Pick II—Learning How to Choose a Nonfiction Text - 5. Deciphering the Code—Finding the Central Idea and Using Support - 6. Good for the Digestion—Personally Reflecting on Information - 7. Digging for the Truth—Examining the Author - 8. Finding Fakes—Examining Credibility and Reliability - 9. Mark It Up—Learning to Annotate Download the FREE sample lesson located in "Additional Info." ©2019. Middle school, high school. Reproducible. PDF download, 90 pages. Adobe® Reader® required to view. About the Author Leslie Spurrier holds a bachelor's degree in English from Wake Forest University and a master's degree in Reading and Language Arts from Millersville University. She taught middle school English and reading for over seven years and lived to tell the tale. Her favorite literary character of all time is Anne Shirley, which made giving birth to a red-headed little girl all the more glorious. Leslie currently resides in Lititz, PA, with her husband and two children. When she's not voraciously devouring books of all sorts, she creates educational materials for her online business, Story Trekker, and desperately tries to keep her cat Shadow from scratching up her favorite chair.
Radio and Electronics (DED Philippinen, 66 p.)
5. MODULATION OF RADIOWAVES
As we don't have to learn about the circuits for a radio transmitter within this course, we will only describe roughly how it works. For such a rough introductory description it is helpful to use a special kind of diagram. This diagram is called a BLOCKDIAGRAM, and it shows only rectangular blocks, which visualize circuits generally, by announcing their function only. Fig. 31 shows the blockdiagram of a radio transmitter and fig. 32 shows how the signal will look when it leaves the aerial. This course will deal, from now on, mainly with the following questions: HOW TO RECEIVE THIS SIGNAL IN A RADIO-RECEIVER? HOW TO PROCESS THIS SIGNAL UNTIL IT CAN BE HEARD AT THE SPEAKER?
1. Explain how electromagnetic waves are produced!
2. Mention the parameters of electromagnetic waves!
3. Which different waves do you know and what are their special characteristics?
4. Which different bands of radiowaves do you know?
5. Which of these bands is useful for long distance communication?
6. Which of these bands is useful for short distance communication?
7. Give the frequency ranges of the different wavebands!
8. What does the term Fading mean, and what is its effect on reception?
9. Which band is used for communication from spaceships to earth and back?
10. What is the reason why long distance radio communication is not totally reliable?
11. What does the term modulation mean?
12. Which types of modulation do you know?
13. Calculate the % modulation for the shown case in fig. 30!
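Fig. 30 is not reproduced here, but question 13 is normally worked from the envelope of an amplitude-modulated signal: read the maximum and minimum envelope voltages off the figure and apply the standard depth-of-modulation formula. The voltages below are assumed purely for illustration, not taken from fig. 30:

$$
m = \frac{E_{\max} - E_{\min}}{E_{\max} + E_{\min}} \times 100\%
$$

For example, if the figure showed E_max = 30 V and E_min = 10 V, the modulation would be (30 − 10)/(30 + 10) × 100% = 50%.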
Educators and researchers are always looking for ways to improve learning outcomes and keep students engaged with learning. Because of this, the education field is crowded with different theories about learning styles (now learning preferences), teaching styles, and other methods on how students learn. It's important to understand the different types of learning preferences and prevailing theories when building online school and homeschool lessons, and when helping students study effectively to master difficult concepts. While one learning preference or theory won't work for all students, learning about them can still help you identify your own student's strengths and weaknesses. Some have been met with criticism over time, but that's not to say we can't test some of their practices out to find out how our students best prefer to learn and study. They're still popular today for a reason! Here is an overview of some of the popular learning theories and different learning preferences to help every student achieve success.
The Multiple Intelligences Theory
Some researchers believe in the Multiple Intelligences Theory, which claims that people have eight independent ways of processing information:
- Verbal-linguistic: (Word smart)
- Logical-mathematical: (Logic smart)
- Visual-spatial: (Picture smart)
- Auditory-musical: (Music smart)
- Bodily-kinesthetic: (Body smart)
- Interpersonal: (People smart)
- Intrapersonal: (Self smart)
- Naturalistic: (Nature smart)
It's more accurate to think of the eight intelligences as abilities or strengths. The human brain is extremely complex, and all of these types of "smarts" work together. Your student may have several of these strengths.
How to use the Multiple Intelligences Theory
To apply the Multiple Intelligences Theory to online school, teachers and Learning Coaches can use activities based on the intelligences to help students develop all of their learning strengths. Some activities help develop more than one strength at a time, offering a holistic way to support different types of learners. Here are some activities to engage your student:
- Taking photos for the online school yearbook to exercise "picture smarts"
- Making crystals to build "nature smarts"
- Drawing a map to scale to exercise "logic smarts" and "picture smarts"
- Spending 10 minutes writing about one of their best attributes to integrate "word smarts" and "intrapersonal smarts"
Learning preferences focus on how students process information using their senses to absorb and retain what is being taught. While many people may say they have a "learning style," they really have learning preferences, or ways they prefer to have lessons delivered to them. Students actually learn and retain more educational content when it is delivered – or taught – to them in a variety of different ways as opposed to only one way, even if that way is their preferred way to learn. Below are three different types of learning preferences to keep in mind when building online school or homeschool lessons. If your student is struggling with a difficult concept, you can also use these preferences as a guide to find a different way to deliver the learning content to them to help make things click.
- Visual learners: Those who prefer to learn through images, graphs, maps, and drawings - Auditory learners: Students who prefer to learn by hearing and speaking new information - Tactile/kinesthetic learners: The student who prefers to learn by experiencing, touching, and performing tasks With these learning preferences in mind, here are some examples of delivering learning content in ways to reach all types of learners. - Have your student practice counting money by giving them real coins, which is a tactile/kinesthetic learner approach - When helping young readers, point to each word as you read it aloud, which uses both auditory and visual skills - To learn geography, study a map, which is a visual task Physical Activities for Kinesthetic Learners While physical activity is particularly important for kids who have a kinesthetic learning preference, taking breaks and staying active is important for all types of learners! Even if your student doesn’t gravitate toward hands-on or physical activities, you can still incorporate them into their online school routine to clear the mind and relieve stress. Try these learning activities to stimulate the body as well as the brain: - Have your student play multiplication catch or leap for measurement to practice skills through activity - Have your student use a stability ball instead of a chair for short periods of time to improve balance, posture, and upper-body strength - Start a family fitness challenge. Get the whole family involved in the new fitness plan, whether it’s by holding indoor scavenger hunts or creating your own unique activities - Do indoor physical education activities between lessons. Fun physical activities for grades K–12 can keep your student busy all year long, no matter what the weather is like outside Learning Support at Home Whether labeled as learning styles, preferences or intelligences, none of these strengths or abilities are static or fixed. They change as your student grows and matures. These approaches can be useful tools in a teacher’s or Learning Coach’s toolkit to add variety to your student’s online school experience and support learning at home, especially when helping your student become an independent learner. There are lots of benefits to an online school model like Connections Academy. One of them is being able to test out these learning theories for yourself with your student, while enjoying the flexibility of a custom schedule and personalized education model. Check out how an online school model really works, and see if it might be a right fit for you and your family.
Q: What is B-Rhymes?
A: It’s a dictionary that shows you words that have a high degree of consonance, or sound similarity. A normal rhyming dictionary shows you words that fully rhyme, i.e. they are exactly the same in their last 1, 2, or 3 syllables. B-Rhymes, aka slant rhymes, can include sounds that don’t rhyme exactly but still sound similar, so they still sound good together.
Q: Why slant rhymes when you could have full rhymes?
A: Slant rhymes still sound good, and have the advantage of being novel. People yawn at hearing ‘lie’ rhymed with ‘die’ yet again. A few unexpected B-Rhymes sprinkled around surprise people and make them pay attention.
Q: How does B-Rhymes know what almost rhymes?
A: It’s mostly based on the number of phonetic features that are different between two sounds. E.g., ‘t’ and ‘d’ are the same, except that ‘d’ is voiced and ‘t’ isn’t. With only a single difference, ‘t’->’d’ gets a high rating, almost as high as ‘t’->’t’. In a regular rhyming dictionary, ‘t’->’d’ gets the rhyme thrown out. I suggest reading up on linguistics and phonology for more details.
Q: What’s with the name ‘B-Rhymes’?
A: They’re not A-Rhymes, they’re B-Rhymes, like B-celebrities. I guess C-Rhymes are words that don’t rhyme at all, aka Fail Rhymes.
The scores are the total of the ratings of each pair of sounds from the two words. The more syllables that (almost) rhyme, and the closer they almost rhyme, the higher the score. For example, consider B-Rhymes for ‘flexible’: why does ‘lexical’ get a better score than ‘indelible’? Even though its last syllable is only a slant rhyme, flexible->lexical gets a higher score than flexible->indelible (a full rhyme) because it has additional sounds that rhyme earlier in the word.
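The scoring idea described above — summing per-sound ratings, where a pair like ‘t’ and ‘d’ that differs in only one phonetic feature still rates almost as high as an exact match — can be sketched in a few lines of code. The following is an illustrative sketch only, not B-Rhymes’ actual implementation: the feature sets, the 0.25 penalty per differing feature, and the function names are all invented for demonstration.

```python
# Illustrative sketch of slant-rhyme scoring (not B-Rhymes' real algorithm).
# Each phoneme is described by a small set of phonetic features; the fewer
# features two phonemes differ in, the higher their pairwise rating.

FEATURES = {
    # consonants -- toy feature sets for demonstration only
    "t": {"stop", "alveolar"},
    "d": {"stop", "alveolar", "voiced"},
    "k": {"stop", "velar"},
    "b": {"stop", "bilabial", "voiced"},
    "l": {"liquid", "alveolar", "voiced"},
    # vowels
    "ih": {"vowel", "high", "front"},
    "ah": {"vowel", "low", "central"},
}

def pair_rating(a: str, b: str) -> float:
    """Rate two phonemes: 1.0 for identical, less as features diverge."""
    differences = len(FEATURES[a] ^ FEATURES[b])  # symmetric difference = mismatched features
    return max(0.0, 1.0 - 0.25 * differences)

def slant_rhyme_score(word1: list[str], word2: list[str]) -> float:
    """Sum pairwise ratings over the trailing phonemes of the two words."""
    n = min(len(word1), len(word2))
    return sum(pair_rating(p, q) for p, q in zip(word1[-n:], word2[-n:]))

# 't' vs 'd' differ only in voicing, so the pair still rates highly:
print(pair_rating("t", "d"))   # 0.75
print(pair_rating("t", "k"))   # 0.5 (place of articulation differs too)

# Comparing word endings phoneme-by-phoneme (toy transcriptions):
print(slant_rhyme_score(["b", "ih", "t"], ["b", "ih", "d"]))  # "bit" vs "bid" -> 2.75
```

In this toy version, a full rhyme on fewer syllables can still lose to a near rhyme that matches more sounds overall, which is the same intuition behind the flexible/lexical example.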
Campbell Primary School has two preschool units - Campbell Preschool, located on school grounds, and the Allen Main Memorial Preschool at Duntroon. Strong relationships between preschool and primary school staff and students help to create a cohesive, stimulating and supportive environment that caters for individual needs and interests while maximising student learning.
Play and Learning
Children's learning through the medium of play has been examined and researched for many decades, and the role and purposes of play as a learning tool have been examined through theories and perspectives of children's learning (theorists include: Friedrich Froebel, Rudolf Steiner, John Dewey, Maria Montessori, Margaret McMillan, Susan Isaacs, Jean Piaget, Lev Vygotsky).
What do we know about play?
We know that '... play shapes the architecture of the brain in unique ways; it links social, creative and cognitive skills' (Bartlett, 2010).
The United Nations Convention on the Rights of the Child affirms '... play as a fundamental right of all children' (Article 31).
The Early Years Learning Framework (p. 46) defines play-based learning as: A context for learning through which children organise and make sense of their social worlds, as they engage actively with people, objects and representations.
This confirms that play is nationally and internationally valued for its contribution to young children's lives and learning. But it doesn't tell us exactly what is meant by 'play' and what roles educators should fulfil as they interact with children in early learning settings.
Drawing on the research of Dockett and Fleer (1999), Shipley (2008) and Lester and Russell (2008), Dr Lennie Barblett put forward seven basic characteristics of play:
- Voluntary - something children choose to do, but other children can be invited to join in.
- Pleasurable - a deep sense of enjoyment, which will vary from child to child.
- Symbolic - usually includes some type of make-believe or pretend, and objects assume new meanings and purposes for the player/s.
- Meaningful - to the player/s, but the meaning may not always be clear to an adult.
- Active - it requires active mental, verbal or physical engagement with people, objects or ideas.
- Process oriented - it's enjoyed for the activity itself, not concerned with an end product.
- Intrinsically motivated - it is its own reward.
I can’t imagine a series of books about knights in the Middle Ages that didn’t include something about jousting. In The Eldridge Conspiracy (K4), the fourth and final book in my Sir Kaye, the Boy Knight series, an important aspect of the plot involves jousting. So for my second K4 research blog, here are some interesting facts about jousting in the Middle Ages.
What is jousting and how did it get its start?
“Jousting” is derived from Old French joster, ultimately from Latin iuxtare, meaning “to approach, to meet.” And “to meet” is exactly what happens in jousting. Jousting is the sport in which two knights fight on horseback while holding heavy lances, each rider endeavoring to strike his opponent while riding towards him at high speed and, if possible, to break his lance on the opponent’s shield or jousting armor, or to unhorse him. The lance was made of wood with a tip of steel or iron and measured between 9 and 14 feet in length. The participants experienced over three times their body weight in G-forces when the lances collided with their armor.
The beginnings of jousting did not look like what we imagine today. Originally, there was no divider between the two competitors, and the jousters would run straight at each other with their lances. As one could imagine, this head-to-head combat on horseback led to many injuries and fatalities. However, the introduction of the divider created a more controlled battleground. The list was the field or arena where a jousting event was held, and the divider, which was initially just cloth stretched along the center of the field, eventually became a wooden barrier known as the tilt.
Jousting started as a form of weapons training that became popular in the Middle Ages as heavy cavalry (armored men on war horses) became the primary weapon of the time. At first, jousting was simply a way of training knights for battle in a controlled environment. The sport taught new knights horsemanship, accuracy, and how to react in combat. However, what was created as a military training exercise quickly became a popular form of entertainment.
The first recorded jousting tournament was said to be arranged by a Frenchman named Godfrey de Preuilly in 1066, and jousting soon became so popular that the king had to limit how many tournaments could be held, so that not all of the knights would be busy jousting when a real conflict arose.
Jousting tournaments were considered highly formal events, and they were planned and arranged months in advance. After gaining the proper royal permits, nobles would challenge their neighboring landowners, and each would choose their best knights to fight. Sometimes a noble would hire a man to joust who was not a knight committed to his land. These men were called “freelancers,” which is where we get the term today.
By the 14th century, jousting had become very popular with many members of the nobility, including kings. Jousting was a way to showcase their own skill, courage, and talents, and the sport was just as dangerous for a king as for a knight. England’s King Henry VIII suffered a severe injury to his leg when a horse fell on him during a tournament, ending the 44-year-old king’s jousting career and ultimately leaving him with wounds from which he never fully recovered. King Henry II of France was the most famous royal jousting fatality.
During a jousting exhibition to celebrate the marriage of his daughter to the king of Spain in 1559, the king received a fatal wound when a sliver of his opponent’s lance broke off and pierced him in the eye.
Many aspects of jousting tournaments mirror the sports customs we still have today. For instance, medieval heralds worked much like today’s sports journalists, promoting the events and the jousters. Many of the best jousters became very famous, like today’s sports heroes. Jousting became such a popular form of entertainment that jousters would travel around on jousting circuits, fighting each other over and over. Knights did not just compete for fame and bragging rights. They often competed for gifts, money, and possibly even land from a grateful noble.
My next K4 research blog will be about the cog ship, a type of ship used in the Middle Ages. Don’t miss a thing! Follow the Cardboard Box Adventures blog to join in the fun.
Mumps is a communicable, systemic viral illness most often characterized by parotitis. With mumps vaccine now in widespread use in over 114 countries worldwide, the disease has become less common. However, due to waning vaccine immunity over time, some disease continues to occur in sporadic outbreaks. A significant number of mumps infections are asymptomatic.
EPIDEMIOLOGY AND PATHOGENESIS
Humans are the only known natural host of mumps virus, a paramyxovirus closely related to parainfluenza viruses. Mumps is spread by respiratory droplet or through direct contact with saliva. The virus can be isolated from saliva up to 7 days before and through 8 days after parotid swelling. Mumps virus is less contagious than either measles or varicella virus; typically, the incubation period is 16 to 18 days (range, 12–25 days).
A live attenuated mumps virus vaccine was first licensed in the United States in 1967. In the prevaccine era, the incidence of mumps was 50 to 251 cases per 100,000. Following implementation of a single-dose vaccine recommendation, the incidence markedly declined to 2 per 100,000 by 1988. After implementation of a 2-dose recommendation for measles-mumps-rubella vaccine in 1989, mumps incidence declined further to 0.1 case per 100,000 by 1999. By 2005, mumps disease rates had declined 99% compared to the prevaccine era. However, over the past decade, periodic outbreaks of mumps disease have continued to occur, especially in adolescents and young adults, many of whom received either 1 or 2 doses of vaccine as young children. It is estimated that 1 dose of mumps vaccine is 78% (range, 49–92%) effective and 2 doses of vaccine are 88% (range, 66–95%) effective in preventing mumps disease.
In susceptible, unimmunized populations, 60% to 70% of cases of mumps are associated with parotitis. However, up to 33% of mumps infections may go unrecognized, especially in adults, because the cases do not have parotitis. Given the number of subclinical cases, information regarding a patient’s history of mumps infection is notoriously inaccurate.
Following transmission of mumps virus to a susceptible host, the primary site of viral replication is the epithelium of the upper respiratory or gastrointestinal tract or the eye. The virus then quickly spreads to the local lymphoid tissue, and a primary viremia ensues. The parotid gland, central nervous system (CNS), testis or epididymis, pancreas, and/or ovary may be involved. Inflammation in these infected tissues then leads to characteristic symptoms such as parotitis, aseptic meningitis, and/or abdominal pain. A secondary viremia occurs a few days after symptoms of illness begin, indicating viral replication within target organs. Virus can be identified in urine for up to 2 weeks following onset of clinical illness. Lifelong immunity develops in virtually all patients after natural infection, although a second infection has been reported to occur very rarely. A patient with mumps rarely has severe systemic ...
How to Measure EMF: The Art of Knowing What You Are Studying
There are a lot of people concerned about electromagnetic fields (EMF) and electromagnetic radiation (EMR), also referred to here as radio frequency (RF) radiation. People want to know how to measure EMF – so much so that they purchase detectors and meters for themselves and attempt to study their own property and homes. It is great to take responsibility for your own concerns! However, there are likely many things that could be confusing your personal EMF and RF testing assessments.
The electromagnetic spectrum is organized by frequency, running from lower frequency radiation at one end to higher frequency radiation at the other. The properties of electromagnetism change at different frequencies, and electric and magnetic fields behave differently along the spectrum. A material that is transparent to visible light can be opaque to infrared light, yet transparent again to radio frequency radiation (e.g., glass). Chrome is highly reflective at all of these frequencies, but nearly invisible to IR cameras because it reflects much more heat energy than it emits. So how do you measure the temperature of a chromed device if you cannot attach temperature sensors (thermocouples) to it?
Measuring Magnetic Fields
Meters measuring magnetic field strength analyze the flux (the changing properties) of the magnetic field. A meter in motion will not measure this accurately, because measurements will be artificially elevated as you move through a magnetic field. While an accurate quantitative measurement cannot be made while moving, a qualitative measurement can be made to show places where more investigation is recommended. Walking through a building with a magnetic field meter taking real-time measurements is still an effective strategy for finding “hot spots.”
Measuring Radio Frequency (RF) Radiation
Higher than expected radio frequency (RF) power density can be just as confusing. RF can act somewhat like X-rays, penetrating through many materials and reflecting off others. This can produce apparently high power densities in certain areas and make finding the source of that RF radiation more difficult. Using separate RF meters or antennas capable of both omnidirectional and more focused, directional measurements can be useful in RF testing.
If you find areas that raise your concerns, it is best to refer to an expert in the field – much like a medical generalist refers to a specialist. A qualified EMF specialist is trained specifically in the use of high-quality equipment, has area-specific knowledge, and has experience with many different buildings and situations.
EMF Testing Services
If you bring your EMF concerns to Healthy Building Science, it usually makes our job easier by giving us a heads-up as to where to start our investigation and EMF testing. Don’t be surprised if our investigation brings unforeseen conclusions! Often we deliver better than expected news and peace of mind, and no additional measures are recommended or the fix is simpler than you think.
Healthy Building Science is an environmental consulting firm which provides EMF testing, EMF surveys, EMF consultations and RF testing for the greater San Francisco Bay Area and all of Northern California.
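As a rough illustration of the “power density” readings mentioned above, here is a small sketch that converts an electric-field measurement (in V/m) into far-field power density using the standard free-space impedance of roughly 377 Ω. This is a generic textbook relationship shown for context, not a description of any particular meter’s internals, and the example reading is invented.

```python
import math

FREE_SPACE_IMPEDANCE_OHMS = 376.73  # impedance of free space, ~377 ohms

def power_density_from_e_field(e_field_v_per_m: float) -> float:
    """Far-field power density S = E^2 / Z0, returned in watts per square meter."""
    return e_field_v_per_m ** 2 / FREE_SPACE_IMPEDANCE_OHMS

def to_microwatts_per_cm2(s_w_per_m2: float) -> float:
    """Convert W/m^2 to the uW/cm^2 units many RF meters display."""
    return s_w_per_m2 * 1e6 / 1e4   # 1 m^2 = 10^4 cm^2

# Example (invented reading): a 1.5 V/m field measured near a wireless router
e = 1.5
s = power_density_from_e_field(e)
print(f"{s:.4f} W/m^2 = {to_microwatts_per_cm2(s):.2f} uW/cm^2")
# -> roughly 0.0060 W/m^2, i.e. about 0.60 uW/cm^2
```

Note that this plane-wave conversion only holds in the far field of a source; close to an antenna, electric and magnetic fields must be measured separately, which is one more reason professional surveys use multiple instruments.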
Environmental Testing Services at HBS
- Air Quality Testing
- Water Quality Testing
- Soil Testing
- Asbestos Testing
- Lead Testing
- Mold Testing
- RF Testing – EMF Testing
- LEED IAQ Testing
- Silica Air Testing (OSHA)
- Compliance Testing USP 797
- WELL Building Verification Testing
- Environmental Testing
- Industrial Hygiene and Compliance
- Cleaning, Verification & Coronavirus Testing
Let’s start with a little background. Our eyes see reflected light. Daylight cameras, night vision devices, and the human eye all work on the same basic principle: visible light energy hits something and bounces off it, and a detector then receives it and turns it into an image. Whether in an eyeball or in a camera, these detectors must receive enough light or they can’t make an image. Obviously, there isn’t any sunlight to bounce off anything at night, so they’re limited to the light provided by starlight, moonlight and artificial lights. If there isn’t enough, they won’t do much to help you see.
Thermal Imaging Cameras
Thermal imagers are altogether different. In fact, we call them “cameras” but they are really sensors. To understand how they work, the first thing you have to do is forget everything you thought you knew about how cameras make pictures. FLIRs make pictures from heat, not visible light. Heat (also called infrared, or thermal, energy) and light are both parts of the electromagnetic spectrum, but a camera that can detect visible light won’t see thermal energy, and vice versa. Thermal cameras detect more than just heat, though; they detect tiny differences in heat – as small as 0.01°C – and display them as shades of grey or with different colors. This can be a tricky idea to get across, and many people just don’t understand the concept, so we’ll spend a little time explaining it.
Everything we encounter in our day-to-day lives gives off thermal energy, even ice. The hotter something is, the more thermal energy it emits. This emitted thermal energy is called a “heat signature.” When two objects next to one another have even subtly different heat signatures, they show up quite clearly to a FLIR regardless of lighting conditions.
Thermal energy comes from a combination of sources, depending on what you are viewing at the time. Some things – warm-blooded animals (including people!), engines, and machinery, for example – create their own heat, either biologically or mechanically. Other things – land, rocks, buoys, vegetation – absorb heat from the sun during the day and radiate it off during the night. Because different materials absorb and radiate thermal energy at different rates, an area that we think of as being one temperature is actually a mosaic of subtly different temperatures. This is why a log that’s been in the water for days on end will appear to be a different temperature than the water, and is therefore visible to a thermal imager. FLIRs detect these temperature differences and translate them into image detail.
While all this can seem rather complex, the reality is that modern thermal cameras are extremely easy to use. Their imagery is clear and easy to understand, requiring no training or interpretation. If you can watch TV, you can use a FLIR thermal camera.
Night Vision Devices
Those greenish pictures we see in the movies and on TV come from night vision goggles (NVGs) or other devices that use the same core technologies. NVGs take in small amounts of visible light, amplify it greatly, and project that on a display. Cameras made from NVG technology have the same limitations as the naked eye: if there isn’t enough visible light available, they can’t see well. The imaging performance of anything that relies on reflected light is limited by the amount and strength of the light being reflected.
NVG and other lowlight cameras are not very useful during twilight hours, when there is too much light for them to work effectively, but not enough light for you to see with the naked eye. Thermal cameras aren’t affected by visible light, so they can give you clear pictures even when you are looking into the setting sun. In fact, you can aim a spotlight at a FLIR and still get a perfect picture.
Infrared Illuminated (I2) Cameras
I2 cameras try to generate their own reflected light by projecting a beam of near-infrared energy that their imager can see when it bounces off an object. This works to a point, but I2 cameras still rely on reflected light to make an image, so they have the same limitations as any other night vision camera that depends on reflected light energy: short range and poor contrast.
All of these visible light cameras – daylight cameras, NVG cameras, and I2 cameras – work by detecting reflected light energy. But the amount of reflected light they receive is not the only factor that determines whether or not you’ll be able to see with these cameras: image contrast matters, too. If you’re looking at something with lots of contrast compared to its surroundings, you’ll have a better chance of seeing it with a visible light camera. If it doesn’t have good contrast, you won’t see it well, no matter how brightly the sun is shining. A white object seen against a dark background has lots of contrast. A darker object, however, will be hard for these cameras to see against a dark background. This is called having poor contrast. At night, when the lack of visible light naturally decreases image contrast, visible light camera performance suffers even more.
Thermal imagers don’t have any of these shortcomings. First, they have nothing to do with reflected light energy: they see heat. Everything you see in normal daily life has a heat signature. This is why you have a much better chance of seeing something at night with a thermal imager than you do with a visible light camera, even a night vision camera. In fact, many of the objects you could be looking for, like people, generate their own contrast because they generate their own heat. Thermal imagers can see them well because they don’t just make pictures from heat; they make pictures from the minute differences in heat between objects.
Night vision devices have the same drawbacks that daylight and lowlight TV cameras do: they need enough light and enough contrast to create usable images. Thermal imagers, on the other hand, see clearly day and night, while creating their own contrast. Without a doubt, thermal cameras are the best 24-hour imaging option.
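To make the earlier point about displaying tiny heat differences as shades of grey more concrete, here is a hedged sketch – not any vendor’s actual processing pipeline – that contrast-stretches a small grid of temperature readings into 8-bit grayscale values, so that even a few hundredths of a degree of difference becomes a visible brightness step.

```python
# Illustrative only: map a grid of temperatures (deg C) to 0-255 grayscale values.
# Real thermal cameras use calibrated sensor data and more sophisticated scaling,
# but the basic idea of stretching small differences into visible contrast is the same.

def to_grayscale(temps: list[list[float]]) -> list[list[int]]:
    flat = [t for row in temps for t in row]
    t_min, t_max = min(flat), max(flat)
    span = (t_max - t_min) or 1e-9          # avoid division by zero on a perfectly flat scene
    return [[round(255 * (t - t_min) / span) for t in row] for row in temps]

# A toy "scene": water at ~12.00 deg C with a floating log a few hundredths of a degree warmer
scene = [
    [12.00, 12.01, 12.00],
    [12.01, 12.07, 12.02],   # the log
    [12.00, 12.01, 12.00],
]

for row in to_grayscale(scene):
    print(row)
# The 0.07 deg C difference is stretched across the full brightness range,
# so the slightly warmer object stands out clearly in the rendered image.
```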
Fever, also called pyrexia, is an abnormally high body temperature. Fever is a characteristic of many different diseases. For example, although most often associated with infection, fever is also observed in other pathologic states, such as cancer, coronary artery occlusion, and certain disorders of the blood. It also may result from physiological stresses, such as strenuous exercise or ovulation, or from environmentally induced heat exhaustion or heat stroke.
Under normal conditions, the temperature of deeper portions of the head and trunk does not vary by more than 1–2 °F in a day, and it does not exceed 99 °F (37.22 °C) in the mouth or 99.6 °F (37.55 °C) in the rectum. Fever can be defined as any elevation of body temperature above the normal level. Persons with fever may experience daily fluctuations of 5–9 °F above normal; peak levels tend to occur in the late afternoon. Mild or moderate states of fever (up to 105 °F [40.55 °C]) cause weakness or exhaustion but are not in themselves a serious threat to health. More serious fevers, in which body temperature rises to 108 °F (42.22 °C) or more, can result in convulsions and death.
During fever the blood and urine volumes become reduced as a result of loss of water through increased perspiration. Body protein is rapidly broken down, leading to increased excretion of nitrogenous products in the urine. When the body temperature is rising rapidly, the affected person may feel chilly or even have a shaking chill; conversely, when the temperature is declining rapidly, the person may feel warm and have flushed, moist skin.
In treating fever, it is important to determine the underlying cause of the condition. In general, in the case of infection, low-grade fevers may be best left untreated in order to allow the body to fight off infectious microorganisms on its own. However, higher fevers may be treated with acetaminophen or ibuprofen, which exert their effect on the temperature-regulating areas of the brain.
The mechanism of fever appears to be a defensive reaction by the body against infectious disease. When bacteria or viruses invade the body and cause tissue injury, one of the immune system’s responses is to produce pyrogens. These chemicals are carried by the blood to the brain, where they disturb the functioning of the hypothalamus, the part of the brain that regulates body temperature. The pyrogens inhibit heat-sensing neurons and excite cold-sensing ones, and the altering of these temperature sensors deceives the hypothalamus into thinking the body is cooler than it actually is. In response, the hypothalamus raises the body’s temperature above the normal range, thereby causing a fever. The above-normal temperatures are thought to help defend against microbial invasion because they stimulate the motion, activity, and multiplication of white blood cells and increase the production of antibodies. At the same time, elevated heat levels may directly kill or inhibit the growth of some bacteria and viruses that can tolerate only a narrow temperature range.
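As a quick arithmetic check, the standard Fahrenheit-to-Celsius conversion reproduces the paired temperature values quoted above (to within rounding); the short snippet below is purely illustrative and not part of the original article.

```python
def f_to_c(deg_f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (deg_f - 32) * 5 / 9

for f in (99, 99.6, 105, 108):
    print(f"{f} °F = {f_to_c(f):.2f} °C")
# 99 °F = 37.22 °C, 99.6 °F = 37.56 °C, 105 °F = 40.56 °C, 108 °F = 42.22 °C
```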
The term “myelodysplasia” comes from the Greek roots “myelo-”, referring to the bone marrow, and “dysplasia”, meaning abnormal development or growth. Myelodysplastic syndromes (MDS) are a heterogeneous group of blood disorders in which bone marrow cells fail to mature into fully formed, healthy cells and instead stop at an immature stage. The bone marrow therefore fails to produce enough healthy red blood cells, white blood cells or platelets. The cells that remain at an immature stage are called “blasts”.
The incidence of this disease in Europe is about 8 people per 100,000 inhabitants. It is a disease that mainly affects the elderly: among people over 70, the incidence rises to about 35 per 100,000 inhabitants. Most experts agree that myelodysplastic syndromes are a type of blood and bone marrow cancer that can occur in either a chronic or an aggressive form. It is not a hereditary or contagious pathology.
Myelodysplastic syndromes are defined as “clonal” diseases because the disease develops from a single cell that, escaping the normal control mechanisms, multiplies and produces cells that are altered in form and function and that inherit the “defect” from the progenitor cell.
The symptoms and the course of the disease vary considerably from patient to patient, depending on the type of blood cell affected: there may be anemia (caused by a reduction in red blood cells), neutropenia (a decrease in neutrophils) and/or thrombocytopenia (a reduction in platelets). In the most serious cases, MDS can evolve into acute myeloid leukemia.
Approximately 3,000 species of snakes inhabit divergent habitats on every continent except Antarctica. With their silent, slithering locomotion and volatile defense reactions, snakes generate somewhat disproportionate levels of fear in many people. However, most snakes prefer to avoid human interaction, content to focus on their own lives -- basking in the sun, digesting a nice meal and, of course, bringing more snakes into the world.
The advent of sexual maturity in snakes depends as much on their size, nutrition and overall health as it does on age. On average, most healthy, well-fed snakes reach sexual maturity between 2 and 3 years old. There are exceptions; for example, Burmese pythons typically reach sexual maturity between 4 and 5 years of age, and black rat snakes aren't ready for parenthood until they are at least 7 years old, with some females waiting as long as 10 years to reproduce.
Types of Reproduction
Snakes produce offspring in three ways. Oviparous snakes lay between 2 and 50 eggs in a clutch. Females incubate their eggs either by burying them or by wrapping their bodies around the clutch to keep them warm. Viviparous snakes give birth to live young. Ovoviviparous females don't expel their eggs; their eggs hatch inside their bodies and they give birth to live young.
Most snakes have a specific breeding season that's activated by environmental factors, such as temperature, sunlight, food availability and rainfall. Laying eggs and carrying live young is physically taxing, so only females in good health with energy reserves are willing to reproduce. Snakes inhabiting cooler environments typically mate soon after emerging from hibernation in early spring, ensuring that their young are born during the warm summer months. However, for snakes inhabiting tropical regions, mating can occur year-round.
During the breeding season, male snakes become aggressive and competitive as they search for mates. They fight with each other in grand shows of physical strength and dominance in order to gain the attention of females. In the end, however, the decision to mate is entirely up to the female snake. She may take just a few minutes to decide, or she may keep her suitors hanging for several days before her decision is reached.
Not ones for sentiment, male and female snakes part company once mating has occurred. Males are typically focused on finding other females, and besides, females can become agitated and aggressive if the males hang around after mating. Oviparous females find a safe place to hide their eggs and incubate them, but that's the extent of their maternal care. Snakelets are left to hatch and fend for themselves, as are snakelets birthed by viviparous and ovoviviparous females. Newborn snakelets are fully formed and prepared to venture out on their own with no parental guidance.
According to a study conducted by scientists at the University of Tulsa in Oklahoma, snakes are capable of asexual reproduction, or parthenogenesis. Parthenogenesis was previously thought to be a phenomenon specific to captive animals, but the study revealed that wild female snakes, specifically copperheads (Agkistrodon contortrix) and cottonmouths (Agkistrodon piscivorus), can produce healthy offspring without mating.
Written by Tyler Wilson.
Dr. Ethan Elliott, who graduated from St. Mary’s College of Maryland (SMCM) in 2006, received an award from NASA in October 2019 for his work in NASA’s Cold Atom Lab and for being part of the team that generated, according to insideSMCM, “the first Bose-Einstein Condensate in Earth orbit.” The creation of the Bose-Einstein Condensate (BEC) in a dilute atomic gas, something even Einstein did not think would be possible, is not only a triumph of the human mind but also a product of the 21st-century technologies that scientists now have at their disposal.
In introductory physics, students are taught about the three main states of matter: liquid, solid and gas. While these states of matter each have their own unique properties, the atoms that make up each of them will collide and then go their separate ways if they run into each other. The atoms are showing particle behavior by bouncing off each other, which is how atoms normally behave. According to Elliott, atomic BECs are “massive, neutral atoms cooled so close to absolute zero that they become a collection of large matter waves.” If atoms are cooled to an extreme degree, as in a BEC, they exhibit more “wavelike behavior,” meaning that instead of bouncing off each other, they are capable of “passing through each other.” This is because cooling particles lowers their momentum, which in turn increases their wavelike behavior. Naturally low momentum occurs for particles with low mass, such as electrons, but through their work, Elliott and his team were able to “very unnaturally” manipulate atoms, which are “many thousands times more massive than electrons,” into exhibiting that wavelike behavior.
In order to actually generate the BEC, Elliott and his team created a lab on the International Space Station that turned on when all the astronauts were asleep. This is because “the most advanced cooling techniques are improved by microgravity,” so a free-falling BEC exists for longer periods of time in space. Setting up the space lab was a long and arduous process that required patience and ruggedizing advanced technology. It functions by releasing rubidium and potassium atoms into a vacuum chamber; the atoms are precooled with lasers and loaded into a magnetic trap, where they evaporate down to a billionth of a degree above absolute zero. A final laser pulse casts a shadow of the BEC on an onboard camera and sends a picture back to NASA’s Jet Propulsion Laboratory. This entire process would not have been possible without decades of advances in modern experimental physics, which amount to an amazing set of tools to deploy in space. The discovery of BECs has many fascinating applications.
First of all, as Elliott explains, “everything around us is made of atoms,” so we can study these atoms “for their intrinsic properties and fundamental science.” Besides learning about fundamental science, ultracold atoms can be used to model systems that we don’t have access to, such as the “interior of a neutron star” or “the quark gluon plasma of the big bang.” Lastly, since BECs have mass, they can be “extremely sensitive probes of inertial forces: rotations, accelerations, or gravity,” which is needed because, as Elliott says, understanding gravitational effects can offer insights into many poorly understood areas of physics, such as the nature of “dark matter” or “dark energy.”
Elliott said his education at SMCM is “directly responsible for preparing [him] to do this work.” He worked at the PAX River Naval Air Station with SMCM physics professor Dr. Charles Adler, and there he “first started working with ultracold atoms”; the Navy was interested in using “the inertial sensing properties of ultracold atoms to create a new generation of gyroscopes for dead reckoning navigation in GPS denied environments.” Elliott claims that summer research experience, such as the experience he received at SMCM, is “the single biggest determiner in graduate school admission when applying to a PhD program in a scientific field.”
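To give a rough quantitative sense of why atoms must be cooled to nanokelvin temperatures before their wavelike behavior takes over, here is a short sketch computing the thermal de Broglie wavelength of rubidium-87 at a few temperatures. This is a standard textbook estimate offered purely as an illustration – it is not code from the Cold Atom Lab – and the condensation criterion quoted in the comments assumes an idealized uniform gas.

```python
import math

H = 6.62607015e-34                     # Planck constant, J*s
KB = 1.380649e-23                      # Boltzmann constant, J/K
M_RB87 = 86.909 * 1.66053906660e-27    # mass of a rubidium-87 atom, kg

def thermal_de_broglie_wavelength(temp_kelvin: float) -> float:
    """lambda = h / sqrt(2 * pi * m * kB * T), in meters."""
    return H / math.sqrt(2 * math.pi * M_RB87 * KB * temp_kelvin)

for label, t in [("room temperature", 300.0),
                 ("laser-cooled", 100e-6),     # ~100 microkelvin
                 ("near-BEC", 100e-9)]:        # ~100 nanokelvin
    print(f"{label:>16}: {thermal_de_broglie_wavelength(t) * 1e9:.3f} nm")

# At room temperature the wavelength is a tiny fraction of a nanometer, far
# smaller than the spacing between atoms in a dilute gas. Near 100 nK it grows
# to hundreds of nanometers, comparable to the interatomic spacing, which is
# (roughly) the regime where Bose-Einstein condensation sets in: the ideal-gas
# criterion is n * lambda^3 ≈ 2.612, with n the number density of atoms.
```

The steep growth of the wavelength as temperature drops is the quantitative version of the article’s statement that cooling lowers an atom’s momentum and thereby increases its wavelike behavior.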