An explosion is a rapid increase in volume and release of energy in an extreme manner, usually with the generation of high temperatures and the release of gases. Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves. Subsonic explosions are created by low explosives through a slower burning process known as deflagration. When caused by a man-made device such as an exploding rocket or firework, the audio component of an explosion is referred to as its "report" (which can also be used as a verb, e.g., "the rocket reported loudly upon impact").

Causes

Explosions can occur in nature. Most natural explosions arise from volcanic processes of various sorts. Explosive volcanic eruptions occur when magma rising from below has much dissolved gas in it; the reduction of pressure as the magma rises causes the gas to bubble out of solution, resulting in a rapid increase in volume. Explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions (also due to volcanic processes). Explosions can also occur outside of Earth, in events such as supernovae. Explosions frequently occur during bushfires in eucalyptus forests, where the volatile oils in the tree tops suddenly combust.

Among the largest known explosions in the universe are supernovae, which result when a star explodes from the sudden starting or stopping of nuclear fusion, and gamma-ray bursts, whose nature is still in some dispute. Solar flares are an example of explosions common on the Sun, and presumably on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of large astronomical explosion occurs when a very large meteoroid or an asteroid impacts the surface of another object, such as a planet.

The most common artificial explosives are chemical explosives, usually involving a rapid and violent oxidation reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be discovered and put to use. Other notable early developments in chemical explosive technology were Frederick Augustus Abel's development of nitrocellulose in 1865 and Alfred Nobel's invention of dynamite in 1866. Chemical explosions (both intentional and accidental) are often initiated by an electric spark or flame. Accidental explosions may occur in fuel tanks, rocket engines, etc.

Electrical and magnetic

A high-current electrical fault can create an electrical explosion by forming a high-energy electrical arc which rapidly vaporizes metal and insulation material. This arc flash hazard is a danger to persons working on energized switchgear. Also, excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion.

Mechanical and vapor

Strictly a physical process, as opposed to a chemical or nuclear one, the bursting of a sealed or partially sealed container under internal pressure is often referred to as a 'mechanical explosion'. Examples include an overheated boiler or a simple tin can of beans tossed into a fire. Boiling liquid expanding vapor explosions are one type of mechanical explosion that can occur when a vessel containing a pressurized liquid is ruptured, causing a rapid increase in volume as the liquid evaporates.
Note that the contents of the container may cause a subsequent chemical explosion, the effects of which can be dramatically more serious, such as a propane tank in the midst of a fire. In such a case, the effects of the explosion of the released propane (initially liquid and then almost instantaneously gaseous) in the presence of an ignition source are added to the effects of the mechanical explosion when the tank fails. For this reason, emergency workers often differentiate between the two events.

In addition to stellar nuclear explosions, a man-made nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is significantly more powerful than the largest conventional explosives available, with a single weapon capable of completely destroying an entire city.

Properties of explosions

Explosive force is released in a direction perpendicular to the surface of the explosive. If the surface is cut or shaped, the explosive forces can be focused to produce a greater local effect; this is known as a shaped charge.

The speed of the reaction is what distinguishes an explosive reaction from an ordinary combustion reaction. Unless the reaction occurs rapidly, the thermally expanded gases will be dissipated in the medium, and there will be no explosion. Again, consider a wood or coal fire. As the fire burns, there is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to cause an explosion. This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once.

Evolution of heat

The generation of heat in large quantities accompanies most explosive chemical reactions. The exceptions are called entropic explosives and include organic peroxides such as acetone peroxide. It is the rapid liberation of heat that causes the gaseous products of most explosive reactions to expand and generate high pressures. This rapid generation of high pressures of the released gas constitutes the explosion. The liberation of heat with insufficient rapidity will not cause an explosion. For example, although a unit mass of coal yields five times as much heat as a unit mass of nitroglycerin, the coal cannot be used as an explosive because the rate at which it yields this heat is quite slow. In fact, a substance which burns less rapidly (i.e., slow combustion) may actually evolve more total heat than an explosive which detonates rapidly (i.e., fast combustion). In the former, slow combustion converts more of the internal energy (i.e., chemical potential) of the burning substance into heat released to the surroundings, while in the latter, fast combustion (i.e., detonation) instead converts more internal energy into work on the surroundings (i.e., less internal energy is converted into heat); cf. heat and work (thermodynamics), which are equivalent forms of energy. See heat of combustion for a more thorough treatment of this topic.

When a chemical compound is formed from its constituents, heat may either be absorbed or released. The quantity of heat absorbed or given off during transformation is called the heat of formation.
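As a point of reference (standard thermochemistry rather than anything stated in this article), these heats of formation determine the net heat of a chemical reaction through Hess's law:

$$Q_{\text{reaction}} = \sum Q_{f}(\text{products}) - \sum Q_{f}(\text{reactants})$$

In the sign convention described below, where a negative heat of formation means heat was absorbed, a positive value for the overall reaction indicates a net liberation of heat, which is the case of interest for explosives.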
Heats of formation for solids and gases found in explosive reactions have been determined for a temperature of 25 °C and atmospheric pressure, and are normally given in units of kilojoules per gram-molecule. A negative value indicates that heat is absorbed during the formation of the compound from its elements; such a reaction is called an endothermic reaction. In explosive technology, only materials that are exothermic—that have a net liberation of heat—are of interest. Reaction heat is measured under conditions of either constant pressure or constant volume. It is this heat of reaction that may properly be expressed as the "heat of explosion."

Initiation of reaction

A chemical explosive is a compound or mixture which, upon the application of heat or shock, decomposes or rearranges with extreme rapidity, yielding much gas and heat. Many substances not ordinarily classed as explosives may do one, or even two, of these things. A reaction must be capable of being initiated by the application of shock, heat, or a catalyst (in the case of some explosive chemical reactions) to a small portion of the mass of the explosive material. A material in which the first three factors (formation of gases, evolution of heat, and rapidity of reaction) exist cannot be accepted as an explosive unless the reaction can be made to occur when needed.

Fragmentation is the accumulation and projection of particles as the result of a high-explosive detonation. Fragments could be part of a structure such as a magazine. High-velocity, low-angle fragments can travel hundreds or thousands of feet with enough energy to initiate other surrounding high-explosive items, injure or kill personnel and damage vehicles or structures.

Notable explosions

- Nanaimo mine explosion, 1887
- Halifax Explosion, 1917
- Battle of Messines, 1917
- Oppau explosion, Ludwigshafen, Germany, 1921
- Bombay Explosion, 1944
- Port Chicago disaster, 1944
- RAF Fauld explosion, 1944
- Cádiz Explosion, 1947
- Texas City Disaster, 1947
- Nedelin catastrophe, 1960
- Soviet N1 rocket explosion, 1969
- Flixborough disaster, 1974
- PEPCON disaster, Henderson, Nevada, 1988
- AZF (factory), Toulouse, France, 2001
- Ryongchon disaster, 2004
- Hertfordshire Oil Storage Terminal fire, 2005
- Albania explosion, Gerdec, 2008
- Cataño oil refinery fire, 2009

Use in war

- Artillery, mortars, and cannons
- Gunpowder and smokeless powder as a propellant in firearms and artillery
- Missiles, rockets, and torpedoes
- Atomic bombings of Hiroshima and Nagasaki
- Land mines, naval mines, and IEDs
- Satchel charges and sapping
- Hand grenades

Volcanic explosions

- Mount St. Helens
- Mount Tambora
- Mount Pinatubo
- Toba catastrophe theory
- Yellowstone Caldera

See also

- Dust explosion
- Explosion protection
- Explosive limit
- Fuel tank explosion
- Implosion (mechanical process)
- Internal combustion engine
- List of unexplained explosion events
- Mushroom cloud
- Piston engine
- Standards for electrical equipment in potentially explosive environments
- Underwater explosion

References

- Kissane, Karen (2009-05-22). "Fire power equalled 1500 atomic bombs". The Age (Melbourne).
- Dubnikova, Faina; Kosloff, Ronnie; Almog, Joseph; Zeiri, Yehuda; Boese, Roland; Itzhaky, Harel; Alt, Aaron; Keinan, Ehud (2005-02-01). "Decomposition of Triacetone Triperoxide Is an Entropic Explosion". Journal of the American Chemical Society 127 (4): 1146–1159. doi:10.1021/ja0464903. PMID 15669854.
Why Africana History? by Dr. John Henrik Clarke

Africa and its people are the most written about and the least understood of all of the world's people. This condition started in the 15th and the 16th centuries with the beginning of the slave trade system. The Europeans not only colonialized most of the world, they began to colonialize information about the world and its people. In order to do this, they had to forget, or pretend to forget, all they had previously known about the Africans. They were not meeting them for the first time; there had been another meeting during Greek and Roman times. At that time they complemented each other. The African, Clitus Niger, King of Bactria, was also a cavalry commander for Alexander the Great. Most of the Greeks' thinking was influenced by this contact with the Africans.

The people and the cultures of what is known as Africa are older than the word "Africa." According to most records, old and new, Africans are the oldest people on the face of the earth. The people now called Africans not only influenced the Greeks and the Romans, they influenced the early world before there was a place called Europe. When the early Europeans first met Africans, at the crossroads of history, it was a respectful meeting and the Africans were not slaves. Their nations were old before Europe was born. In this period of history, what was to be later known as "Africa" was an unknown place to the people who would someday be called "Europeans." Only the people of some of the Mediterranean Islands and a few states of what would become the Greek and Roman areas knew of parts of North Africa, and that was a land of mystery.

After the rise and decline of Greek civilization and the Roman destruction of the city of Carthage, they made the conquered territories into a province which they called Africa, a word derived from "afri," the name of a group of people about whom little is known. At first the word applied only to the Roman colonies in North Africa. There was a time when all dark-skinned people were called Ethiopians, for the Greeks referred to Africa as "The Land Of The Burnt-Face People."

If Africa, in general, is a man-made mystery, Egypt, in particular, is a bigger one. There has long been an attempt on the part of some European "scholars" to deny that Egypt was a part of Africa. To do this they had to ignore the great masterpieces on Egyptian history written by European writers, such as Ancient Egypt, Light of the World, Vols. I & II, and a whole school of European thought that placed Egypt in proper focus in relationship to the rest of Africa.

The distorters of African history also had to ignore the fact that the people of the ancient land which would later be called Egypt never called their country by that name. It was called Ta-Merry or Kampt and sometimes Kemet or Sais. The ancient Hebrews called it Mizrain. The Moslem Arabs later used the same term but eventually discarded it. Both the Greeks and the Romans referred to the country as the "Pearl Of The Nile." The Greeks gave it the simple name Aegyptus. Thus the word we know as Egypt is of Greek origin.

Until recent times most Western scholars have been reluctant to call attention to the fact that the Nile River is 4,000 miles long. It starts in the south, in the heart of Africa, and flows to the north. It was the world's first cultural highway. Thus Egypt was a composite of many African cultures.
In his article, "The Lost Pharaohs of Nubia," Professor Bruce Williams infers that the nations in the South could be older than Egypt. This information is not new. When rebel European scholars were saying this 100 years ago, and proving it, they were not taken seriously. It is unfortunate that so much of the history of Africa has been written by conquerors, foreigners, missionaries and adventurers. The Egyptians left the best record of their history written by local writers. It was not until near the end of the 18th century, when a few European scholars learned to decipher their writing, that this was understood.

The Greek traveler Herodotus was in Africa about 450 B.C. His eyewitness account is still a revelation. He witnessed African civilization in decline and partly in ruins, after many invasions. However, he could still see the indications of the greatness that it had been. In this period in history, the Nile Valley civilization of Africa had already brought forth two "Golden Ages" of achievement and had left its mark for all the world to see.

Slavery and colonialism strained, but did not completely break, the cultural umbilical cord between the Africans in Africa and those who, by forced migration, now live in what is called the Western World. A small group of African-American and Caribbean writers, teachers and preachers collectively developed the basis of what would be an African Consciousness movement over 100 years ago. Their concern was with Africa in general, Egypt and Ethiopia, and what we now call the Nile Valley.

In approaching this subject, I have given preference to writers of African descent who are generally neglected. I maintain that the African is the final authority on Africa. In this regard I have reconsidered the writings of W.E.B. DuBois, George Washington Williams, Drusilla Dungee Houston, Carter G. Woodson, Willis N. Huggins, and his most outstanding living student, John G. Jackson. I have also re-read the manuscripts of some of the unpublished books of Charles C. Seifert, especially manuscripts of his last completed book, Who Are The Ethiopians? Among Caribbean scholars, like Charles C. Seifert, J.A. Rogers (from Jamaica) is the best known and the most prolific. Over 50 years of his life were devoted to documenting the role of African personalities in world history. His two-volume work, World's Great Men of Color, is a pioneer work in the field.

Among the present-day scholars writing about African history, culture and politics, Dr. Yosef ben-Jochannan's books are the most challenging. I have drawn heavily on his research in the preparation of this article. He belongs to the main cultural branch of the African world, having been born in Ethiopia, growing to early manhood in the Caribbean Islands and having lived in the African-American community of the United States for over 20 years. His major books on African history are: Black Man of the Nile, 1979, Africa: Mother of Western Civilization, 1976, and The African Origins of Major Western Religions, 1970.

Our own great historian, W.E.B. DuBois, tells us, "Always Africa is giving us something new . . . On its black bosom arose one of the earliest, if not the earliest, of self-protecting civilizations, and grew so mightily that it still furnishes superlatives to thinking and speaking men. Out of its darker and more remote forest vastness came, if we may credit many recent scientists, the first welding of iron, and we know that agriculture and trade flourished there when Europe was a wilderness."
Dr. DuBois tells us further that "Nearly every human empire that has arisen in the world, material and spiritual, has found some of its greatest crises on this continent of Africa. It was through Africa that Christianity became the religion of the world . . . It was through Africa that Islam came to play its great role of conqueror and civilizer."

Egypt and the nations of the Nile Valley were, figuratively, the beating heart of Africa and the incubator for its greatness for more than a thousand years. Egypt gave birth to what later would become known as "Western Civilization," long before the greatness of Greece and Rome. This is a part of the African story, and in the distance it is a part of the African-American story. It is difficult for depressed African-Americans to know that they are a part of the larger story of the history of the world. The history of the modern world was made, in the main, by what was taken from African people. Europeans emerged from what they call their "Middle Ages" people-poor, land-poor and resources-poor. And to a great extent, culture-poor. They raided and raped the cultures of the world, mostly Africa, and filled their homes and museums with treasures, then they called the people primitive. The Europeans did not understand the cultures of non-Western people then; they do not understand them now.

History, I have often said, is a clock that people use to tell their political time of day. It is also a compass that people use to find themselves on the map of human geography. History tells a people where they have been and what they have been. It also tells a people where they are and what they are. Most importantly, history tells a people where they still must go and what they still must be. There is no way to go directly to the history of African-Americans without taking a broader view of African world history.

In his book, Tom-Tom, the writer John W. Vandercook makes this meaningful statement: "A race is like a man. Until it uses its own talents, takes pride in its own history, and loves its own memories, it can never fulfill itself completely." This, in essence, is what African-American history and what African-American History Month is about.

The phrase African-American or African-American History Month, taken at face value and without serious thought, appears to be incongruous. Why is there a need for an African-American History Month when there is no similar month for the other minority groups in the United States? The history of the United States, in total, consists of the collective histories of minority groups. What we call 'American civilization' is no more than the sum of their contributions. The African-Americans are the least integrated and the most neglected of these groups in the historical interpretation of the American experience. This neglect has made African-American History Month a necessity.

Most of the large ethnic groups in the United States have had, and still have, their historical associations. Some of these associations predate the founding of the Association For The Study of Negro Life and History (1915). Dr. Charles H. Wesley tells us that "Historical societies were organized in the United States with the special purpose in view of preserving and maintaining the heritage of the American nation." Within the framework of these historical societies, many ethnic groups, Black as well as white, engaged in those endeavors that would keep alive their beliefs in themselves and their past as a part of their hopes for the future.
For African-Americans, Carter G. Woodson led the way and used what was then called Negro History Week to call attention to his people's contribution to every aspect of world history. Dr. Woodson, then Director of the Association For the Study of Negro Life and History, conceived this special week as a time when public attention should be focused on the achievements of America's citizens of African descent.

The acceptance of the facts of African-American history and the African-American historian as a legitimate part of the academic community did not come easily. Slavery ended and left its false images of Black people intact. In his article, "What the Historian Owes the Negro," the noted African-American historian Dr. Benjamin Quarles says: "The Founding Fathers, revered by historians for over a century and a half, did not conceive of the Negro as part of the body of politics. Theoretically, these men found it hard to imagine a society where Negroes were of equal status to whites." Thomas Jefferson, third President of the United States, who was far more liberal than the run of his contemporaries, was nevertheless certain that "the two races, equally free, cannot live in the same government."

I have been referring to the African origin of African-American literature and history. This preface is essential to every meaningful discussion of the role of the African-American in every aspect of American life, past and present. I want to make it clear that the Black race did not come to the United States culturally empty-handed.

The role and importance of ethnic history is in how well it teaches a people to use their own talents, take pride in their own history and love their own memories. In order for them to fulfill themselves completely in all of their honorable endeavors, it is important that the teacher of the history of the Black race find a definition of the subject, and a frame of reference that can be understood by students who have no prior knowledge of the subject.

The following definition is paraphrased from a speech entitled "The Negro Writer and His Relation To His Roots," by Saunders Redding (1960): Heritage, in essence, is how a people have used their talent to create a history that gives them memories they can respect, and use to command the respect of other people. The ultimate purpose of history and history teaching is to use a people's talent to develop an awareness and a pride in themselves so that they can create better instruments for living together with other people. This sense of identity is the stimulation for all of a people's honest and creative efforts. A people's relationship to their heritage is the same as the relationship of a child to its mother.

I repeat: History is a clock that people use to tell their time of day. It is a compass that they use to find themselves on the map of human geography. It also tells them where they are, and what they are. Most importantly, an understanding of history tells a people where they still must go, and what they still must be.

Early white American historians did not accord African people anywhere a respectful place in their commentaries on the history of man. In the closing years of the nineteenth century, African-American historians began to look at their people's history from their vantage point and their point of view.
Dr. Benjamin Quarles observed that "as early as 1883 this desire to bring to public attention the untapped material on the Negro prompted George Washington Williams to publish his two-volume History of The Negro Race in America from 1619 to 1880." The first formally trained African-American historian was W.E.B. DuBois, whose doctoral dissertation, published in 1895, The Suppression Of The African Slave Trade To The United States, 1638-1870, became the first title to be published in the Harvard Historical Studies.

It was with Carter G. Woodson, another Ph.D., that African world history took a great leap forward and found a defender who could document his claims. Woodson was convinced that unless something was done to rescue the Black man from history's oversight, he would become a "negligible factor in the thought of the world." Woodson, in 1915, founded the Association for the Study of Negro Life and History. Woodson believed that there was no such thing as "Negro History." He said what was called "Negro History" was only a missing segment of world history. He devoted the greater portion of his life to restoring this segment.

Africa came into the Mediterranean world mainly through Greece, which had been under African influence, and then Africa was cut off from the melting pot by the turmoil among the Europeans and the religious conquests incident to the rise of Islam. Africa, prior to these events, had developed its history and civilization, indigenous to its people and lands. Africa came back into the general picture of history through the penetration of North Africa, West Africa and the Sudan by the Arabs. European and American slave traders next ravaged the continent. The imperialist colonizers and missionaries finally entered the scene and prevailed until the recent re-emergence of independent African nations.

Africans are, of course, closely connected to the history of both North and South America. The African-American's role in the social, economic and political development of the American states is an important foundation upon which to build racial understanding, especially in areas in which false generalizations and stereotypes have been developed to separate peoples rather than to unite them.

Contrary to a misconception which still prevails, the Africans were familiar with literature and art for many years before their contact with the Western World. Before the breaking-up of the social structure of the West African states of Ghana, Mali and Songhay, and the internal strife and chaos that made the slave trade possible, the forefathers of the Africans who eventually became slaves in the United States lived in a society where university life was fairly common and scholars were held in reverence. To understand fully any aspect of African-American life, one must realize that the African-American is not without a cultural past, though he was many generations removed from it before his achievements in American literature and art commanded any appreciable attention.

Africana, or Black History, should be taught every day, not only in the schools, but also in the home. African History Month should be every month. We need to learn about all the African people of the world, including those who live in Asia and the islands of the Pacific. In the twenty-first century there will be over one billion African people in the world. We are tomorrow's people. But, of course, we were yesterday's people, too. With an understanding of our new importance we can change the world, if first we change ourselves.
The late Dr. John Henrik Clarke, a pre-eminent African-American historian and author of several volumes on the history of Africa and the Diaspora, taught in the Department of Black and Puerto Rican Studies at Hunter College of the City University of New York. Originally published in THE BLACK COLLEGIAN Magazine (1997).
Exploration for oil and natural gas begins with examining the surface and sub-surface structure of the earth, and determining where it is likely that oil and natural gas deposits might exist. It was learned in the mid-1800s that anticlinal slopes have an increased chance of containing deposits. Anticlinal slopes are areas where the earth has folded up on itself, forming a dome shape that is characteristic of a great number of oil and natural gas reservoirs. The geologist gathers information from a variety of sources, including outcroppings of rocks on the surface or in valleys and gorges; geologic information obtained from rock samples from irrigation ditches and water wells; and from other oil and natural gas wells. Combining this information allows the geologist to make inferences as to the fluid content, porosity, permeability, age, and formation sequence of the rocks underneath the surface of a particular area.

Seismology refers to the study of how energy, in the form of seismic waves, moves through the Earth's crust and interacts differently with various types of underground formations. In 1855, Luigi Palmieri developed the first seismograph, an instrument used to detect and record earthquakes. This device was able to pick up and record the vibrations of the earth that occur during an earthquake. However, it was not until 1921 that this technology was applied to the oil and natural gas industry and used to help locate underground reservoirs.

The basic concept of seismology is quite simple. The Earth's crust is composed of different layers, each with its own properties. Sound waves traveling underground interact differently with each of these layers. Scientists and engineers are able to artificially generate seismic waves and record data as they travel through the earth and are reflected back toward the source by different underground layers and formations. Just as a ball bouncing on concrete behaves differently than one bouncing on soft ground, seismic waves sent underground reflect from dense layers of rock differently than from porous layers of rock. These differences allow the geologist to characterize underground geologic features. While the actual practice of seismic exploration is quite a bit more complicated and technical, these are the basic concepts that apply.

In onshore seismic exploration, seismic waves may be produced by dynamite detonated several feet below the ground surface. However, due to environmental concerns and improved technology, seismic crews increasingly use non-explosive seismic technology. This usually consists of large, heavy, wheeled or tracked vehicles carrying special equipment designed to create a large impact or series of vibrations. These impacts or vibrations create seismic waves similar to those created by dynamite. In the seismic truck shown, the large piston in the middle is used to create vibrations on the surface of the earth, sending seismic waves deep below ground. Sensitive instruments called geophones are used at the surface to record reflected waves and transmit the data to seismic trucks for later analysis.

A significant amount of our oil and natural gas is produced from wells located offshore. The reservoirs from which this oil and natural gas is produced may lie below thousands of feet of ocean water and additional thousands of feet underground below the ocean floor. To locate these reservoirs, seismic exploration techniques are also used.
However, a slightly different approach must be taken from that used in onshore exploration. To generate seismic waves, large air guns that release bursts of compressed air underwater are used. The air guns create seismic waves that are transmitted through the water and then through the earth's crust, where seismic reflections are generated from underground surfaces just as in onshore exploration. To pick up the seismic reflections and generate seismic data, devices called hydrophones are used instead of geophones. They are towed behind the ship in various configurations depending on the needs of the geophysicist.

In addition to using seismology to gather data concerning the Earth's crust, the magnetic properties of underground formations can be measured to generate geological and geophysical data. This is accomplished through the use of magnetometers, devices that measure the small differences in the Earth's magnetic field. In the early days of magnetometers, the devices were large and bulky, and only able to survey a small area at a time. In 1979, however, NASA launched a satellite equipped with magnetometer technology. This satellite, called Magsat, allowed for the study of underground rock formations and the Earth's mantle on a continental scale, and provided clues as to tectonic plate movement and the location of deposits of petroleum, natural gas, and other valuable minerals.

Different underground formations and rock types all have a slightly different effect on the gravitational field that surrounds the Earth. Geophysicists can measure these minute differences using very sensitive equipment called gravimeters. They can analyze this measurement data to gain additional insight into the types of underground formations and whether those formations have the potential for containing oil and natural gas reservoirs.

The drilling of an exploration or appraisal well is the first opportunity that a geologist or engineer has to examine the actual contents of the subsurface geology. Logging refers to inspections and tests performed during or after the well-drilling process that allow companies to both monitor the progress of the drill and gain a better picture of specific subsurface formations. In addition to information specific to a particular well, vast archives of historical drilling logs are available for use by geologists interested in the geologic features of a given, or similar, area.

Logging is essential during the drilling process. Monitoring drill logs helps ensure that the correct drilling equipment is being used and that drilling is discontinued if unfavorable conditions develop. There are a variety of logging tests that can be performed that illuminate the true composition and characteristics of the different layers of rock that the drill passes through. The various types of logging tests are too numerous to detail here; they include standard, electric, acoustic, radioactivity, density, induction, caliper, directional and nuclear logging, to name but a few. However, two of the most commonly performed tests are standard logging and electric logging.

Standard logging consists of examining and recording the physical aspects of a well. For example, the drill cuttings (rock that is displaced by the drilling of the well) are examined and recorded, allowing geologists to physically examine the subsurface rock.
Also, core samples are taken, which involves lifting intact samples of the underground rock through which the drill passes to the surface, allowing the various layers of rock, and their thicknesses, to be examined. These cuttings and cores are often examined using powerful microscopes, which can magnify the rock up to 2,000 times. This allows the geologist to examine the porosity and fluid content of the subsurface rock, and to gain a better understanding of the earth in which the well is being drilled.

Electric logging consists of lowering a device through the 'down hole' portion of the well to measure the electrical resistance of rock layers. This is done by running an electrical current through the rock formations and measuring the resistance along the way. This gives geologists an idea of the content and characteristics of fluids found in the rock. A newer version of electric logging, called induction electric logging, provides much the same types of readings but is more easily performed and provides data that is more easily interpreted.

Raw data from logging work would be useless without careful and methodical interpretation. An example of the data obtained through various forms of logging is shown here. In this representation, the different columns indicate the results of different types of tests. The data that appear as 'squiggly' lines on the well data readout are interpreted by experienced geologists, geophysicists, or petroleum engineers. Much like putting together a puzzle, the geophysicist uses all data available to create a model, or educated guess, regarding the structure of the layers of rock underground. Some techniques, including seismic exploration, lend themselves well to the construction of a hand- or computer-generated visual interpretation of underground formations. Other sources of data, such as that obtained from core samples or logging, are taken into account by the geologist when determining the subsurface geological structures. It must be noted, however, that despite the amazing evolution of technology and exploration techniques, the only way of being sure that a reservoir exists is to drill an exploratory well. Geologists and geophysicists can make their best guesses as to the location of reservoirs, but they are not infallible.

2-D Seismic Interpretation

Two-dimensional seismic imaging refers to the use of data collected from seismic exploration activities to develop a cross-sectional picture of the underground rock formations. The geophysicist interprets seismic data, taking the vibration recordings of the seismograph and using them to develop a conceptual model of the composition and thickness of the various layers of rock underground. This process is normally used to map underground formations and to make estimates based on those geologic structures to determine where it is likely that oil and natural gas deposits may exist.

A related technique using basic seismic data is known as "direct detection." In the mid-1970s, it was discovered that white bands, called 'bright spots,' often appeared on seismic recording strips. It was determined that these bright spots could indicate deposits of hydrocarbons and resulted from the nature of the porous rock containing oil and natural gas, which often created stronger seismic reflections than water-filled rock. Therefore, in these circumstances, actual reservoirs could sometimes be detected directly from the seismic data.
However, this phenomenon does not apply consistently, since many bright spots do not contain hydrocarbons and many deposits of hydrocarbons are not indicated by white strips on the seismic data. Therefore, although it adds a new dimension to help in locating potential reservoirs, direct detection is not a completely reliable method.

One of the greatest innovations in the history of oil and natural gas exploration is the use of computers to compile and assemble vast amounts of geologic data into a coherent 'map' of the underground. Use of this computer technology is referred to as computer-assisted exploration (CAEX).

For more information on geology in general, visit the United States Geological Survey (USGS).
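As a closing illustration of the time-to-depth reasoning behind the 2-D seismic interpretation discussed above, the sketch below converts a reflection's two-way travel time into an approximate reflector depth. The constant-velocity assumption and the example numbers are illustrative only and are not taken from the article.

```python
# Minimal sketch: estimating reflector depth from a reflection's two-way travel time.
# Assumes a single constant average velocity for the rock above the reflector,
# which is a simplification; real interpretation uses layered velocity models.

def reflector_depth(two_way_time_s: float, avg_velocity_m_s: float) -> float:
    """Depth = average velocity * one-way travel time."""
    return avg_velocity_m_s * (two_way_time_s / 2.0)

# Example: a reflection arriving 1.2 s after the source pulse, assuming an
# average velocity of 2,500 m/s (a hypothetical value for the overburden).
print(f"Estimated reflector depth: {reflector_depth(1.2, 2500.0):.0f} m")  # ~1,500 m
```

Repeating this conversion for many shot points along a survey line is, in simplified terms, what turns the 'squiggly' travel-time records into the cross-sectional picture described above.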
An Israeli researcher has discovered that first-time smells leave an indelible mark on the brain.

We all know that a smell can instantly transport us back to childhood or evoke pleasant or unpleasant memories, but now researchers in Israel believe they have found the reason why. It’s a discovery that could one day be put to use to help people overcome psychological trauma.

In an experiment, scientists from the Weizmann Institute of Science in Rehovot, led by graduate student Yaara Yeshurun, tested a number of volunteers to see if their initial association of a smell with an experience would leave a unique and lasting impression on the brain. In a special smell laboratory, volunteers were shown images of 60 visual objects, including chairs and pencils (objects that are not normally associated with any smells), and at the same time were presented with either a pleasant or an unpleasant odor, including pear, fungus, and dead fish, generated by a machine called an olfactometer. The subjects were then placed in a functional magnetic resonance imaging (fMRI) scanner and asked to recall the smells associated with each image. The entire test was repeated 90 minutes later with the same images but different odors.

Unique activity in the brain

A week later, Yeshurun and her team, Prof. Noam Sobel and Prof. Yadin Dudai, scanned the volunteers in the fMRI once more, to test which smell they most associated with each image shown. The scientists discovered that after a week, even if the volunteer recalled both odors equally well (both the pleasant and unpleasant), the first association revealed noticeably higher levels of brain activity in the hippocampus and the amygdala, areas of the brain associated with memory, learning and emotion. The effect was so clearly defined that the researchers were able to predict how well a volunteer would recall smells eight days later, based on their first fMRI scans. They also found that unpleasant odors made the biggest first impression. The experiment was also conducted using sounds rather than smells, but the scientists discovered that sounds did not arouse a similar distinctive first-time pattern of activity. Their research was published in the latest issue of Current Biology.

Memory of bad smells makes evolutionary sense

"For some reason, the first association with smell gets etched into memory, and this phenomenon allowed us to predict what would be remembered one week later based on brain activity alone," says Sobel. "As far as we know, this phenomenon is unique to smell," adds Yeshurun. "Childhood olfactory memories may be special not because childhood is special, but simply because those years may be the first time we associate something with an odor."

In her research article, Yeshurun says it makes good sense to remember unpleasant memories as a kind of evolutionary "risk management." "[It] may represent a potential adaptive mechanism considering the potential cost of failing to learn a first negative association and the potential benefit of a malleable first positive association," she and her colleagues write.

In an article in ABC Science, Yeshurun says that any application of the findings is still far off, but suggests they could be used to help improve memories, or to help people erase early traumatic memories. "It may help us generate methods to better forget early and powerful memories, such as trauma," she says.
What does a lithium battery micro-short circuit mean? It means that a lithium battery has a micro-short-circuit phenomenon between cells inside the pack or within a single cell. This short circuit will not immediately burn out the battery, but it reduces the performance of the battery over a short period of time (weeks or months), eventually leaving a single cell or the whole battery pack completely unusable. A micro-short circuit is the main cause of self-discharge in lithium-ion batteries.

It is mainly manifested in the following ways: 1) The voltage of a single cell in a lithium-ion battery decreases rapidly when discharging and rises rapidly when charging. 2) The direct manifestation is that the battery may show no voltage at all or be unable to charge and discharge.

The reasons are listed as follows:

1. Dust and other impurities: In the process of making the cell, when laminating or winding, dust or other sharp impurities adhere to the cell separator and pierce it because the air environment is not up to standard, which causes a micro-short circuit in the cell.

2. Cell separator dislocation: In one case, a brand of lithium battery had voltage problems and could not charge or discharge; the cause could not be figured out, so in the end the only option was to dismantle the cell. During the dismantling, the root cause was found to be that the edge of the separator had shrunk, leading to direct contact between the positive and negative electrodes in the cell and cell damage (that is, loss of the separation between positive and negative materials). The separator was originally 150 PX 2 and had shrunk to 125 PX 2. After that, the positive and negative electrodes inside the cell contacted directly and the cell burned out. This was due to errors in production design.

3. Poor-quality separator: The quality of the cell separator is not up to standard, and the battery pack is often charged and discharged at high current. The separator cannot withstand the huge lithium-ion flow in a short period of time, which leads to local or large-area damage and directly causes severe heating, scalding and damage of the cell. Material-related causes include: 1) impurities in the cathode material, 2) the material being left in the air too long, and 3) insufficient drying.

Welding is another factor. The usual spot welding and other welding methods can produce micro-welds or welding bubbles, and a micro-weld can easily come loose. Currently the most scientific and effective method for welding battery packs is ultrasonic welding, which does not cause micro-short-circuit problems due to positive and negative electrode welding.
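To put rough numbers on the self-discharge that a micro-short circuit causes, the parasitic path can be modeled as a resistor in parallel with the cell. The following sketch uses purely illustrative values (a hypothetical 2.0 Ah cell losing 10% of its charge in 30 days), not measurements from the text.

```python
# Rough estimate of the equivalent leakage resistance implied by self-discharge.
# All values are hypothetical illustrations, not data from the article.
capacity_ah = 2.0        # assumed cell capacity
nominal_v = 3.7          # assumed nominal cell voltage
soc_loss = 0.10          # assumed fraction of charge lost to self-discharge
days = 30                # assumed observation window

charge_lost_c = capacity_ah * 3600 * soc_loss         # coulombs lost
leak_current_a = charge_lost_c / (days * 24 * 3600)   # average leakage current
leak_resistance_ohm = nominal_v / leak_current_a      # equivalent parallel resistance

print(f"Average leakage current: {leak_current_a * 1000:.2f} mA")
print(f"Equivalent micro-short resistance: {leak_resistance_ohm / 1000:.1f} kOhm")
```

A healthy cell's self-discharge corresponds to a much larger equivalent resistance; the rapid voltage drop described above implies a much smaller one, which is why a cell with a developing micro-short fails within weeks or months.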
Feature Article

Finding the Calls of Whales in a Sea of Sound

By Dr. Aaron N. Rice; Dr. Peter J. Dugan, Director of Applied Science and Engineering; and Dr. Christopher W. Clark, Bioacoustics Research Program, Cornell Laboratory of Ornithology, Ithaca, New York

Recent expansion of commercial activities in the ocean has brought with it a major increase in the levels of anthropogenic noise propagating through marine habitats. This has been in large part due to an increase in seismic exploration and noise from ship traffic. However, it is still unclear what effect these increased noise levels may have on the ocean's inhabitants. Of principal concern are the many vulnerable, threatened and endangered species of marine mammals. Under the Endangered Species Act (ESA) and Marine Mammal Protection Act (MMPA), government regulators have instituted a monitoring and mitigation requirement to accompany many shipping and construction-related activities. Through recent developments in acoustic technology—recording capabilities, duration, computational processing capabilities—it is now possible to use the same recording devices used for ESA and MMPA compliance to also monitor ambient ocean noise, marine mammals and other organisms.

Passive Acoustic Whale Monitoring

Acoustic communication plays an integral role in the lives of many marine mammals and fishes. Sounds are used for communicating, navigating, finding food and detecting predators. Unlike other communication modalities, acoustic communication can be observed remotely and passively, and it can be used to assess species-specific patterns of occurrence and behavior. After establishing baseline levels of bioacoustic activity in a monitoring program, marine mammal and fish vocalizations can serve as an indicator of overall environmental health or indicate organismal responses to anthropogenic noises through changes in the occurrence of sound production.

(Figure: The MARU is anchored a few meters above the seafloor, allowing researchers to record long-term underwater acoustic data.)

When these animals become stressed, their acoustic behavior is either altered or inhibited, and the recorded sound patterns deviate from those of control periods. Thus, passive acoustics can serve as a long-term method to record temporal or spatial changes in marine animal occurrence, behavior and ecology through the recording of their sounds. This is particularly true in cases where other methods of environmental monitoring are impractical due to seasonal conditions (e.g., aerial surveys in the winter) or inaccessible locations (e.g., sea ice cover in polar regions).

One of the essential elements of successful passive acoustic monitoring is the collection of adequate baseline data. Many whales and fishes have seasonal periods of residency or sound production in different geographical regions. Thus, initiating a passive acoustic recording effort in an area at a certain time of year may not yield the expected results, simply because the animals are either not there or not vocalizing. To circumvent this limitation, passive acoustic recording in previously unrecorded regions should be conducted for extended lengths of time in order to capture such seasonal profiles.

The development of autonomous, long-term recording technology has enabled passive acoustic monitoring in remote locations. Previously, long-term recordings were limited to ship or shore-bound facilities, but now recording units can be deployed anywhere around the world to listen for species of interest.
Cornell's Bioacoustics Research Program (BRP) has developed a marine autonomous recording unit (MARU), a digital audio recorder that can be programmed to record on a desired daily schedule and be deployed for periods of weeks or months in a remote environment. The instruments within MARUs are housed in positively buoyant glass spheres and anchored on the seafloor, such that the recording unit floats a few meters above the bottom. Underwater sounds are recorded through a hydrophone mounted outside the sphere. The acoustic data generated by the MARUs are digitized and stored in binary digital audio format on an internal hard disk or flash memory. At the conclusion of the deployment, the MARU is sent an acoustic command to release itself from its anchor, and it floats to the surface for recovery. After the device is recovered, its recorded audio data are extracted, converted into multichannel sound files and stored on a server in preparation for analysis. The relatively small size and low cost of the recorders allow many units to be deployed in different geographical configurations, allowing researchers to understand spatial as well as temporal patterns of marine animal vocalizations.

(Figure: Time-frequency representation (spectrogram) of marine sounds, showing the temporal and spectral overlap of two different species: the North Atlantic right whale and black drum fish.)

(Figure: Spectrograms of contact calls produced by North Atlantic right whales, indicated by white boxes. Both the calls and background noise are often variable, posing a challenge for automated detection methods.)

The development and capacity of acoustic recording technology has advanced dramatically over the past several decades. When recording media were limited to analog magnetic tape, recording duration was typically limited to the scale of hours. Today's digital storage media (such as hard drives and flash memory) have extended the duration of recording to the scale of many months in a single session, limited primarily by disk size and battery life. Currently, MARUs record continuously for up to 100 days at a sampling rate of 2,000 hertz to listen for baleen whale species.

Approaches for Mining Data

Thousands of hours of audio are recorded during passive acoustic monitoring, making it logistically impossible for any individual scientist to listen to the recordings in their entirety. A typical deployment consists of an array of many recording units: some deployments have as many as 19 units recording over a 100-day period, resulting in 1,900 days of data needing inspection. Before automated detection, humans would identify animal vocalizations using hearing-based methods; with 1,900 days of audio, human listening is not a realistic option. Consequently, a combination of computational methods has become paramount in dealing with data collection at this scale. First, these data are not represented in the acoustic domain in which they were recorded, but are transformed to the visual domain for analysis, where the sounds are represented as a spectrogram showing the relationship of frequency versus time for the signal. Using this spectrographic representation, the second step applies detection and classification methods to find sounds of interest, drawing on image-processing-based approaches. Different image-based algorithms have been developed to use pattern recognition to find sounds (or images of the sound) of particular interest.
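As a minimal illustration of the spectrogram-based screening described above, the sketch below computes a spectrogram and flags time slices whose energy in a low-frequency band exceeds a simple threshold. The band limits, threshold rule and synthetic input are illustrative assumptions, not BRP's actual algorithms; only the 2,000-hertz sampling rate is taken from the text.

```python
# Minimal sketch of a band-limited energy detector on a spectrogram.
# Synthetic noise stands in for real MARU audio; band and threshold are ad hoc.
import numpy as np
from scipy import signal

fs = 2000                                   # sampling rate (Hz), as cited above
rng = np.random.default_rng(0)
audio = rng.normal(size=fs * 60)            # 60 s of synthetic data

# Spectrogram: frequency-versus-time representation of the recording
freqs, times, sxx = signal.spectrogram(audio, fs=fs, nperseg=256, noverlap=128)

# Sum the energy in an assumed 50-350 Hz band for each time slice
band = (freqs >= 50) & (freqs <= 350)
band_energy = sxx[band, :].sum(axis=0)

# Flag time slices whose band energy exceeds a noise-adaptive threshold
threshold = band_energy.mean() + 3 * band_energy.std()
detections = times[band_energy > threshold]
print(f"{len(detections)} candidate events above threshold")
```

In practice, as described below, such an initial screen is only the first stage; feature extraction and classification stages then decide which candidates are actually calls of interest.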
While these computer-aided methods allow automatic inspection, they are not error-free, and they will never remove the human experts from the process. For the foreseeable future, computers require guidance. Therefore, humans not only confirm detections but also ground-truth the recordings, which enables the computers to properly identify sounds.

Auto-Detection and Classification

Marine mammals produce highly variable sound features that span many orders of magnitude along the dimensions of time, frequency and amplitude. The main challenge becomes understanding and defining the acoustic parameters to guide the detection effort. The main acoustic parameters guiding the development of a detector are time and frequency ranges. Among cetaceans, these range from the long-duration, low-frequency calls of blue whales to the short-duration, high-frequency calls of dolphins. Some calls, like the upcall of the North Atlantic right whale or the song of the fin whale, are highly stereotyped and show relatively little variability. Other calls, such as the songs of humpback whales or bowhead whales, are extremely variable and show dramatic variation in patterns of frequency modulation. The acoustic feature space (i.e., time and frequency) and the level of variability of these calls often dictate the strategy used for developing an automated detection procedure. Additional challenges can occur when sounds of interest overlap in both frequency and time with other co-occurring species, such as the calls of haddock fish and minke whales.

Building a signal-detection algorithm starts with understanding many aspects of the call pattern and the surrounding environment. This includes documenting the signal context, specifically sounds of interest and other calls in the geographic area that may be similar, and describing fluctuations in levels of ambient background noise that may mask sounds of interest. Signals of interest for detection are isolated, or 'clipped,' from the longer sound stream and put into a catalog consisting of many (often tens of thousands of) short-duration clips for detector development. Including sounds from many individuals from different recording sites and locations helps document the variability of both the signal and the background noise.

Three common stages exist for automatically identifying sounds in passive acoustic monitoring data. These main steps include energy detection (Stage 1), feature extraction (Stage 2) and classification (Stage 3). The approaches described here provide only a sample of the range of acoustic processing technology and include some of the methods currently used at BRP.

Stage 1 is essentially an initial screening method to identify potential sounds of interest. Energy detection methods use a set of criteria combined with a threshold amplitude level: sound above the threshold is kept, other sound is rejected. Criteria include a time-frequency threshold, in which a simple range of frequencies and time bounds are established; connected-region detection, which counts the number of connected pixels in the spectrographic image, helping distinguish biological sounds from random, incidental or ambient background noise; and data-template detection, in which a predefined spectrographic image template (or templates) of a sound of interest is set by the user and all candidate sounds are correlated against their 'match' to the template.

In Stage 2, acoustic features are extracted from these candidate sounds.
For algorithms developed for right whale detection, a total of 11 features are used, including duration, frequency range, bandwidth, rate of change, etc. The goal of feature extraction is to quantitatively describe the sounds of interest using a multitude of different parameters and to use these parameters to filter the candidate sounds for sounds of interest. Stage 3 is where the quantitatively described signal is classified into either signal or noise. In classification procedures, the computer essentially is taught what the sounds of interest are, then informs the user whether candidate sounds from the recording match the sounds of interest. Three approaches used are linear discriminant analysis—a linear combination of features to classify the signal; classification and regression trees—a set of rules to make a set of tree-based decisions to classify the sound; and artificial neural networks—a series of interconnected nodes (mimicking the pattern of biological neurons) that function as a series of adaptive filters, processing the information as it flows through the system. In multiclassifier approaches, two or more of the above procedures are combined, and final classification of a sound occurs when two or more classifiers are in agreement about the nature of a signal. The combination of using all three stages to characterize different sounds in automated sound detection provides a more robust (though still not error-free) method of analysis to categorize vast amounts of data. Thus, for large-scale monitoring efforts, acoustic data can be processed in fractions of the time compared to the rate at which a human analyst can examine the data. As both the sound recording and computational technologies improve, the scope and scale of the application of these methods both broadens and improves. In an ideal setting, the relationship between the biology and the technology creates a feedback loop where the biology guides the technological development, but these new technologies in turn create new questions, approaches and ideas for the biology. For the past 10 years, research at BRP has focused on passive acoustic monitoring of cetaceans, particularly observing the occurrence of the critically endangered North Atlantic right whale along its migration route in many locations in the Atlantic Ocean and researching the ecology of bowhead whales in the Arctic Ocean. Through these ongoing efforts of marine archival recordings, an initial understanding of the acoustic ecologies for a number of other cetacean and fish species has emerged. This extensive archive of hundreds of thousands of hours of recordings represents a library encompassing a wide range of acoustic habitats, and it will be used for further development of baseline data and monitoring methods in these waters for a variety of taxa. For a full list of references or more information on the technologies discussed, please contact Aaron Rice at [email protected] or Peter Dugan at [email protected]. Dr. Aaron N. Rice is the science director at Cornell's Bioacoustics Research Program, where he leads a team of researchers investigating the acoustic behavior and ecology of whales and fishes. Dr. Peter J. Dugan, director of applied science and engineering at the Bioacoustics Research Program, leads the development of detection and classification technologies used in the automated recognition of marine animal vocalizations. Dr. Christopher W. 
Clark is the program director of the Bioacoustics Research Program, where, for more than 25 years, he has combined the innovation, development and application of acoustic technologies with biological inquiry to promote the understanding and conservation of marine mammals through the sounds they make and their acoustic environment.
Eosinophilic Esophagitis (EoE)

Eosinophilic (ee-uh-sin-uh-fil-ik) esophagitis (EoE) is a recently recognized allergic/immune condition. A person with EoE will have inflammation or swelling of the esophagus, the tube that carries food from the mouth to the stomach. In EoE, large numbers of white blood cells called eosinophils are found in the tissue of the esophagus. Normally there are no eosinophils in the esophagus. EoE can occur at any age and most commonly occurs in Caucasian males.

The symptoms of EoE vary with age. In infants and toddlers, you may notice that they refuse their food or are not growing properly. School-age children often have recurring abdominal pain, trouble swallowing or vomiting. Teenagers and adults most often have difficulty swallowing. The esophagus can narrow to the point that food gets stuck. This is called food impaction and is a medical emergency.

Allergists and gastroenterologists are seeing many more patients with EoE. This is due to an increase in the frequency of EoE and greater physician awareness. EoE is considered to be a chronic condition. Other diseases can also result in eosinophils in the esophagus; one example is acid reflux.

Diagnosing Eosinophilic Esophagitis

Currently the only way to diagnose EoE is with an endoscopy and biopsy of the esophagus. An endoscopy is a medical procedure that lets your doctor see what is happening in your esophagus. During a biopsy, tissue samples are taken and analyzed. The EoE diagnosis is made by both a gastroenterologist and a pathologist. There are certain criteria for diagnosing EoE that are followed by gastroenterologists, pathologists and allergists. These include a history consistent with EoE, a visual look at the esophagus during the endoscopy procedure and careful evaluation by a pathologist of tissues taken from the esophagus.

Eosinophilic Esophagitis and Allergies

The majority of patients with EoE are atopic. An atopic person is someone who has a family history of allergies or asthma and symptoms of one or more allergic disorders. These include asthma, allergic rhinitis, atopic dermatitis and food allergy. EoE has also been shown to occur in other family members. After the diagnosis of EoE has been made by a gastroenterologist, it is important to have allergy testing. It will provide you, your family and the gastroenterologist with information so that any allergic aspects of EoE can be properly treated. It will also help plan diet therapy and the eventual reintroduction of foods to your diet.

Eosinophilic Esophagitis: Environmental Allergies

Environmental allergies to substances such as dust mites, animals, pollen and molds can play a role in EoE. For some patients, it may seem that their EoE is worse during pollen seasons. Allergy testing for these common environmental allergies is often part of the EoE evaluation.

Eosinophilic Esophagitis: Food Allergies

Adverse immune responses to food are the main cause of EoE in a large number of patients. Allergists are experts in evaluating and treating EoE related to food allergies. However, the relationship between food allergy and EoE is complex. In many types of food allergy, the triggers are easily identified by a history of a severe allergic reaction, such as hives, after ingestion of the food. In EoE, the role of individual foods is harder to establish because the reactions are slower, so a single food is difficult to pinpoint as the trigger. Allergists may do a series of different allergy tests to identify the foods causing EoE.
Foods such as dairy products, egg, soy and wheat are major causes of EoE. However, allergies to these foods often cannot be easily proven by conventional allergy tests (skin tests, patch tests or blood tests). Once a food has been removed from a person’s diet, symptoms generally improve in a few weeks.

Eosinophilic Esophagitis: Prick Skin Testing

People who have allergies react to a particular substance in the environment or their diet. Any substance that can trigger an allergic reaction is called an allergen. Prick skin testing introduces a small amount of allergen into the skin by making a small puncture with a prick device that holds a drop of allergen. Foods used in allergy testing sometimes come from commercial companies; occasionally, foods for skin prick testing are prepared fresh in the allergist’s office or supplied by the family. Allergy skin testing provides the allergist with specific information on what you are and are not allergic to. Patients with allergies have an allergic antibody called Immunoglobulin E (IgE). Patients with IgE for the particular allergen placed in their skin will have an area of swelling and redness where the skin prick test was done. It takes about 15 minutes to see the result of the test.

Eosinophilic Esophagitis: Blood Tests

Sometimes an allergist may do a blood test (called a serum specific immune assay) to see if you have allergies. This test can be helpful in certain conditions linked to food allergies. The results of blood tests are not considered as helpful as skin prick testing in EoE and are not recommended for the routine evaluation of food allergy in EoE.

Eosinophilic Esophagitis: Food Patch Tests

Eliminating foods based on prick skin testing alone does not always control EoE. Food patch testing is another type of allergy test that can be useful in diagnosing EoE. This test is used to determine whether the patient has delayed reactions to a food. The patch test is done by placing a small amount of a fresh food in a small aluminum chamber called a Finn chamber. The Finn chamber is then taped on the person’s back. The food in the chamber stays in contact with the skin for 48 hours. It is then removed, and the allergist reads the results at 72 hours. Areas of skin that came in contact with the food and have become inflamed point to a positive delayed reaction to the food. The results from the food patch test help your doctor determine whether there are foods you should avoid.

Eosinophilic Esophagitis: Treatment

Food Testing Directed Diets

If you are diagnosed with specific food allergies after prick skin testing and patch testing, your doctor may remove specific foods from your diet. In some individuals this helps control their EoE.

Empiric Elimination Diets

Eliminating major food allergens from the diet before any food allergy testing is also an accepted treatment of EoE. The foods excluded usually include dairy, egg, wheat, soy, peanut, tree nuts and fish. These diets have been shown to be very helpful in treating EoE, although they can be very difficult to follow. Foods are typically added back one at a time, with follow-up endoscopies to make sure that the EoE remains under control.

Elemental Diet

In this diet, all sources of protein are removed from the diet. The patient receives their nutrition from an amino acid formula as well as simple sugars and oils. All other food is removed from the diet. A feeding tube may be needed since many people do not like the taste of this formula.
This approach is generally reserved for children with multiple food allergies who have not responded to other forms of treatment.

Medications

No medications are currently approved to treat EoE. However, medications have been shown to reduce the number of eosinophils in the esophagus and improve symptoms. Glucocorticosteroids, which control inflammation, are the most helpful medications for treating EoE. Swallowing small doses of corticosteroids is the most common treatment. Different forms of swallowed corticosteroids are available. At first, higher doses may be needed to control the inflammation, but they are linked with a greater risk of side effects.

Proton pump inhibitors, which control the amount of acid produced, have also been used to help diagnose and treat EoE. Some patients respond well to proton pump inhibitors and show a large decrease in the number of eosinophils and in inflammation when a follow-up endoscopy and biopsy is done. However, proton pump inhibitors can also improve EoE symptoms without reducing the inflammation. Researchers are now looking into using them to manage EoE. Careful monitoring by a physician knowledgeable in treating EoE is very important. New types of treatment that could greatly help patients are being studied.

Working with Your Doctors

EoE is a complex disorder. It’s important for patients to follow their gastroenterologist’s advice on managing EoE and on when endoscopies are needed to check whether the condition is getting better or worse. Patients also need to work closely with their allergist / immunologist to find out if allergies are playing a role. An allergist / immunologist will also be able to tell whether you need to avoid any foods and can help you manage related problems such as asthma and allergic rhinitis. If you are following a diet to treat your EoE, it’s often recommended to visit a dietitian. Cooperation among physicians and families is important.

When you first find out you have EoE, it can be overwhelming. Families often benefit from participating in support groups and organizations; APFED and CURED are two lay organizations that have ongoing relationships with the AAAAI. Your allergist / immunologist can give you more information on EoE, allergy testing and treatment.
To some extent, Japanese Buddhism can be thought of as a series of imports from China. Over the centuries, starting as early as 500 C.E., both lay devotees and monks traveled to the mainland, bringing back with them layer after layer of Buddhist teachings and practices along with other Chinese cultural traditions. At the same time, however, as the religion developed in Japan, it often did so along paths not followed on the mainland.

The official story of the arrival of Buddhism in Japan states that a political delegation arrived from Korea in 538 C.E. Among the gifts it brought for the Emperor were a bronze Buddha image, some sutras, a few religious objects and a letter warmly praising the most excellent Dharma. After initial opposition, the gifts were accepted, and a temple was built to house the objects. However, an epidemic which ravaged the land was interpreted as bringing the wrath of the indigenous kami (Japanese Shinto deities) down on the nation. This led to the objects being thrown into a canal and the temple being destroyed. Nevertheless, during the course of the next half century, Japan witnessed the firm establishment of Buddhism as a religion officially recognized and actively supported by the imperial court, thus overcoming doubts about its efficacy as a means of preventing disease, as well as the fear of offending the national kami.

In these early days, the most important aspect of the flow of Chinese culture into Japan was the introduction of the Chinese script. This provided the means for the Japanese (who did not possess an indigenous writing system of their own) to assimilate the vast tradition of Chinese classics and the Chinese version of the Buddhist canon. Only very few imported Chinese texts were translated into Japanese; most have continued throughout their history to be used in their original version.

The three main characteristics of the arrival of Buddhism in Japan are as follows. Firstly, it did not come to Japan on a popular level but was accepted by the imperial court and then disseminated through the country from the top. Buddhist faith in Japan is often connected with absolute devotion to a leader, with an emphasis on veneration of the founders of sects, and the majority of sects have kept close relations with the central governmental authority of their times. Secondly, Buddhism was often associated with magical powers and was used by the court as a means of preventing or curing disease, bringing rain and abundant crops, and so on. Thirdly, Buddhism did not replace the indigenous kami but always recognized their existence and power. This led to numerous varieties of Shinto-Buddhist amalgamation, in which the kami were often considered manifestations of the Buddhas. This is typical of how Buddhism favours harmonious coexistence with indigenous beliefs, and it was to be a similar story when Buddhism subjugated local gods and spirits in Tibet a few centuries later.

During the course of the development of Buddhism in Japan, the prevailing tendency has been to search for fulfillment and ultimate truth not in any transcendental sphere but within the structure of secular life, neither denying nor repressing man’s natural feelings, desires or customs. This perhaps explains why many Japanese arts and skills are pervaded by Buddhist spirituality; well-known examples include the tea ceremony, the arts of gardening and calligraphy, and the Noh play.
The initial period saw the introduction onto Japanese soil of the six great Chinese schools, including the Hua-yen and Lu, which became, respectively, the Kegon and Ritsu in Japanese. In terms of geography, the six sects were centered around the capital city of Nara, where great temples such as the Todaiji and Hokkeji were erected. However, the Buddhism of this early period, later known as the Nara period, was not a practical religion, being more the domain of learned priests whose official function was to pray for the peace and prosperity of the state and imperial house. This kind of Buddhism had little to offer the illiterate and uneducated masses, and it led to the growth of "people's priests" who were not ordained and had no formal Buddhist training. Their practice combined Buddhist and Taoist elements and incorporated shamanistic features of the indigenous religion. These figures became immensely popular and were a source of criticism of the sophisticated academic and bureaucratic Buddhism of the capital.

Heian Period (794-1185)

In 794, the imperial capital of Japan was moved to Kyoto, and it is from this date that important changes and developments took place which resulted in the emergence of a more characteristically Japanese form of Buddhism. Two schools, the Tendai and the Shingon, particularly came to the fore, in time supplanting the other established schools and laying the foundations for future developments. Two monks, Saicho (767-822) and Kukai (774-835), effected this change which so decisively affected the future of Japanese Buddhism. By their comprehensive syntheses of Chinese doctrine, two systems of teaching and practice were created which effectively furnished all the essentials for the entire further development of Japanese Buddhism.

Saicho, the founding father of the Tendai school, entered the sangha at an early age. After years of study and practice, he became especially partial to the teachings of the Chinese grand master Chih-I and the T'ien-t'ai school, which were based on the Lotus Sutra. In 804, he went to China, and he returned with an improved knowledge of various teachings and practices, along with many sutras. He established his base on Mount Hiei and received permission to ordain two novices every year. Official recognition of his Tendai sect soon followed, and it became one of the two dominant schools of Japanese Buddhism during the Heian period. The teachings of Chih-I form a far-reaching synthesis of Buddhist tradition inspired by the Lotus Sutra, and Saicho was to add three further elements: the practice of Chinese Ch'an; the commandments of the Mahayana, based in essentials on the Bonmokyo; and parts of the esoteric teaching of the "True Word," Chen-yen (Shingon in Japanese). All this helped to make a decisive step away from the academic Buddhism of the early period towards a revived, active kind of religion based on belief. An essential element in the doctrine of the Tendai was the teaching in the Lotus Sutra that the possibility of salvation is given to all.

Kukai and his secret doctrine, known as the True Word, Shingon, had a mysterious radiance which encouraged the formation of legends about him. During his early studies in Buddhism, Taoism and Confucianism he came to know one of the principal texts of the esoteric canon, the Mahavairocana Sutra, but did not reach a deeper understanding of it.
In 804, he traveled to China, where all his doubts and questions with regard to the sutra were resolved; he returned to Japan with many new skills and instructions to impart. He founded his headquarters on Mount Koya on the Kii peninsula. His career was successful to the extent that he was allowed to build a Shingon temple in the emperor's palace, where he performed esoteric rituals and ceremonies. In 835, Kukai, sitting in deep meditation, fell into complete silence. In the eyes of his devotees he is not dead, but still sits in timeless meditation on Mount Koya.

Esoteric practices were so influential that they dominated the Heian period and had a decisive influence on the subsequent Kamakura period. Even the more philosophical Tendai school adopted esoteric rituals in order to make itself more popular with the general population, while figures such as Kukai succeeded by means of esoteric rites in making rain after a time of drought, giving Buddhist esotericism a magical attraction.

Towards the end of the Heian period, more popular devotional forms of Buddhism began to spread, mainly derived from the Pure Land cult of Amitabha (Amida in Japanese). This was connected with the somewhat pessimistic philosophy of a deteriorating "final period of the dharma," which became widespread during this time. The devotional cults basically propounded the notion that salvation was only possible through the intercession of buddhas and bodhisattvas, for example through the recitation and repetition of simple formulas such as the Namu-Amida-butsu (the Nembutsu, "thinking on the Buddha"). There were other faith-based doctrines during this time, the most noteworthy being the belief in the bodhisattva Jizo, who dispenses help to beings on all levels of existence; this belief is still alive today.

Kamakura Period (1185-1333)

From the end of the 11th century, a new military aristocracy in the provinces increasingly evaded the control of the central government, culminating in war between the Taira and Minamoto families. The latter were victorious and thereby acquired absolute power over the country, setting up a military government in Kamakura in the vicinity of present-day Tokyo. Minamoto-no Yoritomo received the title of Shogun, with supreme military and police power, thus transferring rule from the court aristocracy to the warrior class (samurai). Inevitably, this was to change the whole cultural climate. The new climate did not favour the study of abstruse philosophy or the performance of elaborate rituals, so more robust and generally accessible teachings became the order of the day. The Tendai and Shingon schools declined, and more earthy, democratic movements such as Zen and the devotional schools advanced.

The first of the three great traditions of Kamakura Buddhism, the doctrine of the Pure Land, continued the development which had begun in the Heian period. An independent Japanese sect of the Pure Land, known as Jodo-shu, was founded by Genku (1133-1212), better known as Honen. He decided that Enlightenment was no longer achievable by the strength of man alone, and that the only possible way was to surrender to the Buddha Amida and seek rebirth in the Western Paradise, the Pure Land. New in Honen's philosophy was that, while he recognized the scholastic apparatus of Mahayana philosophy, he concentrated on an intensified religious feeling which found expression in the simple invocation of the name, Namu-Amida-Butsu, stamped by unshakeable faith in rebirth in Amida's paradise.
Honen's successor, Shinran-Shonin (1173-1262), founded the True Sect of the Pure Land, Jodo-shinshu, which is the largest Buddhist sect in Japan today. In his chief work, written in 1224, he explains that doctrine, practice, belief and realization are all given by Amida Buddha and that nothing depends on man's "own power" (jiriki). Instead, everything depends on the "power of the other" (tariki), namely that of the Buddha Amida. Shinran emphasized that the recitation of the Namu-Amida-butsu was simply the expression of thankful joy for having received everything from Amida. It is worth noting that Shinran was a monk who decided to take a wife, with whom he had five children, and thus he symbolizes a decisive turn in Japan towards lay Buddhism. He stressed that obedience to the Buddhist commandments and the performance of good deeds were not necessary to obtain deliverance; in fact, it is precisely the bad man who can be assured of rebirth in Amida's paradise if he wholeheartedly appeals to Amida.

While belief in Amida proceeds from the "strength of the other" (tariki), Zen Buddhism teaches that man can come to deliverance and Enlightenment only through his own strength (jiriki). Zen (Chinese Ch'an, from Pali jhana and Sanskrit dhyana) places supreme emphasis on self-power: on the active mobilization of all one's energies towards the realization of the ideal of enlightenment. There had been contacts between Japan and Zen doctrine since the 7th century. However, a lasting tradition that concentrated on Zen practice and led to the formation of a separate sect was first created by the Tendai monk Eisai (1141-1215). During his studies in China, he had been introduced to the practice and doctrine of a branch of Zen which went back to Lin-chi (called Rinzai in Japanese), and on his return to Japan he started to disseminate the new doctrine.

Eisai established firm relations with the new military government in Kamakura and the military caste that held sway there. They found the simple, hard and manly discipline of Zen more to their taste than the ritual and dogma of the old schools. In contrast, Zen Buddhism was greeted with less enthusiasm by the intellectual elite of cities such as Kyoto. There, established practice was represented by Tendai, Shingon and Pure Land with their beautiful rituals. The fierce demands of Zen, with its emphasis on personal effort and the promise of enlightenment rather than heaven, seemed rebarbative and disturbing to the elite. Eisai is also linked to the introduction of tea drinking in Japan, which in time was to lead to the creation of the "tea-way" (the Tea Ceremony), which, though non-religious, was strongly influenced by the spirit of Zen.

In general, the monks involved in the transmission of Zen from China to Japan also transmitted Neo-Confucian values and ideas, which were themselves strongly influenced by Ch'an Buddhism and Hua-yen philosophy. The Zen masters added a Confucian moral to Buddhist spirituality, which appealed to the new warrior class of the Kamakura. For many centuries, the big Rinzai temples in Japan were centres of Chinese learning in general, and Neo-Confucianism in particular. Furthermore, the Rinzai school is closely associated with Japanese arts and the "ways": the aforementioned "tea way," the "flower way," the "way of archery" and others.

A second Chinese school of Zen, the Ts'ao-tung (Soto in Japanese), was introduced to Japan by Dogen (1200-1253).
After four years of training in China under Master Ju-ching, Dogen returned to Japan in 1227 and eventually established the Eihei-ji temple in a remote province; it remains to this day one of the two main temples of the Japanese Soto Zen school. The foundation of Dogen's Zen is the constantly emphasized principle that practice does not lead to Enlightenment but is carried out in the state of being Enlightened; otherwise it is not practice. In a logically constructed picture of the world, he equates all being (the believer, his practice and the world) with the present moment, the moment of enlightenment. Striving for enlightenment would therefore be going astray. Dogen's chief work was the Shobogenzo (The Eye and Treasury of the True Dharma).

After Pure Land and Zen, the final great reformer and sect-founder of the Kamakura period was Nichiren (1222-82). After studying in Kamakura and training in Tendai doctrine and practice, he came to the conclusion that the highest, all-embracing truth lay in the Lotus Sutra, known in Japan as the Myoho-renge-kyo, the fundamental canonical text of the Tendai sect. However, Nichiren thought that for the simple, ordinary person, Tendai dogma and the reading of the Lotus Sutra were too difficult. He proclaimed that the title, Myoho-renge-kyo, was the essence of the whole sutra and that it was in fact identical with the state of Enlightenment of Shakyamuni Buddha. It was therefore sufficient to utter the title to find oneself in the state of highest enlightenment. This condition gave rise spontaneously to morally right behaviour, so that it was necessary for the state and society that all should follow the practice of the "invocation of the title."

Two issues isolated Nichiren: the militant style of his presentation, and his insistence that the Lotus Sutra should inform the practice of government. He constantly made his views public, and the heated language he used spared neither secular nor Buddhist establishments, leading to his banishment to the island of Izu. He was soon pardoned, but his continued attacks on institutions so provoked government and clergy that he was sentenced to be executed. According to legend, the axe which was raised to behead him was struck by lightning. Reprieved, he again went into exile and further developed his writings. When he finally returned to the mainland, he devoted himself to missionary activity and to the training of monks on Mount Minobu, to this day the site of the main temple of the Nichiren sect. In recent times, certain branches of Nichiren have been connected with nationalistic tendencies within Japan.

The demise of the Kamakura regime inaugurated a new era of internal strife and fighting in Japan, which was to last into the seventeenth century. It also signaled the end of the truly creative phase of Japanese Buddhism. A slide into stagnation occurred, which was broadly to last until the end of the nineteenth century. According to the twentieth-century Zen writer D.T. Suzuki, after the Kamakura period "what followed was more or less the filling-in and working out of details."

In the 14th and 15th centuries, the privileged relations of the Rinzai Zen sect with the military government permitted it to gain tremendous wealth. This led to the creation of what is known as the "Culture of the Five Mountains," which constitutes the summit of Japanese Zen culture. It included all the arts, such as architecture, painting, calligraphy and sculpture, as well as printing, gardening and medicine.
Ikkyu (1394-1481), a priest of the Rinzai sect, was particularly known for his unconventional character, and he was an accomplished poet, calligrapher and painter.

The Tokugawa Shogunate was to rule Japan from its bastion in Edo (Tokyo) for over two and a half centuries. It was to be the longest period of peace and, for the most part, prosperity in the history of the country. This was basically achieved by closing the country to the outside world and establishing a regime of inflexible authoritarian control that created stability and order but stifled all creative change and innovation. The Buddhist clergy was under the strict control of the government, and it was forbidden to found a new sect or build a new temple without special permission. The Shogunate encouraged the Buddhist clergy of the sects in scholarly pursuits, hoping thereby to divert them from politics. A huge amount of learned literature was therefore produced, and by the second half of the seventeenth century, editions of the Buddhist canon appeared, the most influential being that by Tetsugen of the new Obaku-shu sect. Obaku-shu had been founded by the Chinese master Yin-yuan Lung-ch'i, a Rinzai Zen priest. It added a new flavour to Japanese Zen, not only through its syncretism (it contained elements of Pure Land Buddhism), but also through the introduction of rituals, customs and a new architectural style imported from Ming China.

A few influential figures did emerge from the Zen school during this period, the poet Basho and the Rinzai Zen masters Bankei and Hakuin being chief among them. Matsuo Basho (1644-94) was a poet who consciously transformed the practice of poetry into an authentic religious way; many of his finest poems (in the seventeen-syllable haiku form) are thought to succinctly catch the elusive, often melancholy magic of the passing moment, and thereby express the true spirit of Zen. Bankei (1622-93) was an iconoclast who challenged orthodox Zen teaching. He spent many years intensively pursuing Enlightenment, until at last he realized that he had been in possession of what he had been seeking all along, and decided that the term "Unborn" best described it. He thereafter advocated that people simply awaken to the Unborn in the midst of everyday affairs, and he won himself a large audience, which did not go down well with the Zen establishment. Hakuin (1685-1768) is considered to be the restorer of the Rinzai sect in modern times. He revived the use of the koan: statements of Zen masters that are set as problems to novices in Zen monasteries. They cannot be solved by rational thinking and are designed to help open the mind to Enlightenment. Hakuin invented many new koans himself, adapted to the needs of the times in that they do not presuppose any scholarly knowledge of the Chinese Zen classics. His most famous koan is: "The sound produced by the clapping of two hands is easy to perceive, but what is the sound produced by one hand only?"

The restoration of the imperial regime in 1868 signaled the end of Japanese isolation; the pressure on Japan to reopen her doors had simply become too great. There followed a temporary persecution of Buddhism when Shinto was made a state cult. However, Buddhism was too firmly established in the affections of the Japanese people for this to last long, and its religious freedom was soon effectively regained.
For the first time in centuries, contact was made with other Buddhist countries, as well as with Western ones, and this served to encourage Buddhist scholarship; various Buddhist universities had been established by the first half of the twentieth century. During the last 50 years, the evolution of Buddhism has been closely linked to Japan's history. The grip of the government over Buddhist institutions during the Second World War was rigid, and any writings in which Buddhism was placed above the authority of the state or the emperor were suppressed. The only opposition to this came from the Soka-gakkai, founded in 1930 as a non-religious society of teachers, whose members were severely persecuted. Since the end of the war, Buddhism in Japan has revived once again; many new sects have been founded, and there has been an ongoing reinvigoration as a result of sustained contacts with other peoples and cultures. Japanese Zen has also been successfully exported to many Western countries, in particular North America.

From Wisdom Books (slightly edited by the webmaster)

Do check out this Wikipedia page for more information; this BuddhaNet page also gives a good timeline of Japanese Buddhism.
AGRICULTURAL RESEARCH.

Agricultural research has generally had a regional focus, and Texas research has been no exception. However, the state's research has also had lasting national and international implications since its formal organization in the late 1800s. In its second century, Texas research has continued to increase its global focus and impact and to aid economic growth. Vast amounts of land, variable fertility, and short water supplies helped define Texas research, as did the national research model that gave a central role to the land-grant college and the state agricultural experiment stations. These stations were based on practices already developed before the mid-1800s in Europe, where systematic, scientific methods were used to improve the process of agricultural observation and selection. Those practices led to improved yields and less labor devoted to food production, developments that had long marked the progress of various cultures from subsistence economies to more stable and varied social systems.

Such progress marked the period in which the United States established its agricultural research system. The Industrial Revolution was expanding and requiring more labor; the American West was being settled, cities were growing, and trade, especially for manufactured goods, was increasing. In 1887, hoping to improve agricultural efficiency, Congress approved the Hatch Act, which established federally supported experiment stations as components of state land-grant colleges (themselves established by the Morrill Act of 1862). The colleges often possessed the farmlands, laboratories, and faculty needed by the stations, and students would benefit from the close working relationship between state colleges and experiment stations. The structure of these stations is fairly uniform throughout the United States. As in Texas, they usually conduct a three-pronged effort involving education, research, and extension carried out by a land-grant college or college system, a state experiment station (often with several research sites), and a state extension service. Most state experiment station scientists traditionally spend some time teaching in the colleges.

The Texas Agricultural Experiment Station was established on April 2, 1887, when Governor Lawrence Sullivan Ross signed legislation establishing the station and designating Texas A&M as administrator of its program. This was in the midst of a depression in American agriculture, when rapid plowing of the Great Plains, increased farm productivity, and steady appreciation of the dollar brought low prices for farm commodities. An advisory body of Texas farmers and ranchers went to College Station to confer with the station on agricultural research. Together they decided to focus initial research on seven projects with a practical bent: improving feeding methods for beef and dairy cattle, finding the best-adapted fruit varieties for Texas, studying the adaptability and feeding value of various grasses and forage plants, comparing the usefulness of barnyard manure and commercial fertilizers, determining the value of tile drainage for gardens and farms, controlling cotton blight (or root rot), and protecting cattle from Texas fever. Within a few years of its establishment, the station had successfully met major research goals in six of the seven projects. Texas fever, once a nationwide problem, had been effectively wiped out, largely through TAES efforts in cooperation with others.
Root rot, however, remained a major problem, even though its effects can be somewhat controlled through various cultural practices. The remaining five initial projects became successful bases for later work in the areas of irrigation, fertilization, livestock feeding, new or improved crops, and range management.

Early field trials were conducted at the TAES College Station headquarters, but researchers suspected that many findings were dependent on local conditions. By 1890 several state prison farms were also used for trials, as were fields at Prairie View Normal College. Occasionally, private farms and ranches cooperated with TAES by providing funding, livestock, and their own facilities for research. The King Ranch in South Texas, for example, was a partner in the successful effort to eradicate Texas fever. This beef-cattle disease caused many northern states to ban imports of Texas cattle from the 1870s to the 1890s. After TAES scientists confirmed it was carried by ticks, joint development of a method of dipping cattle into vats of sheep dip and other fatty solutions to kill the ticks helped defeat the disease.

By 1900, growing demand for agricultural products from a rapidly increasing urban population led to higher commodity prices, and agricultural research and education had also improved and grown dramatically across the United States. TAES began expanding its efforts, and throughout the next century it was able to improve the productivity of all the state's major crops and livestock. For cotton, the top cash crop in Texas since the 1800s, TAES worked extensively to breed plants that fruited earlier and more rapidly as a way of defeating the boll weevil. One of the first quick-growing varieties was jointly bred by TAES and the USDA in the first years of the 1900s. TAES also introduced TAMCOT, some of the first varieties suited to harsh conditions on the High Plains, and has introduced many new varieties with superior fiber strength. The station did extensive work with mechanical strippers during the first decades of the 1900s, and in 1971 it developed a cotton module system that compresses cotton directly into compact field-storage units of ten to fifteen bales, making it easier for farmers to store and transport their cotton. The project was carried out cooperatively with cotton farmers and supported financially by the trade association, Cotton Incorporated, through producer check-off funds collected on each harvested bale.

Wheat, sorghum, corn, and rice are also among the biggest cash crops in Texas, and new lines of each have been developed by the station. The first successful semi-dwarf hard red winter wheat varieties brought higher yields, insect resistance, and the ability to produce crops late in the year; by 1990 improved varieties were grown on about half of the state's wheat acreage and almost a quarter of all wheat acreage in Texas, Oklahoma, Nebraska, Colorado, and Kansas. The station developed sorghum hybrids in the 1940s and 1950s that more than tripled sorghum yields by the 1980s, and it also produced more effective corn hybrids. An example of successful state and federal cooperation resulted from requests in the 1970s by rice producers, who helped fund an aggressive research agenda from TAES and the USDA that began in 1982. By 1986, the Texas Econo-Rice initiative had boosted production by some 2,000 pounds per acre, to a state average of about 6,300 pounds, and cut costs from some $13 per hundredweight to $8.20, far exceeding program goals.
In addition, new semi-dwarf varieties enabled producers to grow two crops each year. Similar successes have been achieved with smaller Texas crops. In the 1980s an investment of $10 million from both state and industry sources in onion research yielded the Texas Grano 1015Y onion, a larger, sweeter variety with such vast market appeal that by 1991 it was worth some $150 million annually to the state's economy, including $42 million in wholesale income alone. TAES had a number of other notable achievements in both beef and dairy cattle research. Among these were the use of electrical stimulation of carcasses to improve tenderness and extensive crossbreeding and feeding studies to improve the productivity of the Texas cattle industry.

Because of the high cost of irrigation in drier areas of Texas, particularly the High Plains, water conservation has always been a major area of concern for both state and federal agencies. Key projects examined the effects of different types of plowing, row configurations for crops, mulches, and irrigation systems. A major TAES accomplishment was the Low Energy Precision Application (LEPA) irrigation system, the first field-sized mobile drip system. Developed in 1976, LEPA achieved irrigation efficiencies that are expected to lighten demands on the Ogallala Aquifer in the Panhandle.

TAES research is now carried on at substations throughout the state. The first permanent TAES substation was established on 151 acres near Beeville in 1894, and it was still carrying on research on forage and the reproduction of beef cattle into the 1990s. Substations opened and closed over the next seven decades, and beginning in the 1960s several units were converted to regional research and extension centers. This was partly an effort to improve communication among scientists, extension specialists, and farmers, and also to bring together larger groups of researchers for more complex, interdisciplinary projects. There are now fourteen such regional research and extension centers, shared by TAES and the Texas Agricultural Extension Service (both agencies of the Texas A&M University System). Additional TAES research facilities with more limited functions are located in ten other communities across Texas.

TAES substations in Weslaco, Amarillo, Beaumont, and Temple share facilities with the Agricultural Research Service (ARS) of the United States Department of Agriculture, which supervises federal agricultural research and allocates research funds to state experiment stations. The oldest of the ARS centers still in use is at Big Spring, where experiments in dry-land crop rotation and tillage began in 1915. It pioneered techniques for using layers of cotton-gin trash to decrease erosion on sandy soils and developed field equipment now used worldwide to measure wind erosion. An ARS facility with similar purposes was established in 1936 at Bushland, near Amarillo; it has developed improvements in stubble-mulch tillage, water conservation, wind-erosion control, wheat improvement, grass reseeding, and livestock management, including reduction of losses to bovine respiratory disease. In 1931 the ARS established its cooperative relationship with the TAES center at Beaumont to work on rice breeding. The resulting program serves the entire United States with research on the cooking and processing qualities of various lines and on improved disease and insect resistance, among other projects.
Similarly, the USDA in 1932 established a pecan-breeding facility in Brownwood that is the only one of its kind in the world, and it added a second worksite near College Station in 1987. The pecan program has introduced nineteen improved cultivars used throughout the country. Three-fourths of the cultivars recommended for planting in Texas were introduced through these facilities.

The ARS established the United States Livestock Insects Laboratory in Kerrville in 1946 to conduct research on the biology and control of parasitic insects affecting livestock and human beings. Its most notable achievement was development of the process for sterilizing male screwworms, which, when released into the environment, overwhelmed wild populations of the flies; because female screwworm flies breed only once, those that mated with sterile males produced none of the flesh-eating larvae. TAES, which had studied the problem since at least 1890, played a cooperative role in the eradication, as did producers and several other state and federal agencies. In 1982 the last case of screwworms was reported in Texas. Other accomplishments included development of a cattle-grub vaccine, ear tags that decrease environmental contamination by 98 percent, and microencapsulation techniques used for long-term pest control in livestock.

The Food Animal Protection Research Laboratory in College Station, which focuses on solving problems related to food safety in livestock and poultry, has increased understanding of how natural and synthetic poisons affect livestock and poultry and has improved methods for eliminating chemical and microbial hazards associated with meat products. Other ARS units in College Station include the Southern Crops Research Laboratory and the Veterinary Toxicology and Entomology Laboratory. The Grassland, Soil and Water Research Laboratory of the ARS, in Temple, has been a leader in controlling undesirable plants that compete with grasses on rangelands, developing new strains of pasture and range grasses, and pioneering efforts to develop computer simulation of agricultural processes. By the 1990s many of its projects focused on computer models and databases used for soil and water testing, geographic-information systems, and other projects based on large sets of data on soils, water, and other natural resources throughout Texas and the world.

The Subtropical Agricultural Research Laboratory in Weslaco focuses on national and international agricultural needs, many centered on preventing the spread of exotic pests such as the boll weevil or the Africanized honey bee, which invaded the United States through the Rio Grande valley. The facility's research dramatically improved cotton-production efficiency in South Texas through development of short-season, early-maturing crops planted in narrower rows. Use of high-altitude infrared photography to detect citrus-blackfly infestations and discovery of methods to control tracheal mites, which harm honey bees, are also among its accomplishments.

TAES and ARS also cooperate with other educational institutions and with numerous private foundations and commodity groups. Several universities have extensive agricultural education programs and have carried out successful, smaller-scale research programs, including Texas A&M University at Kingsville (wildlife), Tarleton State University (soil and water), and Prairie View A&M University.
Prairie View's Cooperative Agricultural Research Center has specialized facilities for small-animal research and meat research; in addition to poultry and swine complexes and a computerized feed mill, it has greenhouses and other facilities for research on various crops. Each of these universities is part of the Texas A&M system, and agricultural research at each is supported by TAES funding.

Texas Tech University, established in 1925, is not a land-grant university and therefore initially lacked the resources of Texas A&M and TAES. Its plans for agricultural research were hindered through the 1930s by funding shortages, although it cooperated with TAES on several projects. Among the earliest and most important was the work both did in livestock feeding with crops available from High Plains farms. That and other research led to development of the area's extensive feedlot industry. The university organized a farm and ranch research facility in the late 1940s under a lease agreement with TAES and the USDA. Experiments on livestock feeding and additives, digestibility of feedstuffs, and crossbreeding were of primary importance to the area, as was research involving forage sorghum varieties, fertilizers, the use of sewage effluent for irrigation, soils, herbicide tests, seeding of rangeland grasses, the effects of fire on High Plains rangeland vegetation, and the control of greenbugs and aphids. Texas Tech's research in the agriculture of arid regions has drawn international attention.

A private foundation that significantly added to the state's research effort was the Texas Research Foundation, which began at Southern Methodist University in 1944 as the Institute of Technology and Plant Industry. The institute was founded to solve regional problems that Dallas-area businessmen and others felt received too little attention from existing state and federal institutions. By 1945, fund-raising and demand for research projects had risen enough that the institute was separated from SMU and set up as a private foundation. Originally called the Texas State Research Foundation, it was moved from SMU to 107 acres of land at Renner, which the university deeded to the foundation in 1946. Operating independently, the foundation focused on soil and soil fertility, with an emphasis on using forage grasses and then grains to restore organic matter to the soil in cropping systems. In 1972 the foundation ceased operations and turned over its research facilities and a portion of its land to TAES, which closed its Denton substation and moved into the former foundation buildings. This Dallas TAES substation became a center for urban agricultural research, including work related to the multibillion-dollar turf, landscaping, and nursery industries; biological control of insects; and management of fertilizer and other chemicals, both in agricultural production and in the maintenance of lawns and urban landscaping.

A similar effort, the High Plains Research Foundation, was organized near Halfway by area businessmen and farmers with the help of the Texas Research Foundation in the late 1950s and turned over to TAES in 1973. The foundation's initial research laid the groundwork for continuing work in soil and water research and in equine and crop breeding programs. Businesses have also improved Texas agricultural production through their own research and by financing public projects.
Among the more successful research efforts is that of the Texas seed-sorghum industry, whose High Plains-based firms lead the world in providing hybrid seed. Much of their output results directly from USDA-TAES efforts to collect sorghum varieties from all over the world, which are then interbred for improved disease and insect resistance, yield, and nutritional quality. Seed producers in corn, cotton, and other crops are also among the state's leading exporters of agricultural products.

By the 1980s private firms and commodity groups began playing an increasing part in funding agricultural research and in cooperative public-private efforts. This coincided with a changing role for United States experiment stations, which increasingly began emphasizing research on issues of environmental and consumer concern. By that time, the American public enjoyed the benefits of food that was both abundant and relatively cheap by world standards. However, criticism of the agricultural system increased because of groundwater pollution, soil erosion, declines in rural communities as farming and ranching became concentrated in larger operations, and other issues. By the 1990s, TAES, ARS, and other public and private groups faced these issues and others, including the concerns of animal-rights groups, increasing pressure from regulatory agencies, and rising research costs. The research agendas of these groups reflect those concerns, with efforts focusing on systems that take into account the complexities of modern agriculture. Integrated pest-management systems, for instance, combine the best-known methods of chemical and biological control, with decreasing emphasis on chemicals, to allow high output with low production and environmental costs. Interdisciplinary research, biotechnological methods, sophisticated electronics, and other techniques have become standard approaches in the research effort.

Still, agricultural research in Texas has in some ways changed little. Though root rot still affects cotton and many other plants throughout Texas, interdisciplinary efforts to understand the fungus causing the disease may lead to more effective methods of controlling it. Research into genetic material that increases yields, resistance to disease, or tolerance of drought in various crops could produce new genetically engineered crops of all types; similar research for livestock may produce cattle that provide leaner, healthier beef that still has the flavor and texture consumers previously associated only with beef containing more fat. Scientists in Texas and throughout the United States continue to seek new ways to produce, process, package, and distribute foods that are lighter, fresher, faster to prepare, and more healthful. Computer-based information and decision systems are becoming more important in such operations as pest management and irrigation. Computerized tractors and harvesting equipment may soon plant, prune, selectively harvest, and even cool and pack many crops automatically.

Two research centers, opened in the 1990s and affiliated with TAES and Texas A&M, illustrate the new approaches. The Institute of Biosciences and Technology, in Houston's world-renowned medical center, focuses on links between agriculture, human medicine, and veterinary medicine. Among its initial research projects were basic studies in genetic structure with applications in both human medicine and agricultural production.
The Crop Biotechnology Center, located on the Texas A&M campus, was organized to bring together researchers and new technology in genetic identification, molecular biology, and applied plant breeding. Key goals in each of the new centers were to encourage further diversification and provide a competitive edge to the Texas economy. Agricultural research has helped lead to production efficiencies that gave Texas agriculture $14 billion in gross sales in 1991, when one-fifth of all Texans worked in jobs related to the production, processing, or marketing of agricultural products. New technology-intensive approaches were expected to help future Texas researchers continue to find better answers to the age-old questions about how best to feed people and livestock, but with increased emphasis on caring for the environment that supports both. See also AGRICULTURE.
- Within an instance method or a constructor, this is a reference to the current object: the object whose method or constructor is being called. You can refer to any member of the current object from within an instance method or a constructor by using this.
- From within a constructor, you can also use the this keyword to call another constructor in the same class. Doing so is called an explicit constructor invocation.
- If present, the invocation of another constructor must be the first line in the constructor.
- The super keyword is used to refer to methods of the parent class (see the sketch below the exercise).

It is imperative that, besides your book, you also read here.

Exercise 1 for Monday 14/1:
a. What is the parent class of Ellipse?
b. Use "super" to call ("invoke") the method print from the parent class of Ellipse and print the values of the fields ("instance variables").
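As a rough illustration of these points, the sketch below uses two made-up classes, Shape and Ellipse; they are stand-ins, not the classes from the course material, whose actual parent class is what part (a) of the exercise asks you to find. The child class chains to another constructor with this(...), calls the parent constructor with super(...), and overrides print() while still invoking the parent version through super.print().

```java
// Hypothetical example: Shape and Ellipse are invented names for illustration only.
class Shape {
    String name;

    Shape(String name) {
        this.name = name;              // "this" distinguishes the field from the parameter
    }

    void print() {
        System.out.println("Shape: " + name);
    }
}

class Ellipse extends Shape {
    double a, b;                       // semi-axes

    Ellipse() {
        this(1.0, 1.0);                // explicit constructor invocation: must be the first line
    }

    Ellipse(double a, double b) {
        super("ellipse");              // call the parent-class constructor
        this.a = a;
        this.b = b;
    }

    @Override
    void print() {
        super.print();                 // invoke the overridden method of the parent class
        System.out.println("Semi-axes: a=" + this.a + ", b=" + this.b);
    }

    public static void main(String[] args) {
        new Ellipse(3.0, 2.0).print(); // prints the parent line, then the field values
    }
}
```

Running main prints the parent-class line first and then the field values, which is the same pattern part (b) of the exercise asks for, applied to the course's own classes.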
From the 1950s until recently, we thought we had a clear idea of how continents form. Most people will have heard of plate tectonics: moving pieces on the surface of the planet that collide, pull away or slide past one another over millions of years to shape our world. There are two types of crust that sit on top of these plates: oceanic crust (that beneath our oceans) and continental crust (that beneath our feet). These move across the surface of the Earth at rates of up to 10cm per year. Many are in a state of constant collision with one another. Continental crust is thicker than oceanic crust. When continents collide, they buckle upwards and sideways to form mountain ranges: the Himalayas, for example. When continental and oceanic regions collide, the oceanic crust slides beneath the continent and gets consumed back into the Earth in a process that geologists call subduction. In these circumstances, the plate on top is subjected to compressing and stretching forces that can create mountain belts such as the Andes in South America. The sinking ocean plate meanwhile melts and can produce volcanoes at the surface. All of this adds new material to the continent. As the plate beneath pushes its way under the one above, large earthquakes can also be generated, like the one that struck Sumatra in 2004 and caused the Boxing Day tsunami. Rip it up and start again For 60 years the orthodoxy has been that these processes gradually form supercontinents, such as Gondwana or Laurasia, where a vast land mass is brought together before slowly breaking up and drifting away in pieces again. This has happened a number of times in cycles since the Earth was formed, collecting and then separating land over and over again. Now we have new information that suggests that the process is more complex than we had thought. When supercontinents break apart, small pieces of so-called “exotic continental crust” sometimes splinter off and get set adrift in newly formed oceanic crust (which is generated in places where continents break up). When the oceanic crust containing the remnant fragment of continental material collides with another continent, the exotic piece of crust is too thick and buoyant to take part in the usual process of subduction. Instead of sliding beneath, it gets stuck at the margin of the continent. When the surrounding zones of tectonic collision recede as the large piece of continental crust increases in size, the newly formed crust is forced to wrap itself around the exotic continental fragment. This creates a dramatic bent mountain belt called an orocline. This theory was first published by a group of Australian academics earlier this year, based on predictions from their 3D computer model. But the field evidence to support their findings was limited, so the race was on to demonstrate that this really does happen. To confuse things further, not all oroclines are necessarily formed in this way: sometimes mountain ranges can bend for other reasons. So the likes of the Texas Orocline in eastern Australia or the Cantabrian Orocline in Iberia would be good places to look for evidence of the new theory. But their existence doesn’t tell us anything by itself. Mountains below the ground This is where my team came in. I have spent the best part of 12 years driving around the outback in eastern Australia, digging holes to bury small seismic sensors. 
These record earthquakes from places like Indonesia, Fiji and Japan, which through a process called seismic tomography has enabled us over time to build up a 3D image of the Earth’s crust in Australia. It is similar to the X-ray-based computerised tomography (CT-scan) that doctors use to construct internal images of parts of the human body. Over the years I planted about 700 of these sensors. The sensors have now enabled us to prove that the theory is correct. Ironically we found what we were looking for, not in any of the world’s known bent mountain ranges but in one of the flattest places on Earth: the Hay plains in western New South Wales, a dry dusty expanse over hundreds of miles. Hay is the site of an old sea that formed and receded due to variations in sea level, during which sediments were deposited on the eroded bedrock below. Our imaging shows that buried underneath it are the remains of exactly the sort of orocline the theory predicted. Now for the rethink… What does this mean for geology? It shows us that continents form in more complex ways than we thought. Scientists will now probably start testing other parts of the Earth’s crust to try and find examples elsewhere, including the oroclines that we can already see. It is very hard to say how widespread these features will turn out to be. Most likely the old version of plate tectonics will still be true in the majority of cases. The discovery may give us new insights into how minerals are formed. I wouldn’t go as far as to say it will help us to find more minerals, but it should add extra sophistication to our predictive framework for saying where and how minerals form. It will also make us think more about what happens when supercontinents break apart, especially smaller pieces the size of Tasmania or the UK. It could mean that a lot of them end up forming new continents through this sort of process. Previously scientists hadn’t given this much thought. Wherever the new findings take us, it may be the beginning of a new chapter in how the world fits together.
October 2, 2003
A weekly feature provided by scientists at the Hawaiian Volcano Observatory.
Continent dwellers take colorful rocks for granite
Geologists on the Big Island know that they are among the privileged few in their profession to work around active lava flows. Yet after a few years in Hawai`i, many are overcome by a secret longing for the more varied rocks of the mainland. Memories of pink and white granite flecked with shiny mica, fossil-laden limestone, or pinnacles of red sandstone evoke nostalgia for continental rocks that cannot be eased by a hike over dark lava flows. How come the rocks of Hawai`i seem so devoid of color compared with those on the continents?
Let's start with some basic geology. The earth's rocks can be divided into three main categories: igneous rocks, which crystallize from magma; sedimentary rocks, which form when fragments of eroded rocks are cemented together to form a new rock; and metamorphic rocks, which form when rocks recrystallize in response to temperatures and/or pressures greater than those under which they first formed. Sedimentary and metamorphic rocks form only a small proportion of the earth's crust relative to igneous rocks, but they add a great deal of variety to continental rocks and landforms that we are missing out here in the middle of the ocean.
In Hawai`i, we see almost nothing but igneous rocks. Most of those are black or gray when young or brown-to-red when old and weathered. The lava in Hawai`i owes its dark colors to a high iron and magnesium content. As it ages, oxidation of the iron creates the orange and red hues. The only white rocks we're likely to encounter on the island (aside from the white crust that sometimes forms on new lava flows due to chemical alteration by hot gas) are pieces of coral--the skeletons of reef-building organisms. Our white sand beaches are also composed of coral fragments eroded from the reef.
In contrast, on the mainland, as on all continents, the earth's crust has a higher concentration of the elements that form quartz and alkali feldspar, minerals that are in short supply here in Hawai`i. These minerals account for the bulk of white and pink rocks in the world, including the igneous rock, granite. Granite forms the backbone of the Rocky Mountains and the Sierra Nevada. In fact, the average composition of the upper continental crust is granite. On the continents, nearly all the white sand beaches derive their color from quartz grains, most of which are eroded from granitic rock.
The reason for the discrepancy between oceanic and continental rock types goes back to the earth's beginnings, when the chemical elements that compose our planet became layered according to density. The result was a dense core of iron and nickel, surrounded by a less dense mantle. The lighter chemical elements rose to become concentrated in the crust, the outermost and thinnest layer of the earth. The ocean basins formed as convective movement deep within the earth split the crust and initiated plate tectonics. Oceanic crust forms at plate boundaries called spreading ridges, where magma from the mantle rises to fill the gap as plates pull away from each other. Because the oceanic crust is formed from mantle material, it is denser and thinner than continental crust and consists mainly of basaltic rock. Oceanic crust may not be as pretty, but it's far more dynamic than its continental cousin.
The average age of oceanic crust is about 55 million years, compared to an average age of 2.3 billion years for continental crust. This is because oceanic crust is constantly being recycled at convergent plate boundaries, where an oceanic plate meets a continent. Since the oceanic plate is the denser of the two, it sinks, or subducts, beneath the continental plate. Because of its lower density, little of the continental crust is recycled. The stable interior of the continents is very old, indeed, with rocks dating back 3.96 billion years. Of course our youngest rocks, the ones that are still flowing, are very colorful, and many visitors would argue that they beat anything the continent has to offer. But variety is the spice of any rock collection, so next time you're on the mainland, keep your eyes on the ground. You just might find a piece of continental crust worth lugging home to the middle of an oceanic plate. Eruptive activity at the Pu`u `O`o vent of Kilauea Volcano continued unabated during the past week. Although no breakouts can be found in the coastal flats, surface activity is visible on Pulama pali in both the Kohola and the August 9 segments of the Mother's Day flow. The Kohola arm of the Mother's Day flow, on the western flow field, has some incandescent patches high on Pulama pali, as well as just above Holei pali. On the east side of the flow field, the August 9 breakout continues to develop crust and so is less incandescent than it has been. Patches of incandescence track the lava from the top of Pulama pali to the gentle slope below. No lava is entering the ocean. No earthquakes were felt on the island during the past 7 days. Mauna Loa is not erupting. The summit region continues to inflate slowly. Seismic activity remains low, with only one earthquake located in the summit area during the last seven days. Visit our website (hvo.wr.usgs.gov) for daily volcano updates and nearly real-time earthquake information. Updated: October 17, 2003 (pnf)
An intuitive understanding of numbers, their magnitude, relationships, and how they are affected by operations. Development of number sense helps a child learn to solve problems conceptually rather than procedurally. Specific skills and topics related to number sense include place value, mental arithmetic, and estimation.
Examples and resources
- The National Council of Teachers of Mathematics provides a guide to "Understanding a Child’s Development of Number Sense", with videos of children at various stages of understanding.
- See Lisa Carboni’s article "Number Sense Every Day" (available from LEARN NC) for another discussion of number sense.
Insulin resistance syndrome is a pathological condition in which the natural hormone insulin becomes less effective at lowering blood glucose levels. The resulting increase in blood glucose may raise levels outside the normal range and cause adverse health effects. Insulin acts as a ligand (a key) that allows the body to unlock the door for glucose to enter specific cells, such as muscle cells, in order to produce energy. Without insulin, certain cells will not allow glucose to enter. Certain cell types, such as fat and muscle cells, require insulin to absorb glucose. When these cells fail to respond adequately to circulating insulin, blood glucose levels rise. The liver helps regulate glucose levels by reducing its secretion of glucose in the presence of insulin. This normal reduction in the liver’s glucose production may not occur in people with insulin resistance.
Insulin Resistance Syndrome
Insulin resistance in muscle and fat cells reduces glucose uptake (and so local storage of glucose as glycogen and triglycerides, respectively), whereas insulin resistance in liver cells results in reduced glycogen synthesis and storage and a failure to suppress glucose production and release into the blood. Insulin resistance normally refers to reduced glucose-lowering effects of insulin. However, other functions of insulin can also be affected. For example, insulin resistance in fat cells reduces the normal effects of insulin on lipids and results in reduced uptake of circulating lipids and increased hydrolysis of stored triglycerides. Increased mobilization of stored lipids in these cells elevates free fatty acids in the blood plasma. Elevated blood fatty-acid concentrations (associated with insulin resistance and type 2 diabetes mellitus), reduced muscle glucose uptake, and increased liver glucose production all contribute to elevated blood glucose levels. High plasma levels of insulin and glucose due to insulin resistance are a major component of the metabolic syndrome. If insulin resistance exists, more insulin needs to be secreted by the pancreas. If this compensatory increase does not occur, blood glucose concentrations increase and type 2 diabetes occurs.
Causes of Insulin Resistance Syndrome
Although the likelihood that you will develop insulin resistance syndrome is most commonly related to your genetics and family history, the following risk factors tend to increase your chances of developing the condition:
– Being physically inactive
– Having a close relative who has diabetes
– Having an Indigenous background, because your family history has not had thousands of years to grow accustomed to European foods and a high intake of sugars
– Giving birth to a baby weighing more than 9 pounds, or being diagnosed with gestational diabetes (diabetes first found during pregnancy)
– Having co-existing medical conditions such as hypertension (high blood pressure), cardiovascular disease, polycystic ovaries, or unusually low high-density lipoprotein (HDL) levels in your blood, all of which have been associated with diabetes and insulin resistance syndrome
– Having other conditions associated with insulin resistance, such as severe obesity or acanthosis nigricans
Sources: National Diabetes Information Clearinghouse (2011): Insulin Resistance and Pre-diabetes. http://diabetes.niddk.nih.gov/dm/pubs/insulinresistance/
Researchers Propose Milking Diatoms to Yield Massive Amounts of Oil or Bio-Hydrocarbon Fuels
18 June 2009
A pennate diatom, Navicula sp., showing an oil droplet.
Scientists in Canada and India are proposing a variety of ways of harvesting oil from diatoms—single cell algae with silica shells—using biochemical engineering and also a new solar panel approach that utilizes genomically modifiable aspects of diatom biology, offering the prospect of “milking” diatoms for sustainable energy by altering them to actively secrete oil products. Their communication appears online in the current issue of the ACS’ bi-monthly journal Industrial & Engineering Chemistry Research.
Richard Gordon, T. V. Ramachandra, Durga Madhab Mahapatra, and Karthick B note that some geologists believe that much of the world’s crude oil originated in diatoms, which produce an oily substance in their bodies. Barely one-third of a strand of hair in diameter, diatoms flourish in enormous numbers in oceans and other water sources. They die, drift to the seafloor, and deposit their shells and oil into the sediments. Estimates suggest that live diatoms could make 10-200 times as much oil per acre of cultivated area compared to oil seeds, Gordon says. The transparent diatom silica shell consists of a pair of frustules and a varying number of girdle bands that both protect and constrain the size of the oil droplets within, and capture the light needed for their biosynthesis.
We propose three methods: (a) biochemical engineering, to extract oil from diatoms and process it into gasoline; (b) a multiscale nanostructured leaf-like panel, using live diatoms genetically engineered to secrete oil (as accomplished by mammalian milk ducts), which is then processed into gasoline; and (c) the use of such a panel with diatoms that produce gasoline directly. The latter could be thought of as a solar panel that converts photons to gasoline rather than electricity or heat.—Ramachandra et al.
Noting that milk is not harvested from cows by grinding them up and extracting the milk, the researchers propose that diatoms essentially be allowed to secrete the oil at their own pace, with selective breeding and alterations of the environment maximizing production.
Mammalian milk contains oil droplets that are exocytosed from the cells lining the milk ducts. It may be possible to genetically engineer diatoms so that they exocytose their oil droplets. This could lead to continuous harvesting with clean separation of the oil from the diatoms, provided by the diatoms themselves...Higher plants have oil secretion glands, and diatoms already exocytose the silica contents of the silicalemma, adhesion and motility proteins, and polysaccharides, so the concept of secretion of oil by diatoms is not far-fetched.
The researchers also note that about one-third of tested diatoms produced α, β, γ, and δ-unsaturated aldehydes in the C7-C12 hydrocarbon range.
With some optimism about the power of systems biology and how malleable microalgae might be, perhaps we could engineer diatoms that would make these compounds, or the lower-molecular-weight alkanes and alkenes, in great quantities...Given that pathways exist for the production of many alkanes, starting with 12-alkane, the production of shorter alkanes within genetically manipulated diatoms might be plausible. If not, we could fall back on known organic chemistry reactions to convert the natural products to alkanes.—Ramachandra et al.
Also noting that with more than 200,000 species from which to choose, and all the combinatorics of nutrient and genome manipulation, finding or creating the “best” diatom for sustainable gasoline will be challenging, the authors offer some guidelines for starting species:
- Choose planktonic diatoms with positive buoyancy or at least neutral buoyancy.
- Choose diatoms that harbor symbiotic nitrogen-fixing cyanobacteria, which should reduce nutrient requirements.
- Choose diatoms that have high efficiency of photon use, perhaps from those that function at low light levels.
- Choose diatoms that are thermophilic, especially for solar panels subject to solar heating.
- Consider those genera that have been demonstrated by paleogenetics to have contributed to fossil organics.
For motile or sessile pennate diatoms that adhere to surfaces, buoyancy may be much less important than survival from desiccation, which seems to induce oil production. Therefore, the reaction of these diatoms to drying is a place to start. The reaction of oceanic planktonic species to drying has not been investigated, although one would anticipate that they have no special mechanisms for addressing this (for them) unusual situation. Genetic engineering of diatoms to enhance oil production has been attempted, but it has not yet been successful. Generally, cell proliferation seems to be counterproductive to oil production on a per-cell basis, which is a problem that has been expressed as an unsolved Catch-22. However, this balance may shift in our favor when we start milking diatoms for oil instead of grinding them.—Ramachandra et al.
T. V. Ramachandra, Durga Madhab Mahapatra, Karthick B and Richard Gordon (2009) Milking Diatoms for Sustainable Energy: Biochemical Engineering versus Gasoline-Secreting Diatom Solar Panels. Ind. Eng. Chem. Res., Article ASAP doi: 10.1021/ie900044j
Learning can be spiced up with some hands-on activities that make science exciting and can make learning much more effective. Investigatory projects, or science projects, teach people important ideas about their world and can also be a lot of fun. Read on for some investigatory project examples your kids will love!
Observing a Chemical Spectrum
One complex but very impressive investigatory project example is spectroanalysis. "Spectroanalysis" is a fancy word for analyzing the spectrum of an object, usually the light given off when the object is burned. To perform this experiment, you'll need a Bunsen burner or other heat source, some things to burn, and a diffraction grating. You can obtain these supplies from Edmonds Scientific. As for the objects to burn, wood, salt, sugar, and various nitrate salts work magnificently. Just make sure you have a few samples of each item. Burn each chemical on a small wood stick individually and observe the color of the flame with and without the diffraction grating, which separates the flame into its component colors, or spectrum. Observe that each chemical gives off a different spectrum, and that this spectrum can be used to identify the chemical very accurately. By recording a spectrum, you can identify a chemical based on how closely its spectrum matches the known spectra given off by other chemicals.
The Capillary Effect
This is an investigatory project example that is fun and safe; it demonstrates the capillary effect, also known as capillary action. Lower a rolled-up paper towel into a glass full of water until about two centimeters of the paper towel are in the water. Observe how the water seems to flow up the paper towel, contrary to what one would expect. Eventually, the paper towel will become fully wet. This demonstrates capillary action, because the water has less of a cohesive force than the adhesive force between the towel and the water. Hence, the towel pulls water up, against gravity. This also works with a very narrow tube in place of a paper towel. To add some color to the experiment, try putting food dyes in the water. Also, observe what happens when you put more than one type of food dye in the water. If you use two dyes of different densities, you should observe that the paper towel eventually separates the colors based on their differing densities.
The Curie Point
Permanent magnets all have a temperature at which they will lose their magnetism. This temperature is known as the magnet's Curie Point. This can be demonstrated easily with a few permanent magnets, some paperclips, and a propane torch. The demonstration should only be done by an adult familiar with the safe use of a propane torch. First, take one of the magnets and prove that it is magnetic by using it to pick up a few paperclips. Now, use the propane torch to heat the magnet until it glows red. At that point, it should be past its Curie Point, which is probably around 840 degrees Fahrenheit. Let the magnet cool down, and then try to use it to pick up a paperclip. You should observe that the magnet no longer has any magnetic properties. This is because the heat has rearranged the magnetic particles present within the magnet. Prior to being heated, the particles were all aligned along one axis. Because each particle gave off a magnetic force, they complemented each other and created a large magnetic force along that axis.
After being heated, the particles are randomly aligned and oppose one another, entirely canceling out the magnetic force that they once produced.
Making an Electromagnet
Another fun investigatory project example is the demonstration of magnetism, especially for younger audiences, as this experiment is both easy and safe. For this experiment, you will need a nail, a copper wire, electrical tape, a D-cell battery, and some paperclips. Take the copper wire and wrap it around the nail. Make sure the copper wire is relatively thin and that the wraps do not overlap but are as numerous as possible. Also, leave about five inches of wire on each side of the wrapped nail. Take the two ends that protrude from the nail and run them over to the D-cell battery. Use the electrical tape to secure one end of the wire to the positive terminal of the battery and the other end to the negative terminal. Run the nail over some paperclips to make sure that the magnet is working. As long as the D-cell battery is charged and attached to the nail via the wire, a magnetic field will be generated. This demonstrates the property of electromagnetism, as the magnet you will have just made is an electromagnet.
If, as Plutarch asserted, a mind is not a vessel to be filled, but a fire to be kindled, this course equips teachers to kindle the fire. Teachers, particularly teachers of students ages 11 to 14, will learn to inspire and cultivate critical thinking among their students. Though teachers of all subjects will benefit from the course, the focus is on literacy and language arts, mathematics, science, and citizenship education. Pedagogical Strategies for Development of Critical Thinking offers both theoretical and practical tools to help teachers embed critical thinking in each part of the teaching process, from the lesson plan to the assessment. Using videos, real-world lesson plans, sample rubrics, and more, participants will define critical thinking and explore how the teaching and learning process changes when shifting from filling vessels with information to fostering independent thinking. Teachers will learn four methodologies that promote critical thinking and study an example of how several academic subjects and methodologies can be connected into a meaningful, multidisciplinary learning activity. Structured as a toolkit that teachers use at their own pace, the course features eight units, each with prompts for reflection and planning exercises that will help teachers bring to life in the classroom what they learn through the course. Whether teachers are new to the notion of critical thinking or have long incorporated it into their classrooms but are looking for some new ideas, Pedagogical Strategies for Development of Critical Thinking can help them kindle the fire of learning in their students.
By registering in this course, I agree that my personal information (including full name, country, email address, and details regarding participation in the course) will be sent to the Organization of American States (OAS), for the purpose of being forwarded to the Ministry of Education to which I belong for certification, provided that the ministry has a formal agreement with the OAS for this purpose. By registering in the course, I give Udemy permission to share my personal information with the OAS and the Ministry of Education of which I am part for the purposes of certification.
Before starting this module, please answer a few questions about critical thinking. We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
Before we start analyzing the methodologies to promote critical thinking in the classroom, let’s create a common ground about the concept of critical thinking by analyzing some examples, definitions, characteristics and methodologies.
Bloom’s taxonomy is a valuable and widely used tool that describes different levels of thinking. The six levels described in Bloom’s taxonomy are divided into higher order thinking and lower order thinking. Critical thinking integrates these two components:
1) The ability to generate information (lower order)
2) Using those skills to guide behavior (higher order)
Thinking critically about a set of facts or other information to make an informed decision requires that the thinker go through all six cognitive levels defined by Bloom.
Before starting this module, please answer a few questions about collaboration and critical thinking. We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
In this unit, we will analyze four collaborative methodologies to promote critical thinking in middle and high school classrooms. First, we will cover the importance of collaborative learning and its relationship with critical thinking, and then we will study each methodology, asking: How does it look in the classroom? How can teachers implement it? We will then consider examples of best practices.
Before starting this topic, please answer a few questions about the Socratic Seminar. We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
Before starting this section, please answer a few questions about Academic Conversation Skills. We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
Before starting this section, please answer a few questions about Project Based Learning (PBL). We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
Before starting this module, please answer a few questions about Service Learning. We hope this quiz will serve as a space to reflect on your own ideas and to track how your current knowledge could potentially change as you participate in this course.
The Inter-American Teacher Education Network (ITEN) is a project led by the Department of Human Development Education and Employment of the OAS. ITEN has created a professional network which empowers its participants to take the lead and learn from each other. One of its goals is to generate change towards the professionalization of teachers and the improvement of education in the Americas. ITEN therefore promotes innovation in the classroom so that teachers can learn and adapt their teaching methodologies. ITEN’s activities provide user-friendly and attractive tools for practitioners to engage in collaborative activities in massive numbers. Technology is a cross-cutting theme in the project, used in all activities and a key strategy for delivery of project support to the target audience.
You Call THIS Geography?Aug 5th, 2011 | By admin | Category: homeschool instruction, Homeschool resourses by Cindy Wiggers I travel the country teaching parents how to teach geography, use timelines, and many other topics. I love to see their reaction when I cover the 5 themes of geography. All too many of us think geography is only about states and capitals, rivers and mountains, and locating countries and their capitals, but that is only a small part of the study of geography. When I share that you can teach geography from your refrigerator, while riding in the car, or while buying an air conditioner, a whole new world is opened to them. Although I can’t present my entire seminar on geography in the space of this article, I’d like to provide some enlightenment on the subject. Perhaps you, too will say, “You call THIS geography? I can do this!” Although knowing states and capitals is an important part of geography it is by no means the core and foundation of the subject. To deepen your understanding of geography it helps to have a basic introduction to its five themes. Location – Where is it located? Cartographers give location according to degrees latitude and longitude. You’ve seen the globe inscribed with horizontal lines across and arching lines around. Do you know the difference between latitude and longitude? The horizontal lines, or lines of latitude, are also called parallels. They’re measured in degrees north or south of the equator, which is the name given to the parallel at zero degrees. Stay with me, now, it gets easier. The fun stuff follows. The imaginary arching lines that cross through every line of latitude are called lines of longitude, or meridians. They’re numbered by degrees east and west of the Prime Meridian, the name given to the zero line. The Prime Meridian runs through Greenwich, England and separates the earth into eastern and western hemispheres. Assuming you live in the Northern Hemisphere, you can determine your latitude location by using the night sky and a protractor. Extend one arm toward the North Star (Polaris) and the other arm toward the horizon. Using the protractor measure the degree of angle between your outstretched arms. Compare your answer with an atlas. How close were you to the accurate figure in the atlas? Okay so basically I’ve described the longitude, latitude grid. You’ll find it on most maps and will use it often throughout life to locate places on the map. Now there are a number of ways you can introduce and use the grid without even thinking about geography. Have you ever played Stratego®? I suspect you hadn’t thought of it as a geography game. Kids can make their own grid and place items or ships in the grid and make up their own Stratego® kind of game. Label the grid with numbers across and letters down and take turns calling off locations in search of your opponent’s “ships”. Now that you have the grid try another kind of location game. Label the spaces across the top of the columns with numbers and the rows with letters. Write a secret word or message in jumbled order in the grid cells using one space for each letter. Record the locations of these letters in order. Fill in the rest of the spaces with other letters. Now let your opponent use your location list to find the correct letters, which spell out the secret message. Do you know east and west from your own home? Find out by using the sunrise and sunset or from a compass. Once you get it figured out start asking your kids what direction is . . . 
the library, grocery store, grandma’s house. Let your children give you directions home from the grocery store using east, west, north, south terms instead of left and right. A simple geography moment in the car and the kids never suspect they’re having a geography lesson! Teach little ones using left and right, above and below terms while putting away their toys or trying to pick up the right item and they’ll easily make the transition to north, east, south, and west later. (i.e. Put the Legos on the right of the Barbies. The book is above the Teddy Bear.) You call this geography? I do. I love to sneak in learning experiences for my kids. They don’t have to know they’re “in school” all the time, even if they are.
Place – What characteristics describe this place?
What is the terrain, the climate, the soil like in the place? What kind of plants and animals thrive there? What makes this place special? This is all about the characteristics of a place. Learning about geology, weather, agriculture, botany, animal studies, plate tectonics, and water is all, in large part, studying geography. Find out why an animal lives where it does and what kind of food and conditions are optimal for its reproduction and proliferation and you’ll be learning about geography. Weather patterns are all directly related to the physical terrain, location on the globe, and tilt of the earth. Make a salt dough map of your community, your state, or any place your studies take you and your kids will gain a better understanding and have a blast doing it. Form the terrain of the area with the dough, let it dry, and color it with craft paint. While reading novels or books pay close attention to the words used to describe the place of the setting. You’ll find a wide variety of geography terms. While you’re at it assign the terms as vocabulary words and have your students draw a picture of the term accompanied by a short definition. You’ll be amazed at how much more meaning the story has when you have a visual perspective of the place.
Study the weather
Learn how to predict the weather using natural tools. Did you know that caterpillars hatch into butterflies (or moths) only during a drop in the barometric pressure? When the barometric pressure drops it brings a storm. The winds from the storm fling the new insects far from their home where new sources of food await them. This and many other interesting weather facts and lore are included in a book called The Weather Companion by Gary Lockhart.
Relationship – What’s the relationship between places and people?
Did you know that choosing between an air conditioner or swamp cooler is a mini geography lesson? What’s a swamp cooler? Let me explain. I grew up in southern Indiana in the Ohio River Valley where humidity is a given. We cooled our homes by removing excess moisture from the air and blowing a fan across the process to move cold air into the room. “Close the windows, close the door, the air conditioner’s on,” I often heard my dad say. When I moved with my husband and children to Colorado, we didn’t use an air conditioner to cool the air. Colorado has an arid (dry) climate, very little humidity. We installed a swamp cooler (evaporative cooler) in our home to cool the air. It works on a nearly opposite principle. Instead of removing moisture, a swamp cooler pumps moisture into the air; a fan blows the evaporating moisture into the room and voila, the room is cooler. Want the rest of the house to cool too? OPEN the window to draw the cool air into that room.
Now what does all of this talk about air conditioners and swamp coolers have anything to do with geography? It’s all about the relationship between people and their environment. The characteristics of a place may make people uncomfortable. So what do we do? We make a change to suit our needs. That’s what relationship theme in geography is all about. We build bridges over rivers, construct levees and dams, and roads and buildings. The construction techniques and architecture often have the geography of the area at its central core. How many adobe houses do you see in Delaware? (Not much clay found there to build homes) Why are homes in Belize nearly all made of concrete block? (Hurricanes can’t blow over a concrete block home.) It’s all about the relationship between the place and the people who live there. Watch for relationship between people and the place where they live and you’ve got yourself a mini-geography lesson. What is the main source of outdoor recreation where you live? It depends upon what characteristics your home has and what the people of your community do with it. You can’t ski on mountains in Kansas, but Colorado generates plenty of revenue from the Rocky Mountains. Have you ever noticed that the most highly populated areas of the world are also areas with at least one major river or other water source? Notice how many state capitals or large cities are located on a river. Movement – How do places affect movement? Watch for patterns of movement in the places your historic novels take you. Map the Lewis and Clark Expedition or the lives of the Ingalls family in the Little House series of books. See how many different modes of transportation you can take in one day. On a flight to say, Atlanta you can easily use a car, bus, walk, train, that people-mover thingie (anyone know what it’s called?), and the airplane itself in one day! Did you wonder in the first paragraph about having a geography lesson in your refrigerator? Well here’s where you’ll find out how. Create a game from the ingredients in your refrigerator by having the children make a list of the foods on the shelves and in the drawers. Now, next to each item record WHERE it came from. Not sure? A field trip to the grocery store might be in order. Check out the labels on the packaging and on the boxes in the produce section. Get a big laminated outline map of the world and start marking the places on the map. Draw a line from the place to your home. Learn how these food items traveled from their place of origin to your refrigerator. I bet at least 3 modes of transportation were used. Oranges were trucked from the orchard to the processing plant, maybe flown across country, packed in a refrigerated truck for delivery to your store, in your car, and carried into your house on foot. Why are eggs and milk from near locations while fruits came from a long distance? What kind of climate and soil conditions is needed to grow this food? How are perishable foods transported differently than non-perishable food? . . . Just a bunch of stuff to talk about, while eating that peanut butter and jelly sandwich for lunch. Oh, and while you’re at it, you can challenge your kids to nibble their sandwich into the shape of a state. Or how many different states can you make from one sandwich? And how would you best travel from where you live to visit the state capitol? You can do this same sort of fun activity searching items throughout your home for the “Made in” tag. 
List the items and where they came from, and learn about the people who may have worked in a factory to produce the goods.
Regions – How is this place similar?
A region is any area with similarity. Regions can be based on climate, physical features, culture or just about anything. The Rocky Mountain region spans portions of a half dozen or more states plus parts of Canada. The region is not each of the whole states, only that part that is mountainous. The Great Plains also take a wide swath across Middle America. There are the corn belt, the wheat belt, the Bible belt, the coastal regions, tropical regions, countries, cultures, and more. Learning about different cultures, their traditions, clothing, religions, and languages is all about geography. Collecting stamps or coins from other countries makes for a memorable experience of learning about the world. The symbols on the coins and stamps say a lot about the culture and history of each country. Children Like Me, published by Dorling Kindersley, is a great book for learning about other cultures through “meeting” school age children from countries all over the world.
Well, if I’ve met my objective, this mini-lesson on the 5 Themes of Geography has served to open your eyes to the vast amount of opportunities around you to study geography. There’s a lot to be said for integrating geography wherever it fits in some of these examples I’ve shared with you, but when you’re ready to teach geography as an independent course, check out my Trail Guide to Geography series of books. For more information log onto www.geomatters.com and look for Trail Guide to World Geography, Trail Guide to U.S. Geography, and Trail Guide to Bible Geography.
2. What is efficiency?
We can measure the efficiency of an algorithm in two ways:
- Time taken - the number of steps in the algorithm
- Space used - the amount of data that the algorithm needs to keep track of at once
Time efficiency is not measured in the number of seconds an algorithm takes to complete. Instead, it means how many steps there are in the process before the algorithm can complete its task. The number of steps required often depends on the input. For example, it is easier to sort "2 1 3" into ascending order than to sort "3 1 2 5 4 8 3 2 2 1 9".
Time efficiency can be calculated in three different ways:
- Average time to complete
- Worst case time to complete
- Best case time to complete
Think about a list which contains 1,000 items. The task is to sort this list into ascending order. Below are the time efficiency measures of two different algorithms:

| Efficiency measure | Algorithm A | Algorithm B |
| --- | --- | --- |
| Average time | 50 steps | 70 steps |
| Best case time | 10 steps | 10 steps |
| Worst case time | 500 steps | 250 steps |

Looking at the results, which algorithm is the best? It depends on which measure is important for that particular software program. If average time is the most important, then algorithm A is the best with 50 steps; but if worst case time is more important, then algorithm B is twice as fast.
Space efficiency
This measures efficiency in terms of how much memory the algorithm needs to complete its task. For example, while sorting a list, some algorithms need to remember the entire list at all times, while other algorithms only need to remember the bit that they're working on right now.
Challenge: see if you can find out one extra fact on this topic that we haven't already told you.
Click on this link: What is algorithm efficiency?
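To make step counting concrete, here is a small illustrative sketch in Java (not from the original page; the class and method names are made up for this example). It counts the comparisons a simple linear search performs, so you can see how the best, worst, and average cases differ depending on where the target value sits in the list. It also keeps only a couple of extra variables, so its space cost stays constant no matter how long the list is.

```java
// Counting "steps" (comparisons) made by a linear search.
// Illustrative sketch only: here one comparison counts as one step.
public class StepCounter {

    // Returns the number of comparisons needed to find target,
    // or the full list length if the target is not present.
    static int linearSearchSteps(int[] list, int target) {
        int steps = 0;
        for (int value : list) {
            steps++;                    // one step per comparison
            if (value == target) {
                return steps;
            }
        }
        return steps;                   // worst case: looked at every item
    }

    public static void main(String[] args) {
        int[] list = {4, 8, 15, 16, 23, 42};

        System.out.println("Best case (target is first): "
                + linearSearchSteps(list, 4));      // 1 step
        System.out.println("Worst case (target is last): "
                + linearSearchSteps(list, 42));     // 6 steps
        System.out.println("Middle-of-the-road case:     "
                + linearSearchSteps(list, 16));     // 4 steps
    }
}
```

Averaging the step count over many randomly placed targets would give the "average time" figure from the table above; the single best and worst runs give the other two rows.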
Sudden Infant Death Syndrome (SIDS)
SIDS is the leading cause of death among infants between 1 month and 1 year of age. Approximately 1,500 infants die from SIDS in the United States each year. SIDS most commonly affects babies between ages 1 and 4 months; 90% of cases involve infants younger than 6 months.
What is SIDS? Sudden Infant Death Syndrome, also known as SIDS, is the sudden, unexpected death of a baby younger than 1 year of age that doesn’t have a known cause even after a complete investigation. According to the American Academy of Pediatrics, research shows parents and caregivers can take the following actions to help reduce the risk of SIDS and other sleep-related causes of infant death:
- Always place your baby on his or her back for every sleep time.
- Always use a firm sleep surface. Car seats and other sitting devices are not recommended for routine sleep.
- The baby should sleep in the same room as the parents, but not in the same bed (room-sharing without bed-sharing).
- Keep soft objects or loose bedding out of the crib. This includes pillows, blankets, and bumper pads.
- Wedges and positioners should not be used.
- Pregnant women should receive regular prenatal care.
- Don’t smoke or use illegal drugs during pregnancy or after birth.
- Breastfeeding is recommended.
- Offer a pacifier at nap time and bedtime.
- Avoid covering the infant’s head or overheating.
- Do not use home monitors or commercial devices marketed to reduce the risk of SIDS.
- Infants should receive all recommended vaccinations.
- Supervised, awake tummy time is recommended daily to facilitate development and minimize the occurrence of positional plagiocephaly (flat heads).
The Jasper County Prevent Child Abuse Agency and Parents As Teachers can be reached by contacting them at 641-831-4531. The Jasper County Health Department recently received a Love Our Kids grant to promote injury prevention and will be providing more education to the public.
Find additional information on these websites:
- National Institute of Child Health and Human Development
- Baby Center
- National Sleep Foundation
- Centers for Disease Control and Prevention
- AAP News and Journals Gateway
Grief Resources: If you or someone you know has lost a baby, the following organizations offer support.
Climate change affects birds in different ways. It can alter distribution, abundance, behaviour, even genetic composition. It can also affect the timing of events like migration or breeding. Climate change can affect birds directly, through changes in temperature or rainfall. It can also lead to increased pressure from competitors, predators, parasites, diseases and disturbances like fires or storms. And climate change can act in combination with other major threats like habitat loss and alien invasive species, making the overall impact worse. Because birds are one of the best studied groups of organisms, we already have the data needed to demonstrate that birds are being affected by climate change. This is occurring in a variety of ways.
To understand why the seas are so salty, look no further than the water cycle. Simply put, the water cycle begins when fresh water falls from the sky in the form of rain. It eventually ends up in rivers, lakes and oceans, where it soon evaporates to form clouds and repeat the cycle. If you dig a little deeper into each stage of the water cycle, you'll see just how salt gets into the mix. That fresh water that falls as rain isn't 100 percent pure. It mixes with carbon dioxide in the atmosphere on the way down, giving it a slightly acidic quality. Once it reaches the Earth's surface, it travels over land to reach area waterways. As it passes over the land, the acidic nature of the water breaks down rocks, capturing ions within these rocks and carrying them to the sea. Roughly 90 percent of these ions are sodium or chloride — which, as we know, form salt when they band together [source: NOAA]. Fresh water that reaches the ocean evaporates to form clouds. However, the sodium, chloride and other ions remain behind, where they accumulate over time to give the sea its characteristic saltiness. Hydrothermal vents on the ocean floor release additional dissolved minerals, including more sodium and chloride, further contributing to the briny nature of the sea [source: USGS]. What's surprising is just how much the salt from runoff and underwater vents has built up since oceans formed. Dissolved salts make up 3.5 percent of the weight of all ocean water [source: USGS], and if you could remove this salt from the sea, it would form a layer 500 feet (153 meters) thick over all of Earth's land mass. That's about the height of a 40-story building [source: NOAA]. One question, though: If the seas get their salinity from runoff, why do lakes remain relatively salt-free? For most lakes, water flows both in and out of the lake via rivers and streams. Salt ions that end up in the water are carried out, keeping the lake fresh. These ions eventually end up in oceans, which serve as a "dumping ground" of sorts for runoff and the minerals it contains. Bodies of water with no outflow, such as the Dead Sea or the Great Salt Lake in Utah, maintain a level of salinity on par with or higher than that of the ocean [source: Jackman].
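As a rough back-of-envelope check of that 500-foot salt layer figure, the sketch below multiplies the ocean's mass by the 3.5 percent salt fraction, converts the result to a volume of solid salt, and spreads it over the land area. The three constants are assumed round values (an ocean mass of about 1.4 × 10^21 kg, a rock-salt density of about 2,200 kg/m³, and a land area of about 1.5 × 10^14 m²); they are not taken from the article, but the answer lands close to the quoted layer thickness.

```java
// Back-of-envelope check of the "salt layer over all land" figure.
// The three constants below are rough, assumed round values, not from the article.
public class SaltLayerEstimate {
    public static void main(String[] args) {
        double oceanMassKg  = 1.4e21;   // approximate mass of all ocean water, in kg
        double saltFraction = 0.035;    // dissolved salts ~3.5% by weight (from the article)
        double saltDensity  = 2200.0;   // kg per cubic metre, roughly rock salt (halite)
        double landAreaM2   = 1.5e14;   // approximate land surface area of Earth, in m^2

        double saltMassKg   = oceanMassKg * saltFraction;   // about 4.9e19 kg of salt
        double saltVolumeM3 = saltMassKg / saltDensity;     // about 2.2e16 m^3 of solid salt
        double layerMetres  = saltVolumeM3 / landAreaM2;    // spread evenly over all land

        System.out.printf("Layer thickness: about %.0f metres (~%.0f feet)%n",
                layerMetres, layerMetres * 3.281);          // roughly 150 m, or ~490 ft
    }
}
```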
The Great Pyramid of Giza (called the Pyramid of Khufu and the Pyramid of Cheops) is the oldest and largest of the three pyramids in the Giza Necropolis bordering what is now El Giza, Egypt. It is the oldest of the Seven Wonders of the Ancient World, and the only one to remain largely intact. Egyptologists believe that the pyramid was built as a tomb for fourth dynasty Egyptian Pharaoh Khufu (Cheops in Greek) over an approximately 20 year period concluding around 2560 BC. Initially at 146.5 metres (480.6 ft), the Great Pyramid was the tallest man-made structure in the world for over 3,800 years, the longest period of time ever held for such a record. Originally, the Great Pyramid was covered by casing stones that formed a smooth outer surface; what is seen today is the underlying core structure. Some of the casing stones that once covered the structure can still be seen around the base. There have been varying scientific and alternative theories about the Great Pyramid's construction techniques. Most accepted construction hypotheses are based on the idea that it was built by moving huge stones from a quarry and dragging and lifting them into place. There are three known chambers inside the Great Pyramid. The lowest chamber is cut into the bedrock upon which the pyramid was built and was unfinished. The so-called Queen's Chamber and King's Chamber are higher up within the pyramid structure. The Great Pyramid of Giza is the only pyramid in Egypt known to contain both ascending and descending passages. The main part of the Giza complex is a setting of buildings that included two mortuary temples in honor of Khufu (one close to the pyramid and one near the Nile), three smaller pyramids for Khufu's wives, an even smaller "satellite" pyramid, a raised causeway connecting the two temples, and small mastaba tombs surrounding the pyramid for nobles. Many alternative, often contradictory, theories have been proposed regarding the pyramid's construction techniques. Many disagree on whether the blocks were dragged, lifted, or even rolled into place. The Greeks believed that slave labour was used, but modern discoveries made at nearby worker's camps associated with construction at Giza suggest it was built instead by tens of thousands of skilled workers. Verner posited that the labor was organized into a hierarchy, consisting of two gangs of 100,000 men, divided into five zaa or phyle of 20,000 men each, which may have been further divided according to the skills of the workers. One mystery of the pyramid's construction is its planning. John Romer suggests that they used the same method that had been used for earlier and later constructions, laying out parts of the plan on the ground at a 1 to 1 scale. He writes that "such a working diagram would also serve to generate the architecture of the pyramid with precision unmatched by any other means." He also argues for a 14 year time span for its construction.
Students and teachers work together to create teams. For this competition, a team is defined as a group of up to six students from the same school with one teacher, advisor or mentor. (Exceptions are possible for informal science educators and educational groups: please ask for more information.) Middle school students must be in grades six through eight; high school students must be in grades 9 through 12. Teams must submit a registration form by the date on the dates and deadlines page. Each middle school teacher and high school teacher may register up to two teams, and schools that are combined middle and high schools may register two teams in each category. Multiple teams and classroom entries will be considered upon request. Teams will receive additional materials and instructions after they register. An information packet will be sent to each registered team with important dates, details and project formats for the competition. QuikSCience Challenge staff and USC mentors will also be available to each team to help with ideas and access to existing curricular materials. Each team must submit a final list of team members by the date on the dates and deadlines page. (The listing of "alternate members" is no longer permitted.) To compete in the QuikSCience Challenge, student teams must develop and submit a portfolio that documents all of the following components. 1. A lesson plan for an aquatic scientific topic Students work as a team to create and teach at least one new lesson plan that addresses a topic related to the ocean, a freshwater environment or a watershed. The students should design the lesson plan to fit into their school's science curriculum for their grade or for a lower grade classroom. The team's teacher or advisor will provide guidance and oversight for content accuracy. The lesson plan should be related to the team's community service project. For ideas on how to teach: The team's teacher or advisor can ask students to reflect on how they learn. Think of your favorite teacher. How did he or she teach? What activities did you and your classmates enjoy? How did the teacher speak? What did you and your classmates find most interesting in the lessons? What resources or equipment did your teacher use? What is your favorite way to learn? How do you learn best? What do you enjoy about school—music, drama, small group work, problem solving or something entirely different? The team's teacher or advisor can help the students develop their own strategies for teaching and learning by encouraging them to create a lesson and teach it the way they would like to be taught. For more details, see guidelines and format for the lesson plan (PDF). The description of the lesson plan should include the following: - How it fits into the class' science curriculum - What California state science education standards it meets - What grade levels it serves - How many times it was presented - The total number of students who received it - How the students responded to the lesson plan - How the lesson plan was modified in response to student reactions - Any improvements that could be made to it For supplemental resources and examples of previous winning lesson plans, click here. 2. Community Service The student team should lead a larger group of peers and/or members of the community in a public service activity involving the oceans or anything that affects the ocean and/or freshwater environments (like storm-water runoff). 
The students should be creative and link their community service project to educational activities in science classes at their school. If this component of the team's portfolio is part of an ongoing community service project at the school, the students should describe how their team expanded it. The students' description of the community service activity should include the following information: - Date and location of the event or activity - Benefit to the community - Number of people reached - Relationship to the school - Surprises or outcomes 3. A Solution for an Environmental Challenge As the students work on the topic for their project, they should look for solutions to environmental problems connected to it. For example, if the topic addresses the dangers of polluted runoff into the ocean, what are creative solutions to prevent runoff, to help animals affected by the runoff, or to reclaim beaches from runoff damage? This part of the student team's portfolio should describe the problem, a solution to it and the action that would be required to make that solution a reality. For high school teams, this should be a 1-2 page document; for middle school teams, this should be a half page to a full page in length. For more information, see: FOR HIGH SCHOOL TEAMS ONLY 4. Write a Proposal for an Ocean or Aquatic Science Research Project The project can be on any science question or topic that interests you that is related to the ocean, an aquatic environment or a watershed. The proposal should describe the issue or question that you are addressing, what you hope to learn, and how you would go about completing the project, including the methods and equipment that you would use. Consider the environmental challenge(s) that relate to your topic. Proposals should be a total of 5-7 pages in length. See guidelines and format for writing a research proposal. *Note: This requirement is only for a proposal from the high school student team. The team does not have to conduct the research itself. The research proposal requires some use of bibliographic citation. Check here for links to online guides. 5. Student and Teacher Reflections Each member of the student team and the team's teacher or advisor must write a one-paragraph reflection on his or her personal experience in the QuikSCience Challenge. These reflections can be combined into one Word document. 6. Creative Presentations of the Project The student team should create a visually interesting presentation that documents its project. Please remember that the judges must be able to view a team's entire creative presentation within 10 minutes to select winners. Additional materials will not be considered. Each team in the QuikSCience Challenge must submit its entire presentation in 2 formats: - A printed-paper version (double-sided preferred) of all project components (photos, photographs/copies of your posters, physical models, websites, etc.) - A digital version on two disks (either CD or DVD) Please provide a duplicate copy of each disk (four disks in all), or submit two flash drives with everything on each drive. Disk 1: “Text Documentation” must include the Portfolio Checklist, Project Summary, Lesson Plan, Community Service, Research Proposal (high school only), Environmental Challenge, and Student Reflections. (Submit two copies of this disk.) Disk 2: “Presentation” must contain the team's unique project presentation, photographs, PowerPoint presentations, etc. Additionally, we require a 2-minute Open House video clip. (Submit two copies of this disk.) 
Notes about presentation materials Physical models or presentation boards should not exceed 30" x 48". Individual photographs in digital format should not exceed 100 KB. PowerPoint presentations should not exceed 10 MB. Videos should not exceed 10 MB. Please use formats that work with RealPlayer, QuickTime or Windows Media. Video presentations are best linked to PowerPoint presentations via YouTube or Vimeo using a password. If DVDs are used for the digital versions of the portfolios, use only conventional 12 cm discs. (Do not use 8 cm discs, known as MiniDVDs.) Please remember that CDs are preferred. Whenever possible, please limit the use of paper and plastic. For example, use both sides of sheets of paper for printing. Also, please do not use plastic sleeves to cover the pages in the printed-paper version of the project portfolio. 7. NEW! Create a 2-Minute Open House Video Clip: Please include a short (2-minute maximum) video clip, or movie, with your school name and location, and have the team members present a brief description of the overall project and highlight any special events. This will be played at the Open House Awards Ceremony for everyone to see and may be posted to the QuikSCience YouTube page. USC mentors and staff are prepared to assist you in preparing this 2-minute video clip if needed. 8. A portfolio checklist and project summary These documents record the details of each component of the project. The Portfolio Checklist and the Project Summary must be in the printed version of the portfolio and on the "Text Documentation" disk of the digital version. 9. Deliver the Presentation of the Project The project materials must be received by mail (not just postmarked) or delivered to the USC Wrigley Institute for Environmental Studies, AHF 410, on the main USC campus by 5 p.m. on Monday, February 11, 2013. Please allow extra time for traffic if you plan to deliver the presentation in person.
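Teams that are comfortable with a little scripting may find it convenient to check their digital files against the size limits above before burning the disks. The sketch below is not part of the official competition materials; the folder name and the file extensions checked are illustrative assumptions only.

```python
# Illustrative sketch only (not an official QuikSCience tool): flag portfolio
# files that exceed the size limits stated in the guidelines above.
from pathlib import Path

LIMITS_BYTES = {
    ".jpg": 100 * 1024,         # individual photographs: 100 KB
    ".jpeg": 100 * 1024,
    ".png": 100 * 1024,
    ".ppt": 10 * 1024 * 1024,   # PowerPoint presentations: 10 MB
    ".pptx": 10 * 1024 * 1024,
    ".mp4": 10 * 1024 * 1024,   # videos: 10 MB
    ".mov": 10 * 1024 * 1024,
}

def check_portfolio(folder: str) -> list[str]:
    """Return a list of files in `folder` that exceed the stated limits."""
    problems = []
    for path in Path(folder).rglob("*"):
        limit = LIMITS_BYTES.get(path.suffix.lower())
        if path.is_file() and limit is not None and path.stat().st_size > limit:
            problems.append(f"{path}: {path.stat().st_size / 1024:.0f} KB "
                            f"(limit {limit / 1024:.0f} KB)")
    return problems

if __name__ == "__main__":
    # "portfolio_disk_2" is a hypothetical folder name for the Presentation disk.
    for issue in check_portfolio("portfolio_disk_2"):
        print(issue)
```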
Education for All is a movement to provide education to all children and young people. The movement was launched by UNESCO at the World Education Conference, where speakers pressed for reducing illiteracy and universalizing primary education. After some years, good results were seen, as many countries made progress toward the Education for All goal. Six key goals were identified for meeting the learning needs of children. UNESCO, civil society agencies, governments, development agencies and the media are all partners working to achieve the goal of education for all. Goals of Education for All: Care and education for children: In this goal, care and education for young children were given the highest priority. The movement supports children and their families and helps children develop in all areas, including physically, socially and emotionally. It focuses especially on those who are particularly vulnerable, such as children living in poor conditions, orphans, minorities and girls. Free primary education for all: The second goal was to provide free primary education for everyone. Primary education should be free of charge, and attendance at the primary level should be compulsory. Children belonging to minorities should be given special attention because they face difficult circumstances. After some years, an increase in the literacy rate was seen and the number of children going to school rose. Giving these children special attention was a central aim of the movement, and the results were good for it. The third goal of Education for All was to promote learning and life skills in young people and adults. It emphasised the learning needs of all people, with learning programmes offered equally to boys and girls. The movement argued that governments were not giving enough priority to adults learning new skills, even though such skills are needed to live a successful life, and that special budgets should be announced for this purpose. A main aim of the movement was to make young people skilful. Increase the literacy rate: One of the goals was to increase adult literacy by about 50 percent. Because women have the right to an education, special attention was given to them, and learning opportunities were provided for adults. Equality between the genders: The fifth goal was gender equality in education. An equal number of girls and boys should be enrolled in primary and secondary education, and boys and girls should have an equal opportunity to learn and to benefit from education. The last goal of Education for All is to provide quality education. The movement wants to improve the education system so that each person can learn the skills necessary for a successful life; a quality education can change a person's circumstances. These were the six goals of the Education for All movement, which aims to educate every person so that they can live a good life.
Nature is home to numerous wonderful organisms, each blessed with a set of unique features that ensures their survival against harsh elements. One prime example is the water bear, an eight-limbed, water-dwelling micro-animal that inhabits a range of diverse ecosystems, from mountaintops to tropical forests and even the depths of the oceans. Also known as tardigrades, these microscopic invertebrates are known for their ability to survive the most extreme natural conditions, including extended desiccation, temperatures as low as about −272 °C (−458 °F) and as high as 149 °C (300 °F), near-100 percent fluid loss, powerful ionizing radiation and possibly even the vacuum of outer space. Discovered back in 1773, tardigrades have been the subject of numerous scientific studies. A recent project, for instance, has revealed that these tiny organisms turn to glass to protect themselves against extreme desiccation. When faced with severe natural conditions, they produce a particular type of “bioglass” that in turn preserves the essential molecules and proteins inside them until they are restored to life. According to the scientists, the discovery could pave the way for special drought-resistant crops as well as longer-lasting vaccines. A few months back, a team led by Juan de Pablo from the University of Chicago shed more light on the type of glass produced internally by water bears to combat dehydration. While the scientists are still working to unravel the exact mechanism behind the process, their research has revealed that the glass ensures the organisms’ survival even when they have lost nearly 97 percent of their body water. What is more, this new type of glass could help enhance the efficiency of a variety of electronic devices, including optical fibers, light-emitting diodes and photovoltaic cells. Speaking about the breakthrough, de Pablo said: When you remove the water, they very quickly coat themselves in large amounts of glassy molecules. That’s how they stay in this state of suspended animation. We have been able to generate new glasses with new and unknown properties through this combination of experiment, theory, and computation. As part of yet another research project, biologist Thomas Boothby and his team from the University of North Carolina at Chapel Hill have come one step closer to untangling the mysteries surrounding these microscopic organisms. For the first time ever, the scientists have managed to identify the tardigrade-specific genes responsible for the production of the bioglass. The genes, according to the researchers, code for specific types of proteins known as intrinsically disordered proteins (IDPs), which in turn create the highly specialized glass. The research, recently presented at the American Society for Cell Biology, focuses on these intrinsically disordered proteins. As their name suggests, the proteins are incredibly flexible and shapeless under normal conditions. Under extreme dryness, however, they are produced in larger amounts, after which they rearrange themselves into solid bioglass. These glass-like structures envelop essential cellular components, molecules and other proteins, preventing them from falling apart during severe desiccation. When exposed to water again, the glass melts, restoring the organisms to life. For the research, the team used genetic engineering to lower the levels of IDPs present in the water bears. 
As expected, the organisms exhibited reduced capacity to withstand dehydration, but remained fairly unaffected by other harsh elements, such as extreme cold. This indicates that the creatures possess different features for survival against different types of extreme stresses and conditions. To further test the proteins' efficacy, the scientists introduced them into human epithelial cells (HeLa cells). They explained: [W]e found that when expressed in HeLa cells, desiccation induced a relocalization of these IDPs, which under hydrated conditions appeared diffuse throughout cells’ cytoplasm, to specific cytoplasmic organelles – suggesting that individual proteins are targeted to different parts of cells, perhaps protecting specific cellular compartments. We found in vitro these proteins formed biological glasses when dried. What is more, the team genetically programmed certain species of yeast and bacteria to generate IDPs. When placed under prolonged desiccation, these microscopic organisms were found to be more capable of surviving the extreme dry conditions. Once fully studied, the mechanism could help develop healthy, drought-tolerant crops. As Boothby has pointed out, it could also be used to prevent certain enzymes from drying out, thus reducing the amount of money spent on storing vaccines. The team added: [A]round 80 percent of the costs of vaccination programs in developing countries comes from having to keep vaccines cold… The enzyme [lactate dehydrogenase] loses its activity when dried out. But when the researchers mixed the enzyme with the glass proteins before drying, the enzyme bounced back to normal activity when rehydrated. Mixing in water bear proteins after drying didn’t help, indicating that the glass proteins need to encase other molecules to protect them.
Why breakfast is important Breakfast gives children the energy they need to handle their busy days. Children who eat a healthy breakfast go longer without feeling hungry. This means they can concentrate on playing, learning, remembering and solving problems better. Research shows that a healthy breakfast can help children perform better at school. Breakfast eaters also tend to: - have better school attendance than those who regularly skip breakfast - be more emotionally healthy than non-breakfast eaters - be less likely to snack on sugary or fatty foods, which helps them stay at a healthy weight. What a healthy breakfast looks like A healthy breakfast needs to have a balance of carbohydrates, protein and fat to keep energy levels steady all morning. For babies and toddlers, breakfast might be rice cereal, milk and fruit. School-age kids and teenagers might like to choose from porridge, low-sugar wholegrain cereal, a boiled egg, an omelette, wholegrain toast, fruit and yoghurt. Choosing healthy foods and eating enough breakfast will help your child get through the morning. Highly processed, sugary cereals won’t give her as much energy and will make her feel hungry sooner. Encouraging reluctant breakfast eaters You’re an important role model when it comes to eating. Showing your kids that breakfast can be yummy and that it’s an important part of your day is a good way to encourage them to eat it. You can talk about its benefits with them too. Here are more ways to encourage good breakfast habits: - Make breakfast a time to sit and eat with your kids. Being a good example is a powerful way to change their habits. - If your child says he’s not hungry in the morning, try making a healthy smoothie, with milk, yoghurt and fruit, instead of a more traditional breakfast ‘meal’. - Another option is for your child to eat a small meal at home, such as a small bowl of cereal or a piece of fruit. You can then give your child a healthy snack to eat before school starts – for example, a sandwich, muesli bar or wholegrain fruit bun. - If a hectic morning schedule gets in the way of breakfast, try setting your child’s alarm 10 minutes earlier, or packing your child’s bag and laying out his clothes the night before. You could even get the next day’s breakfast ready at night, putting dry cereal in a covered bowl, or placing toppings like sliced fruit, nuts or raisins in a muffin tray. Fussy eaters often respond better at meal times if the food is more interesting than usual. Young children love toast or fruit, and older kids can occasionally prefer ‘non-breakfast’ foods, like leftover pasta. - Older children and teenagers might refuse to eat breakfast as a way of showing their independence. Try not to make a big deal about this. You could suggest your child takes a piece of fruit, a smoothie or a muffin to have on her trip to school instead. She might also like to choose her own healthy breakfast options when out shopping. A recent study showed that children who skipped breakfast were more likely to have parents who didn’t encourage them to eat in the morning. Nutritional benefits of breakfast Human bodies make energy from carbohydrates, breaking them down into a sugar called glucose. After a night without food, your body has used up this glucose. It starts to use stores of energy from your muscles instead, like glycogen and fatty acids. This is why we need a fuel top-up before we tackle the day. Eating breakfast will give your child energy and get his metabolism started. 
It will help his body use the food he eats more efficiently throughout the day. Also, children who miss breakfast don’t ‘catch up’ on those missed nutrients during the rest of the day. Getting your child to eat well ‘Your children see what you do, and they want to do it too.’ Many of the mums and dads in this short video say that their best advice for getting young children to eat well is to eat well yourself. Their other tips for getting kids to eat a balanced diet include eating the same things as your children, eating together, and involving children in meal preparation. The video also covers how much food children should eat and common concerns you might have about your child’s weight.
2. Causes of Allergies To help answer this question, let us look at a common household example. A few months after the arrival of a cat in the house, the father began to get itchy eyes and episodes of sneezing. One of the three children developed coughing and wheezing, especially when the cat came into her bedroom. The mother and the two other children did not experience any reaction to the presence of the cat. How do we explain this? The immune system is the defense mechanism organized by the body against foreign invaders, particularly infections. Its job is to recognize and react to these foreign substances, which are called antigens. Antigens are substances that can cause the production of antibodies; they may or may not lead to an allergic reaction. Allergens are those antigens that cause allergic reactions and the production of IgE. The purpose of the immune system is to mobilize its forces at the scene of the attack and destroy the enemy. One way it does this is to create protective proteins called antibodies that are specifically directed against particular foreign substances. These antibodies, or immunoglobulins (IgG, IgM, IgA, IgD), are protective and help destroy foreign particles by attaching themselves to their surface, making it easier for other immune cells to destroy them. People who are allergic, however, develop a specific type of antibody called immunoglobulin E, or IgE, in response to a particular foreign substance that is generally harmless, such as cat dander. In summary, immunoglobulins are a group of protein molecules that work as antibodies. There are five different types: IgA, IgM, IgG, IgD, and IgE. IgE is the allergy antibody. In the example of the family cat, the father and the youngest daughter developed large quantities of IgE antibodies directed against the cat allergen, the cat dander. The father and daughter are now sensitized, or prone to develop allergic reactions, on subsequent and repeated exposure to the cat allergen. Typically, there is a period of "sensitization" ranging from months to years before an allergic reaction occurs. While an allergic reaction may occasionally seem to occur on the first exposure to the allergen, there must have been earlier contact for the immune system to react this way. IgE is an antibody that all of us have in small quantities. People who are allergic, however, produce IgE in large amounts. Normally, this antibody is important in protecting us from parasites, but not from cat dander or other allergens. During the sensitization period, IgE against cat dander is overproduced and coats certain cells that contain potentially explosive chemicals. These cells are then capable of causing an allergic reaction on subsequent exposure to dander: the reaction of the cat dander with the dander-specific IgE irritates the cells and leads to the release of various chemicals, including histamine. These chemicals, in turn, cause inflammation and the typical allergy symptoms. This is how the immune system becomes exaggerated and primed to cause an allergic reaction when stimulated by an allergen. When exposed to cat dander, the mother and the two other children generate other classes of antibodies, none of which cause an allergic reaction. In these family members, who are not allergic, the dander particles are eliminated uneventfully by the immune system and the cat has no effect on them.
A naval ship is a military ship (or sometimes boat, depending on classification) used by a navy. Naval ships are differentiated from civilian ships by construction and purpose. Generally, naval ships are damage resilient and armed with weapon systems, though armament on troop transports is light or non-existent. Naval ship classification is a field that has changed over time and is not an area of wide international agreement, so this article uses the system currently employed by the United States Navy. - Aircraft carrier – ships that serve as mobile seaborne airfields, designed primarily for the purpose of conducting combat operations by carrier-based aircraft which engage in attacks against airborne, surface, sub-surface and shore targets. - Surface combatant – large, heavily armed surface ships which are designed primarily to engage enemy forces on the high seas, including various types of battleship, battlecruiser, cruiser, destroyer, and frigate. - Submarine – self-propelled submersible types, regardless of whether they are employed as combatant, auxiliary, or research and development vehicles, which have at least a residual combat capability. - Patrol combatant – combatants whose mission may extend beyond coastal duties and whose characteristics include adequate endurance and sea keeping, providing a capability for operations exceeding 48 hours on the high seas without support. - Amphibious warfare – ships having organic capability for amphibious assault and which have characteristics enabling long-duration operations on the high seas. - Combat logistics – ships that have the capability to provide underway replenishment to fleet units. - Mine warfare – ships whose primary function is mine warfare on the high seas. - Coastal defense – ships whose primary function is coastal patrol and interdiction. - Sealift – ships that have the capability to provide direct material support to other deployed units operating far from home base. - Support – ships, such as oilers, designed to operate in the open ocean in a variety of sea states to provide general support to either combatant forces or shore-based establishments. (Includes smaller auxiliaries which, by the nature of their duties, leave inshore waters.) - Service type craft – navy-subordinated craft (including non-self-propelled) designed to provide general support to either combatant forces or shore-based establishments. In rough order of tonnage (largest to smallest), modern surface naval ships are commonly divided into the following classes. The larger ships in the list can also be classed as capital ships: - Aircraft carrier - Helicopter carrier - Heavy cruiser - Light cruiser - Patrol boat - Fast attack craft Some classes above may now be considered obsolete as no ships matching the class are in current service. There are also many blurred or gray areas between the classes, depending on their intended use, history, and interpretation of the class by different navies. - List of naval ship classes in service - List of auxiliary ship classes in service - List of submarine classes in service
Bacteria From Human Feces is Behind Deadly Disease in Coral Elkhorn coral infected with white pox. What’s the News: Over the past decade, diseases, pollution, and warming waters have put coral populations across the globe in a dramatic decline. In an extreme case, the population of elkhorn coral, considered one of the most important reef-building corals in the Caribbean, has decreased by 90–95 percent since 1980, partly due to a disease called white pox. Now, scientists have traced this lethal disease back to humans. Human feces, which seep into the Florida Keys and the Caribbean from leaky septic tanks, transmit a white pox-causing bacterium to elkhorn coral, researchers report in the journal PLoS ONE. “It is the first time ever that a human disease has been shown to kill an invertebrate,” ecologist James Porter told Livescience. “This is unusual because we humans usually get disease from wildlife, and this is the other way around.” What’s the Context: - Serratia marcescens is a bacterium found in the intestines of humans and many other animals. Resistant to many types of antibiotics, S. marcescens is known to cause respiratory problems and urinary tract infections in people. - The bacteria can also infect coral—in 2002, Porter and his colleagues learned that S. marcescens causes white pox disease in elkhorn coral. The contagious disease kills coral tissue, exposing patches of its white skeleton beneath. The researchers originally suspected that the S. marcescens infecting elkhorn coral came from human feces, but they lacked the scientific evidence needed to prove it. - White pox is but one of over 18 diseases threatening coral. Scientists have identified only a handful of the diseases’ sources (via Livescience). How the Heck: - The researchers spent years gathering samples of S. marcescens from elkhorn coral, from wastewater collected from a treatment plant in Key West, and from other animals, such as the coral-eating snail Coralliophila abbreviate. - The team then exposed healthy elkhorn coral fragments to the various bacteria samples they collected. The fragments exposed to S. marcescens from wastewater or from other, already-infected elkhorn corals began showing signs of white pox within as little as four days. - The research also suggests that the corallivorous snail and another type of coral, Siderastrea siderea, may play a role in spreading white pox. One of the coral fragments infected with S. marcescens from the snail developed signs of the disease within 13 days, while S. marcescens isolated from the other coral species caused white pox in 20 days. The Future Holds: In 2001, Key West installed an advanced wastewater treatment system capable of reducing the bacterium to undetectable levels, and has not had a new case of white pox since, the researchers told ScienceNOW. They hope that the new study will encourage communities throughout the Caribbean to upgrade their wastewater management facilities, too. Image courtesy of James W. Porter/University of Georgia
Community Systems Lesson Plan Community Systems Workshop Facilitators: Team Leaders Length: ~30 minutes Vision: Students will understand the community forces and influences at play in relation to violence so that they will be able to harness the community to fight violence and other negative issues. Objectives: 1. Given the opportunity to revisit and discuss the definition of Community Systems and/or how social issues become cyclical, students will BRIEFLY identify (individually or in teams) 2-3 ways that elements of Community Systems might play a role in perpetuating the cycle of violence. 2. Students will be able to discuss how the cycle could be stopped through use of Community Systems. Materials: Workshop descriptions, markers, poster paper (one with description of “community systems” on it), 5 milk crates Agenda: Warm-up (10 min; obj 1) Introduction (2 min; obj 1) The Facts (10 min; obj 1) Analysis (5 min; obj 1) Debrief (5 min; objs 1, 2) Description: Warm-Up, 5 min Crate Balance: Each team will get a crate, and the object is to get everyone to balance on the crate, with no one’s feet on the ground. Introduction, 2 min “Can anyone guess what the point of that fun game was? Well, we’ve talked before about the importance of the community to ourselves and the experiences we have. Each team’s members represented different forces in the community. All of the forces work together to produce a collective result. In this case, the result was positive; however, this is not always how it goes.” The Facts (obj 1), 10 min Ask the students, “What factors in a community encourage violence?” (These include poverty, drug/alcohol abuse, history/culture of violence, racism…) Next, ask the students to define “community systems.” Call for a few guesses. Community System: Organizations or people with enough authority to impact the people living in the community. Examples of community systems: education, church, the mayor, local/national government and laws, local business, health systems, history, personal support; addictions, child care, community leaders; block captains. (Similar to institutions.) Make sure the students understand the definition of a community system, and then write a few examples down on the poster paper. Analysis (obj 1), 10 min “Now that you all have identified various community systems, let’s consider how they are involved with violence, and how these community systems have a role in perpetuating or confronting violence. First of all, can someone define perpetuating? To prolong the existence of something. To make a thing continue in the future; keep it going; and sometimes make it worse. If they perpetuate, how could they stop perpetuating violence? If neither, then what could they do to start confronting violence?” To make sure that the students understand community systems and their roles: “We understand that the concept of community systems can be confusing, so we have a few examples for you to get a better idea.” “For instance, does anyone know what hazing is? It’s a ritual at some colleges, and before students can be accepted into a fraternity or sorority, they are forced to undergo physical pain or mental anguish by current members. Do you understand what the community system is in this case? It’s the college. Some colleges don’t have harsh policies regarding hazing, and therefore perpetuate violence, but other colleges do not allow fraternities that practice hazing.” “The next example is more complex. A well known and unfortunately common kind of violence is domestic abuse. 
Does everyone understand what domestic abuse is? Well, there are many reasons for abuse. For instance, a person might feel that he or she is inadequate based on society’s standards; maybe they feel like they don’t earn enough money. What’s the community system there? They may have also grown up in a home where abuse was practiced; the home is the community system in that case. There are also community systems that may perpetuate the violence; for instance, a police department may not allow a person to create a restraining order, or a hospital employee may not report injuries caused by abuse that they see. Do you see how all of these systems tie together, like in the crate warm-up?” Debrief (obj 1,2) TLs, 5 min Once the students understand this, each will identify which community system they believe does the most to confront violence, and why. Also, each will identify the community system that perpetuates violence the most and what needs to happen for that to stop, and why. Once they come up with their answers, have a few share what they said and why.
ASU/PV New Media Class A Beginner's Guide to Integrating Technology Jackson, Lorrie (2005). “A Beginner's Guide to Integrating Technology.” Education World. http://www.educationworld.com/a_tech/tech/tech130.shtml The article, “A Beginner's Guide to Integrating Technology,” by Lorrie Jackson is aimed mainly at teachers who have little knowledge of how to integrate technology in the classroom. Jackson states that using technology in the classroom does not mean that the teacher replaces what she teaches; rather, it is used as a tool to enhance the teaching. “Integrating technology simply means using computers within the existing curriculum.” Jackson gives several tips on determining how to integrate technology into the curriculum. First, the teacher takes a quick assessment of where she is in terms of technology, followed by an assessment of her resources. It is important that the teacher knows the students’ skills and attitudes toward technology as well as her own. She also needs to know whether she has access to computers and resources such as software and training. Next, the teacher needs to set goals and plan accordingly. In order to accomplish those goals, she should find peers who already integrate technology and learn from them; finally, she needs to stay informed by visiting professional organizations’ websites regularly. The next step is to get trained. Most schools or districts offer technology training for free. According to Jackson, once the teacher is motivated enough and feels comfortable with technology, she is ready for integration. Jackson’s suggestion is to take one step at a time. One way is to begin using technology to manage the class: grading, parent-teacher communication via e-mail and surfing the Internet for lesson plans. As the teacher feels more comfortable, she introduces one content area at a time and hand-picks relevant, age-appropriate websites such as 42eXplore, Education World site reviews, Ed Index, and Motivate While You Integrate Technology: Online Assessment. Lastly, the teacher must learn to determine when technology helps and when technology might hinder learning, and plan accordingly. I enjoyed the simplicity of the article, “A Beginner's Guide to Integrating Technology.” It provides great ideas on how to surf the Internet to find age-appropriate activities for the classroom. The article is geared toward teachers who are new to computers and feel uncomfortable integrating technology in the classroom. Yet seasoned teachers can also benefit by visiting some of the websites that the author provides. I visited some of the websites that Lorrie Jackson recommends and they are filled with activities and lesson plans for elementary, secondary and special education. These are two examples of websites that I visited and liked: http://school.discovery.com/quizcenter/quizcenter.html and http://www.pitt.edu/~poole/tableRef.htm. There was a lesson on language arts that caught my fancy and I will incorporate it in my lesson plans. I am determined to learn how to make better use of technology in my classroom. I am aware of the great benefits that my students will gain from it. But more important, I can make teaching more fun and meaningful!
What is 'Appreciation' Appreciation is an increase in the value of an asset over time. The increase can occur for a number of reasons, including increased demand or weakening supply, or as a result of changes in inflation or interest rates. It is the opposite of depreciation, which is a decrease in value over time. BREAKING DOWN 'Appreciation' The term is also used in accounting when referring to an upward adjustment of the value of an asset held on a company's accounting books. The most common adjustment to the value of an asset in accounting is a downward one, known as depreciation, which is typically recorded as the asset loses economic value through use, such as a piece of machinery being used over its useful life. While appreciation of assets in accounting is less frequent, assets such as trademarks may see an upward value revision due to increased brand recognition. Types of Appreciation The term can be used to refer to an increase in any type of asset, such as a stock, bond, currency or real estate. For example, the term capital appreciation refers to an increase in the value of financial assets such as stocks, which can occur for reasons such as improved financial performance of the company. Just because the value of an asset appreciates does not necessarily mean its owner realizes the increase. If the owner revalues the asset at its higher price on his financial statements, this represents a realization of the increase. Similarly, capital gain is the term used for the profit achieved by selling an asset that has appreciated in value. Another type of appreciation is currency appreciation. The value of a country's currency can appreciate or depreciate over time in relation to other currencies. For example, when the euro was established in 1999, it was worth approximately $1.17 in U.S. dollars. Over time, the euro has risen and fallen versus the dollar based on global economic conditions. When the U.S. economy began to fall apart in 2008, the euro appreciated against the dollar, to $1.60. Beginning in 2009, however, the U.S. economy started to recover, while economic malaise set in across Europe. Consequently, the dollar appreciated versus the euro, with the euro depreciating in relation to the dollar. As of July 2016, the euro exchanged for $1.10 in U.S. dollars. Appreciation vs. Depreciation Certain assets tend to appreciate, while others tend to depreciate over time. As a general rule, assets that have a finite useful life depreciate rather than appreciate. Real estate, stocks and precious metals represent assets purchased with the expectation that they will be worth more in the future than at the time of purchase. By contrast, automobiles, computers and physical equipment gradually decline in value as they progress through their useful lives.
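The euro/dollar swings described above are simply percentage changes, which can be made concrete with a few lines of arithmetic. The snippet below is an illustrative sketch only (not part of the original article); the exchange rates used are the ones quoted above.

```python
# Illustrative sketch: percentage appreciation/depreciation between two observed
# prices. A positive result means the asset appreciated; negative means it depreciated.
def percent_change(old_value: float, new_value: float) -> float:
    return (new_value - old_value) / old_value * 100

# EUR/USD at launch (1999), at the 2008 peak, and in July 2016, as quoted above
print(f"1999 -> 2008: {percent_change(1.17, 1.60):+.1f}%")  # euro appreciated vs. the dollar
print(f"2008 -> 2016: {percent_change(1.60, 1.10):+.1f}%")  # euro depreciated vs. the dollar
```

Run as-is, this prints roughly +36.8% for the first period and −31.3% for the second, which is the arithmetic behind the statement that the euro first appreciated and then depreciated against the dollar.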
Strong reading skills are essential for success in school. The approaches I use give students the tools to develop reading proficiency. These programs develop the two essential components of reading: decoding skills and comprehension abilities. Reading is smooth and fluid when students master decoding skills, in other words, when students have learned which sound is represented by a given letter. The key to helping students develop reading decoding skill is to use methods that fully engage the brain. Methods that completely engage the brain also totally engage the senses. My approach fully activates a student’s learning abilities using multi-sensory techniques based in Orton-Gillingham methods. I combine systematic instruction in sound-symbol relationships with drills that develop a student’s attention and automatic recognition of sounds and letters so that a student will develop the foundational skills essential for reading. This method improves phonemic awareness skills (such as sound blending and rhyming) which are essential to becoming a strong reader. My strong comprehension programs teach students at all grade levels how to understand and learn more from what they read. I help students develop the skills to focus on the thinking processes needed to understand text. I teach comprehension strategies that promote understanding before, during and after reading to promote significant increases in comprehension skill. Some of the important components of comprehension improved include: - Using pre-reading techniques to warm up for reading - Connecting prior knowledge to the current material - Acquiring new vocabulary words - Identifying the structure of reading material (for example, cause and effect, description, compare and contrast, etc.) - Adjusting the reading rate for the difficulty of material and purpose of reading - Knowing whether the material is comprehended and how to adjust strategies when material is not being adequately understood - Creating visual images of what is being read - Focusing attention by asking questions while reading – in other words, being an “active” reader - Using techniques such as graphic organizers, outlines and color-coding to increase understanding and anchor information in memory To download an article that discusses the advantages of working with an educational therapist over a tutor to develop your child’s reading proficiency, click on the link below.
TABLE OF CONTENTS
1.1 Statement of general problem
1.2 Objective of the study
1.3 Statement of hypothesis
1.4 Significance of the study
1.5 Limitation of the study
1.6 An overview of the organization
2.1 Definition of cost accounting
2.2 Standard cost introduction
2.3 Variance analysis and classification
2.4 Budget and budgetary control
2.5 Marginal cost
2.6 Break-even point analysis
3.2 Population and sample size
3.3 Sampling technique
3.4 Personal observation
3.5 Justification of choice
Data analysis and presentation
4.1 Cost accounting department
4.2 Financial department
4.3 Production department
4.4 Outcome of hypothesis
Summary, findings, conclusion and recommendation
Cost accounting is concerned with managerial planning and control activities, furnishing management with the accounting tools needed to plan, control and evaluate operations. The term cost accounting, as published by the Institute of Cost and Management Accountants, is defined as "the application of costing and cost accounting principles, methods and techniques to the science, art and practice of cost control and the ascertainment of profit". It includes the presentation of information derived for the purpose of management decision making. The basic difference between a merchandising business and a manufacturer is that the merchant purchases merchandise in a ready-to-sell condition, whereas the manufacturer produces the goods it sells. In a merchandising business the cost of goods available for sale is based upon the cost of purchases; in a manufacturing business, on the other hand, it is based on the cost of manufacturing the finished goods. As a result, every manufacturing business needs to be cost conscious in the course of manufacturing goods. The cost accounting system depends upon the purpose for which management requires the information, such as control, decision making and the determination of prices.
1.1 STATEMENT OF GENERAL PROBLEMS
The production of items is not the problem; knowing the cost involved in such production is the issue at stake. A business has to recover the investment committed to the production of an item by accounting for its cost and passing it on to either the middlemen or the consumers who pay for the item. This has created room for the existence of a production cycle in most of our industries today. To account for the production of an item, materials, labour and overhead must each have their own share of cost, which together make up the complete cost of production. To this end, manufacturers find it a little bit difficult to adequately account for the production costs they incur, and when these costs are not recovered from the items produced, the refinancing power of the organization concerned is reduced in one way or the other. Therefore, the producer has to be conscious of the costs incurred; if the cost at a certain period is not recovered, the financing of the project plan becomes adversely affected or even impossible.
1.2 OBJECTIVE OF THE STUDY
The aims and objectives of this study are to evaluate the cost accounting system, which brings out the real situation and existence of an organization. It is aimed at determining the level of adequacy of cost accounting in the organization's production activities. Through evaluating the system, the efficiency or inefficiency of such a system will come to light, which will create an opportunity for discussing the ways by which an organization can account for the cost of production adequately, without excluding certain costs incurred during production. The study is aimed at providing solutions to the problems of process cost accounting in industries, which in one way or another affect performance within the industry. It is also aimed at analysing the relevance of the cost accounting system in an industry and providing solutions where the system is weak.
1.3 STATEMENT OF HYPOTHESIS
1. H1 (alternative): the effective use of cost accounting as a means of control in manufacturing industries can minimize cost. H0 (null): the effective use of cost accounting as a means of control in manufacturing industries cannot minimize cost.
2. H1 (alternative): an efficient costing system helps an undertaking to ascertain its cost of operations.
1.4 SIGNIFICANCE OF THE STUDY
The significance of the study can be viewed from different perspectives. The study serves an important purpose towards the writer's programme, being part of the Kaduna Polytechnic requirements that enable the completion of the National Diploma programme.
1.5 LIMITATION OF STUDY
Limitations of the study are the constraints that restrict the writer from elaborating more on the project. In other words, they are restrictions encountered by the writer in the course of writing the project which are beyond his control. The study was made in the presence of some restrictions; it would have been more than what it is, but owing to these constraints, this is what the writer was able to achieve. The major obstacle was the co-operation of respondents: quite a number of responses were made, but not to the extent of warranting a thorough knowledge of the organization's activities, and as a result the study was unable to fully meet the writer's objectives. The writer was also faced with financial constraints and consequently could not go round getting more information for the project; this to some extent hampered the study. Furthermore, the availability of time for the research work was a constraint, due to the fact that most of the writer's time was spent attending lectures and writing semester tests and assignments.
1.6 AN OVERVIEW OF THE ORGANIZATION
Northern Cable Processing and Manufacturing Company Limited (NOCACO) was incorporated in June with the aim of supplying vehicle assembly plants with locally made cables and wires. The company's equity share capital was distributed 40% to Nigerian shareholders and 60% to the foreign partners from Germany. This distribution was due to the technical nature of the activities to be carried out, in order to satisfy Schedule III of the Enterprises Promotion Decree. The firm started with a strength of 96 staff at the commencement of production; it now employs over 550 staff and is recognized as one of the most important cable manufacturers in northern Nigeria, producing in accordance with national and international standards. NOCACO currently produces about 400 different types of cable. These include house wiring cables; insulated aluminum service cables; aluminum overhead lines, produced bare and steel-reinforced as well; copper underground cables, armoured and un-armoured; and flexible cable fastenings for the automobile industry and for assemblers of air conditioners, fridges and freezers. The firm's principal factor of performance is the quality of its products. 
As a result, the company places a high premium on the quality of its products; it designs and produces high quality products in accordance with national and international standards, such as the Nigerian Industrial Standard (NIS), the International Electrotechnical Commission (IEC), British Standards and German industrial standards, as well as to specific customer requirements. The firm carries out rigorous quality control tests on its products at every stage of the production process, rather than relying on a single inspection, so as to maintain a high standard throughout the length of each cable conductor. The high quality maintained in the organization is demonstrated by the awards won by the company at one period or another. The awards are: the Nigerian Industrial Standard silver award; winner for PVC insulated (non-armoured) electric cables for power and lighting; and the ordinary award for aluminum conductors. Currently, NOCACO has also won recognition as Nigeria's best cable and wire producer. The company is located in the heart of the northern states of Nigeria, in Kaduna. Its head office and factory are located along Marchibi Road, Kakuri Industrial Estate, Kaduna.
In the rhetorical analysis assignment, you will be expected to demonstrate an understanding that every text is created for a unique situation and audience. You will need to act on this by analyzing the decisions made by the Centers for Disease Control and Prevention (CDC) on their website about ADHD. This includes the persuasive appeals the CDC uses. Aristotle believed that speakers and writers used three kinds of persuasive appeals. Logos is an appeal to the audience’s powers of reason or logic. Pathos is an appeal to emotions or senses. Ethos is the personal appeal, charisma, or credibility of the speaker or writer. These three appeals are known as the rhetorical triangle because all three sides work together to make a text effective or not. In your own words, explain the concepts of pathos, logos, and ethos and why they are important appeals to recognize in a text. Next, use the CDC’s page about ADHD to analyze the ethos, logos, and pathos in the document. Consider the following questions in your response: - What do you think about the logic presented in the document? - What do you think about its emotional appeal - What about its appeal to authority or credibility? - Does the CDC effectively present each of the rhetorical appeals? Why or why not? Link to Centers of Disease Control’s website: http://www.cdc.gov/ncbddd/adhd/facts.html Education homework help
Over the last eight months at Ucross High Plains Stewardship Initiative, our team developed a set of tools to assist in monitoring areas surrounding beaver dam installations in eastern Montana using remote sensing. One of these tools is focused on calculating the normalized difference vegetation index (NDVI) across an area of interest and how it is changing over time. The purpose of this blog post is to help explain what exactly NDVI is and how it has developed over time. When satellites capture spectral data about the surface of the Earth, they are measuring light that has been reflected by the atmosphere, clouds, water, vegetation, etc. However, the light that is reflected is not all equal. It varies in wavelength (the distance between the crests of successive waves) because the sun emits light at different wavelengths due to differences in temperature and density. Each satellite has a variety of sensors that allow for the capture of this reflected light at different wavelengths. It is this difference in wavelengths that gives satellites their different band information. These bands cover the visible light spectrum that we as people can see, but they also measure wavelengths beyond that, including into the infrared and thermal realms. For example, the bands of the Sentinel-2 satellite are ultra blue, blue, green, red, near infrared 1-4, and shortwave infrared 1-4 (see figure 1). Every molecule in the universe has a distinct spectral profile, the proportion of light that is reflected at each wavelength. Chlorophyll, the photosynthetic machinery of most vegetation and the dominant component of green vegetation, has a spectral profile with a distinct peak centered on the red-near infrared wavelengths. The Normalized Difference Vegetation Index is simply a way to exploit the distinct spectral profile of chlorophyll to measure green vegetation coverage (see figure 2). The formula considers the red and near-infrared bands of satellite sensors (see figure 3) and allows for easy comparison of green vegetation coverage across landscapes; normalizing the difference between the two bands by their sum is what allows NDVI values to be compared across all landscapes. Now we know how NDVI is calculated and what it means, but how did we arrive at this measure and how useful is it really? In July of 1972, NASA launched a satellite called the Earth Resources Technology Satellite, eventually renamed Landsat 1, which was the beginning of the famous Landsat remote sensing project. As part of the project, NASA provided funding for a team of researchers in the Great Plains of North America to investigate how this new technology could be used to advance our understanding of vegetation cover and health. Leveraging the distinct spectral profile of chlorophyll, this group of researchers realized that the red and near-infrared wavelengths would allow them to capture the greenness of vegetation, and NDVI was born. Following the research produced by this group, near-infrared and red became standard sensors on future satellite remote sensing products and NDVI became widely used. In the 1990s, a team of researchers published work that found strong correlations between aboveground net primary production and NDVI [3] (figure 4), as well as between greenness and NDVI [4]. With these publications and the many others surrounding NDVI, it has become well accepted that NDVI can provide valuable information at the local to global scale on vegetation changes. 
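For readers who want to see the formula in practice, here is a minimal sketch of the standard NDVI calculation described above (this is not the Ucross tool itself). It assumes the red and near-infrared bands have already been read into NumPy arrays; for Sentinel-2, these would typically be bands B4 and B8.

```python
# Minimal sketch of the standard NDVI formula: NDVI = (NIR - Red) / (NIR + Red).
# Assumes the bands are already loaded as arrays of reflectance values.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Return NDVI for each pixel, ranging from -1 to +1."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    out = np.zeros_like(denom)
    # Only divide where the denominator is non-zero (e.g., avoids no-data pixels)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy example: a bare-soil-like pixel next to a densely vegetated pixel
nir = np.array([[0.30, 0.60]])
red = np.array([[0.25, 0.08]])
print(ndvi(nir, red))  # ~[[0.09, 0.76]] — higher values indicate greener cover
```

Casting to float and guarding the zero-sum case keeps the index well defined over water or masked pixels, which matters when the calculation is run across a whole scene.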
When we developed our tool for monitoring areas around BDAs (beaver dam installations) in eastern Montana, we had the luxury of decades of work that came before us. NDVI is well studied, well understood, and well accepted by the scientific community. We were able to use this knowledge to leverage the incredible library of satellite data in making a tool that could be used to understand a niche topic at a local scale. In my opinion, that is the best part of remote sensing. Data and research from all scales combine to create incredible insights at all levels and on all topics. 1. Maathuis, B. & Retsios, B. ITC Sentinel EO4SD Toolbox V2: Installation, Configuration and User Guide of the Sentinel EO4SD Toolbox Version 2. (2020). 2. Moroni, M., Porti, M. & Piro, P. Design of a Remote-Controlled Platform for Green Roof Plants Monitoring via Hyperspectral Sensors. Water 11, 1368 (2019). 3. Paruelo, J. M., Epstein, H. E., Lauenroth, W. K. & Burke, I. C. ANPP Estimates from NDVI for the Central Grassland Region of the United States. Ecology 78, 953–958 (1997). 4. Paruelo, J. M. & Tomasel, F. Prediction of functional characteristics of ecosystems: a comparison of artificial neural networks and regression models. Ecological Modelling 98, 173–186 (1997).
Fig 1. On the relief panel of this temple of the Gupta period, Vishnu is represented during his cosmic sleep on the coils of the seven-headed Naga. His consort, Lakshmi, is at his feet. Fig 2. The greatest temple foundation by the Pallava dynasty of Kanchi, southern India, is the complex of Mamallapuram or Mahabalipuram, popularly known as the Seven Pagodas, south of Madras. The complex, built in 625–674, comprises a number of caves, a group of beautiful monolithic structures (the so-called rathas) and this splendid Shore Temple, dedicated to Shiva. Fig 3. The numerous gold coins of the Guptas (the Bayana hoard alone contains 1,021 specimens) are an important source for the history of the period. Their distribution gives an idea of the areas controlled by various Gupta kings, and the frequency of the minting reflects economic activity. The representations show how the Gupta kings wished the world to see them. This king appears as a fearless hunter slaughtering a lion with a bow and arrow. Fig 4. An older Shiva temple at Orissa shows the typical shape (shikhara) of a southern Indian temple. It is built on a platform beside a pool that is used for ritual ablutions. Fig 5. Although the Gupta Empire (A) represented the peak of classical Indian culture, its influence hardly reached the south. After its fall, the north was rarely united, while the Pallava and Chola dynasties brought continuity to the south. Despite the lack of cohesion, the north easily resisted the Muslims and lost only Sind until the growth of Mahmud of Ghazni's empire (B) as a powerful and aggressive neighbor. Mahmud was not impelled by religious motives, but Muslim raids continued until the establishment of the Muslim Sultanate of Delhi in 1206, marking the start of permanent Muslim influence in India. Fig 6. Puri is one of the great religious centers, attracting countless pilgrims from all over India during the annual cart festival of Jagannath (Juggernaut). The god used to be a tribal deity. Fig 7. Although they were meant for monks, there is little trace of Puritanism in the Buddhist caves of the western Deccan. This relief, probably of the sixth century, depicts a Tara, a kind of female saviour, performing a devotional dance. After centuries of political fragmentation and foreign domination, northern India was once more united under the Gupta dynasty (c. AD 320–550), India's classical age. In southern India another great state gradually took shape under the Pallavas. |Frescoes depicting beautiful maidens are painted on the side of a huge rock at Sigiriya, Sri Lanka, where a fortified royal residence was built in the fifth century.| The Gupta dynasty and its empire The Gupta kings, especially Samudra Gupta (reigned 330–375), Chandra Gupta II (reigned 375–415), and Kumara Gupta (reigned 415–455), founded and maintained, both by conquest and diplomacy, a great empire controlling nearly all of northern India. Good communications, security, and relative prosperity created an atmosphere in which Indian culture attained unequalled heights. Thus the works of the poet Kalidasa (flourished fifth century AD) achieved such a degree of perfection that they were often imitated but never surpassed. In art and architecture, too, Indian genius revealed itself in its most accomplished form of refinement and symbolism, but without the overemphasis of detail that typifies much Indian art after about the seventh century. 
The material prosperity of India in this period is emphasized in the accounts of a Chinese Buddhist pilgrim, Fa-hsien (flourished 399–414), who visited India in the fifth century, and by the discovery of many gold coins of the Gupta Empire (5). At the beginning of the sixth century the Huns invaded India from the northwest and penetrated as far as central India. This invasion has often been described as the main cause of the downfall of the Guptas, but it can be argued that the Huns would never have succeeded if the Gupta Empire had not already declined owing to internal factors.
The expulsion of the Huns from India
Although the Huns were expelled after 30 years, northern India became divided between rival powers in Surashtra, Uttar Pradesh, and Bengal. There were important changes too in southern India, in the present states of Madras and Kerala. A prosperous and cultured society, as reflected in classical Tamil literature, flourished in this area at least from the beginning of the Christian era. In the fourth century AD the Pallavas made Kanchi (Conjeevaram) the center of a large kingdom. Although much smaller than the Gupta Empire in the north, it was still of great importance. The Pallavas established a successful form of power-sharing between central and local government, which promoted political stability. The east coast of southern India remained under Pallava control until about 880, and from then until 1200 under that of the Cholas.
One of the most striking forms of the god Shiva is that of the four-armed Nataraja, dancing on top of a demon and surrounded by a halo of the flames that destroy the world at the end of an aeon. This is one of the finest bronzes of the Chola period (eleventh century).
The Pallavas patronized the Brahmins who, in their turn, provided excellent educational facilities. In art and architecture a particular Dravidian style (named after the language spoken in central and southern India), culminating in the monolithic sanctuaries and rock reliefs of Mamallapuram (the "Seven Pagodas"), was developed (Fig 2). The Pallavas contributed more than any other Indians to the expansion of Indian civilization into Southeast Asia.
The influence of Harsha of Kanauj
Most of northern India was temporarily united by Harsha of Kanauj (606–647), whose career, admirably described by a Sanskrit writer (Bana) and a Chinese pilgrim (Hsuan Tsang, in 630–643), reflects high standards of government and reasonable prosperity. After the time of Harsha, northern India showed progressive political fragmentation, with larger states tending to split into smaller units which at first paid homage to the central authority but gradually became independent. Harsha's capital, Kanauj, was made the capital of the Pratihara dynasty in 750. The latter ruled as the paramount power over the present states of Uttar Pradesh, Punjab, and Rajasthan, but before the end of the ninth century their effective authority was limited to parts of the Punjab and Uttar Pradesh, while different Rajput dynasties, originally of tribal descent, ruled in Rajasthan. Bihar and Bengal were under the Buddhist Pala dynasty (c. 750–1150), but from the tenth century they shared power with minor dynasties. During these divisions the Muslim ruler Mahmud of Ghazni (Afghanistan) (971–1030) invaded and plundered northern India many times between 1000 and 1026 (7). These were destructive raids, carried out mainly for booty.
Although many Indian armies fought bravely, their resistance proved ineffective through internal rivalries and military miscalculations, such as over-reliance on elephants. Further political, but not cultural, decline led to new Muslim invasions and by the end of the twelfth century most of northern India had come under the control of the Muslim sultanate of Delhi. Sanskrit literature of the post-Harsha period offers many excellent works, although few of the quality of the earlier periods. The most important historical text of ancient India, the Kashmir Chronicle, belongs to the twelfth century. In art and architecture some of the greatest achievements, such as the temples of Orissa and Khajuraho, belong to this late period. There was no decline in southern India where the Cholas established one of the greatest Indian empires. Their kings invaded Sri Lanka and Bengal and even undertook a great maritime expedition to Southeast Asia. While northern India suffered political fragmentation and Muslim invasions, the Chola kingdom established conditions in which Hinduism flourished.
Vishnu, one of the principal gods of Hinduism and the supreme deity for the Vaishnavas, some of whom find a close analogy between religious experience and sexual love, has revealed himself as a saviour of mankind in many different forms, in particular in ten descents (avataras) as a man or as an animal. His most celebrated avatara was as Krishna, the divine shepherd and king-philosopher. Of the animal avataras, the Boar (varaha), shown here with elaborate ornamentation, is most frequently represented. In Hindu mythology the god is believed to have descended in this form to rescue the earth, which had sunk in the ocean.
In simple terms, encoding is the process of compressing and converting a raw video file into a digital file that is compatible for playback on different devices and platforms. An encoded video file uses less space than the raw format, and when it is played back, what you see is an approximation of the original content. Video encoding was developed to support continuing technological advancement in the video industry. It has made it possible to stream video over the internet, both in real time and on demand. In this article, we will explore the basics of video encoding, including the various technologies and codecs used in the process. We will also examine how video encoding works.
Video encoding – What does it mean?
Video encoding is the method of converting raw video files into digital files, ensuring that they are not saved as individual images but as fluid video. It is also the process of compressing the video file to reduce its size for storage or transmission. A digital video is encoded, using video encoder software, to meet the formats and specifications required for recording and playback. Video encoding typically involves the use of software or hardware codecs that compress the video data by removing redundant information and reducing the overall file size. The process of encoding may also involve changing the resolution, frame rate, bit rate, and color space of the video to optimize it for a particular use case or device. Video encoding plays a vital role in the delivery of high-quality video content to users worldwide. It ensures efficient video content transmission over networks with limited bandwidth, enabling video content to be streamed in real time, even on mobile devices with slower connections.
What is video decoding?
Video decoding is the method of converting compressed video data back into an uncompressed format that can be played on a device or displayed on a screen. This process is the opposite of video encoding, which compresses the original video data to make it smaller for storage or transmission. It is a vital step in delivering high-quality video content to viewers around the world.
What is video compression?
Video compression is the use of encoding to reduce the size of a digital video file by removing redundant or non-essential information without compromising the quality of the video. When two frames are nearly identical, you can eliminate the data for one frame and substitute it with a reference to the preceding frame. By doing this, you can decrease the size of your video file by approximately 50 percent in this basic scenario.
How does video encoding work?
Video encoding works by compressing a digital video file into a smaller size using software or hardware codecs. Here are the typical steps involved in the video encoding process:
Preprocessing: In this stage, the video is analyzed to determine its characteristics, such as resolution, frame rate, and color space. The video is also divided into small segments called macroblocks for further processing.
Transform: This stage involves converting the video from its spatial domain to its frequency domain, using techniques like the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT). This step helps to reduce redundancy in the video data.
Quantization: In this stage, the transformed video data is reduced to a lower precision level, resulting in a loss of information.
The goal is to remove information that is not perceptible to the human eye, further reducing the file size.
Entropy Encoding: In this stage, the video data is compressed further using entropy encoding techniques like Huffman coding or arithmetic coding. This step removes any remaining statistical redundancy in the video data.
Bitstream Formatting: In the final stage, the compressed video data is organized into a bitstream, which includes header information and compressed video frames. The bitstream is then ready for storage or transmission over a network.
Take a look at Muvi One’s encoding profiles customization feature for a glimpse of these settings in practice.
What are Encoding Formats?
Encoding formats are specific file formats used for storing and transmitting compressed digital video data. Here are some of the most common encoding formats:
Different types of encoding formats
- MP4 (MPEG-4 Part 14): This is a widely used format for storing video and audio data. It is compatible with most devices and platforms, and supports high-quality video compression techniques like H.264 and H.265.
- FLV (Flash Video): This format was originally designed for use with Adobe Flash Player, but it is now used for online video streaming and delivery over the internet.
- MOV (QuickTime File Format): This format was created by Apple and is used primarily for storing video and audio data on Mac computers. It supports a wide range of video and audio codecs.
- MKV (Matroska Video): This is an open-source format that supports high-quality video compression techniques like H.264 and H.265. It is commonly used for storing high-definition video and audio data.
- LXF (Sony Professional Disc): This format is used by Sony for storing high-definition video data on its professional disc media.
- MXF (Material Exchange Format): This format is used in the broadcast industry for exchanging digital video data between different systems and devices.
- AVI (Audio Video Interleave): This is an older format developed by Microsoft that is still used for storing and playing video and audio data. It supports a wide range of codecs, but is less efficient than more modern formats.
- WebM: This is an open-source format developed by Google that is optimized for online video streaming and delivery. It uses the VP8 and VP9 codecs for video compression.
- QuickTime: This is a format developed by Apple that supports a wide range of codecs and is used primarily for storing video and audio data on Mac computers.
To know more, read our blog – Top Video Encoding Formats 2023
What are video codecs?
Codecs are the tools used to compress video files and play them back; in other words, a codec encodes and decodes a digital video file. Video codecs compress raw video and audio data, converting files between uncompressed and compressed forms and making them smaller. The word “codec” is a combination of “coder-decoder,” which refers to the two processes involved in video compression and decompression.
Different types of video codecs
H.264, also known as MPEG-4 Part 10 or AVC (Advanced Video Coding), is a popular video compression standard that is widely used for video encoding, streaming, and playback. It was developed by the Joint Video Team (JVT) of the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) in 2003. H.264 uses a technique called “block-based motion compensation” to reduce redundancy in video frames; the sketch below illustrates the basic idea.
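Here is a deliberately simplified, self-contained Python/NumPy sketch — not H.264 itself — that combines the two concepts described so far: predicting a block from the previous frame, then transforming and quantizing the residual so that most coefficients become zero, which is exactly the sparsity the entropy-coding stage later exploits. The block size, quantization step, and random test frames are arbitrary choices for illustration only.

```python
import numpy as np

BLOCK = 8  # block size used throughout this toy sketch

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

C = dct_matrix(BLOCK)

def encode_block(curr: np.ndarray, prev: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Predict a block from the previous frame, then DCT and quantize the residual."""
    residual = curr.astype(float) - prev.astype(float)   # prediction error
    coeffs = C @ residual @ C.T                          # 2-D DCT of the residual
    return np.round(coeffs / q_step)                     # coarse quantization

def decode_block(q_coeffs: np.ndarray, prev: np.ndarray, q_step: float = 16.0) -> np.ndarray:
    """Invert the quantization and the DCT, then add the prediction back."""
    residual = C.T @ (q_coeffs * q_step) @ C             # inverse 2-D DCT
    return prev.astype(float) + residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, size=(BLOCK, BLOCK))                      # "previous frame" block
    curr = np.clip(prev + rng.integers(-5, 6, size=(BLOCK, BLOCK)), 0, 255)  # nearly identical block

    q = encode_block(curr, prev)
    rec = decode_block(q, prev)

    # Most quantized coefficients are zero, which is what entropy coding exploits.
    print("non-zero coefficients:", int(np.count_nonzero(q)), "of", BLOCK * BLOCK)
    print("max reconstruction error:", float(np.abs(rec - curr).max()))
```

Real codecs search the previous frame for the best-matching block (motion estimation) rather than assuming the block has not moved, and H.264's block-based motion compensation is considerably more elaborate.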
This involves dividing each frame into small blocks and encoding each block separately using predictive coding, which means that the current block is predicted based on the previous blocks. The prediction error is then encoded and transmitted along with the prediction data, resulting in high compression efficiency. HEVC (High Efficiency Video Coding), also known as H.265, is a video compression standard that was developed as a successor to H.264. It was jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) and was first released in 2013. HEVC/H.265 uses many of the same techniques as H.264, but with improved efficiency. One of the key improvements is the use of larger blocks for motion compensation, which allows for more efficient encoding of high-resolution video. It also includes new coding tools such as intra prediction and sample adaptive offset, which improve the quality of compressed video. HEVC/H.265 is now widely used in many applications, including digital television, video conferencing, video surveillance, and streaming video services. However, due to the higher complexity of the encoding and decoding process, HEVC/H.265 requires more processing power than H.264, which may be a consideration for some applications. VVC (Versatile Video Coding), also known as H.266, is a video compression standard that was released in 2020 as a successor to HEVC/H.265. It was developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). VVC/H.266 includes many of the same features as HEVC/H.265, such as block-based motion compensation and intra prediction. However, it also introduces several new techniques to improve compression efficiency, including advanced intra prediction modes, improved motion compensation, and more efficient entropy coding. One of the key advantages of VVC/H.266 is its support for dynamic resolution scaling, which allows for efficient encoding of video content that includes varying levels of detail. It also includes support for 12-bit color depth and HDR (high dynamic range) video, which can improve the visual quality of compressed video. VP9 is a video compression format developed by Google in 2013 as an open and royalty-free alternative to H.264 and HEVC/H.265. It is part of the WebM project, which aims to provide an open and standardized video format for use on the web. VP9 uses many of the same techniques as H.264 and HEVC/H.265, such as block-based motion compensation and intra prediction. However, it also introduces several new techniques to improve compression efficiency, such as variable block sizes, improved motion compensation, and more efficient entropy coding. VP9 is supported by a wide range of devices and software, including Google’s Chrome and YouTube, as well as other web browsers and video players. It is also used in some streaming services, such as Netflix and Amazon Prime Video. AV1 is a video compression format that was developed by the Alliance for Open Media (AOMedia), a consortium of technology companies that includes Google, Mozilla, Amazon, and others. It was first released in 2018 as an open, royalty-free alternative to existing video compression formats such as H.264, HEVC/H.265, and VP9. AV1 uses advanced techniques such as block-based motion compensation, intra prediction, and entropy coding to achieve high compression efficiency. 
It also includes several new features such as advanced motion vector prediction and in-loop filtering to further improve compression efficiency. AV1 can achieve significantly higher compression efficiency than previous video compression formats, which means that it can deliver high-quality video at lower bitrates or higher-quality video at the same bitrate. This makes it particularly well-suited for streaming high-resolution video content over the internet or other low-bandwidth networks. MPEG-2 is a video compression standard that was developed by the Moving Picture Experts Group (MPEG) in the 1990s. It is a widely used format for digital video broadcasting and DVD-Video discs. MPEG-2 uses a combination of intra-frame and inter-frame compression techniques to achieve high compression efficiency. In intra-frame compression, each frame is compressed independently of the other frames in the sequence, while in inter-frame compression, only the differences between frames are encoded. MPEG-2 has been widely adopted in the broadcast industry and is used for a wide range of applications, including digital television, satellite broadcasting, and video production. It is also used in some consumer electronics devices, such as DVD players and digital set-top boxes. MPEG-4 is a video compression standard developed by the Moving Picture Experts Group (MPEG) in the late 1990s. It is designed to be a versatile and flexible format that can support a wide range of applications and devices. MPEG-4 uses a combination of intra-frame and inter-frame compression techniques to achieve high compression efficiency. It also includes a wide range of advanced features, such as object-oriented coding, 3D graphics, and support for different media types, including audio, video, and text. It is widely adopted in the multimedia industry and is used for a wide range of applications, including video production, video conferencing, and mobile video streaming. It is also used in some consumer electronics devices, such as smartphones and portable media players. VC-1 is a video compression standard that was developed by Microsoft as part of its Windows Media Video technology. It was first released in 2006 as a successor to the earlier Windows Media Video 9 format. VC-1 uses a combination of intra-frame and inter-frame compression techniques to achieve high compression efficiency. It also includes several advanced features, such as support for multiple video resolutions and frame rates, as well as advanced motion estimation and compensation techniques. DNxHD is a video compression format developed by Avid Technology that is used primarily in the post-production and broadcast industries. It is a high-quality, high-bitrate codec that is designed to preserve the quality of video content while minimizing file size. DNxHD uses intra-frame compression techniques to achieve high compression efficiency. Each frame is compressed independently of the other frames in the sequence, which allows for fast random access and editing of the video content without having to decode the entire video stream. DNxHD supports a wide range of video resolutions and frame rates, including standard definition (SD) and high definition (HD) video. It is used in a variety of applications, including video editing, broadcast television, and digital cinema. ProRes is a video compression format developed by Apple Inc. that is widely used in the post-production and broadcast industries. 
It is a high-quality, high-bitrate codec that is designed to preserve the quality of video content while minimizing file size. ProRes uses intra-frame compression techniques to achieve high compression efficiency. Each frame is compressed independently of the other frames in the sequence, which allows for fast random access and editing of the video content without having to decode the entire video stream. ProRes supports a wide range of video resolutions and frame rates, including standard definition (SD) and high definition (HD) video. It is used in a variety of applications, including video editing, broadcast television, and digital cinema.
Read our blog to know more about Cloud Video Encoding Vs On-Premise: Pros & Cons
Video encoding Vs Transcoding – The Difference
Video encoding and transcoding are two related but distinct processes in video processing.
Encoding is the process of compressing a video file by converting it into a digital format that requires less storage space while maintaining the visual quality of the original file. It involves compressing and encoding the video stream using a specific codec (e.g., H.264, HEVC/H.265, etc.) to reduce the file size while retaining as much visual quality as possible. The resulting compressed file is usually referred to as an encoded file or an encoded stream.
Transcoding is the process of converting a video file from one digital format to another. It involves converting a video from one codec to another (e.g., from H.264 to HEVC/H.265) or changing the container format (e.g., from AVI to MP4). Transcoding can also involve changing other parameters such as video resolution, frame rate, bit rate, and color space.
The main difference between encoding and transcoding is that encoding is focused on compressing the video stream to reduce its size, while transcoding is focused on converting the video file from one format to another. Encoding is usually done once, at the time of content creation, while transcoding may be done multiple times during the video production and distribution process to optimize the file for different devices and platforms. (A small hands-on illustration of this distinction appears at the end of this article.)
Applications of Video Encoding
Video encoding has numerous applications across various industries and domains. Here are some of the most common applications of video encoding:
Video streaming: Video encoding is a vital process for streaming video content over the internet. By compressing the video data using a specific codec, the video content can be transmitted over the internet in a way that minimizes bandwidth usage and buffering, while maintaining high-quality video playback.
DVD and Blu-ray disc authoring: Video encoding is an important process for creating DVDs and Blu-ray discs, which require high-quality video content to be compressed and authored onto the discs. Video encoding is used to compress the video content and create a digital master that can be used to author the disc.
Digital TV broadcasting: Video encoding is used in digital TV broadcasting to compress and transmit high-quality video content over the airwaves. By using a specific codec, broadcasters can transmit high-quality video content that is optimized for the digital TV format, while minimizing bandwidth usage.
Video conferencing: Video encoding is used in video conferencing systems to transmit high-quality video content between remote users. By using a specific codec, the video content can be compressed and transmitted in real time, allowing for high-quality video conferencing even over low-bandwidth connections.
Video editing and production: Video encoding is used in the video editing and production process to create high-quality video content that can be edited and manipulated in a variety of ways. By compressing the video data using a specific codec, video editors can work with large video files in a more efficient and streamlined manner.
Why would I use an encoding service?
An encoding service can be useful for a variety of reasons:
Encoding can reduce the size of large files, making them easier to store and transfer; this is especially useful for video and audio files, which can be quite large.
Different devices and software may have different encoding requirements. An encoding service can help ensure that your files are compatible with the devices and software you’re using.
Encoding can help standardize file formats, making it easier to share and exchange files with others.
Encoding can optimize files for specific purposes, such as streaming video or playing back audio, thus improving the quality of the media and making it more enjoyable to consume.
How can Muvi One help you in the Video Encoding process?
Muvi One’s in-built encoding and transcoding feature helps deliver a seamless, buffer-free streaming experience to your viewers with greater efficiency. By encoding your videos, it compresses the file size without compromising the quality. It also transcodes the encoded video into multiple resolution formats for buffer-free streaming, regardless of the internet speed and device used.
Encoding Engine workflow
Our current encoding architecture is flexible enough to pick up data from customers at any of the steps mentioned in the diagram above and process it further to provide a seamless streaming experience. A couple of recent examples where our encoding engine is partially used are given below:
- We are currently addressing MGM use cases starting from step 3, where MGM’s encoding partner OD Media provides us with the raw ABR files; we process them further, from wrapping the DRM package to streaming out via CDN (CloudFront).
- We have one more use case, SimplySouth, where the customer manages the entire encoding process and storage. They only provide us with the CDN output, and we process it further to stream out that content.
A more detailed view of the process involved in encoding is illustrated below.
1. Raw Content Encoding Request: The encoding engine starts its work with the raw content processing request. It can be initiated from our Muvi CMS or any third-party source.
2. Analyze the file & ABR Conversion:
- The encoding engine processes the downloaded file to verify all the meta information and, where possible, correct any missing information.
- The file is encoded to multiple resolutions depending on the configuration.
- The output can be DASH, HLS, or MP4, depending on the provided configuration.
3. Fragmentation: If the content requires DRM encoding, then fragmentation is a mandatory step.
4. DRM Packaging: The content is now wrapped in a DRM package with an encryption key and a content key.
5. Serve through CDN: The best way to deliver the content is via CDN, and our encoded content is now ready to be delivered through the CDN.
Video encoding plays a significant role in the creation, distribution, and consumption of digital media. With it, content creators, distributors, and viewers can ensure that their videos are of high quality, compatible with different devices and software, and accessible to a wider audience. Muvi One offers an integrated encoding feature that automatically converts your video content into a streamable digital format and compresses it for optimal delivery.
With Muvi, you can simply upload your video files and prepare to stream them seamlessly across a range of devices and platforms. There’s no need for an external encoder, as Muvi’s built-in encoding capability takes care of the entire process. Take a free trial to explore all the outstanding features of Muvi One!
Why is video encoding important?
Raw video files can take up a lot of storage space, making it difficult to store and transfer them. By compressing the video through encoding, the file size is reduced, allowing for more efficient use of storage space. Video files that are not compressed can take a long time to transfer over networks, especially when dealing with large files. Encoding reduces the size of the video file, making it easier to transmit over networks, resulting in faster streaming or downloads.
What is video encoding?
Video encoding is the process of compressing and converting a raw video file into a digital format that can be easily stored, transmitted, and played back on various devices. Raw video files typically contain a large amount of data, and encoding compresses this data to reduce the file size.
What are the different video encoding formats?
MP4 (MPEG-4 Part 14)
FLV (Flash Video)
MOV (QuickTime File Format)
MKV (Matroska Video)
LXF (Sony Professional Disc)
MXF (Material Exchange Format)
AVI (Audio Video Interleave)
How does video encoding work?
Video encoding works by compressing a digital video file into a smaller size using software or hardware codecs. The typical steps — preprocessing, transform, quantization, entropy encoding, and bitstream formatting — are described in detail earlier in this article.
What factors affect video encoding quality?
Several factors can affect video encoding quality, including:
- Compression settings
- Source material
- Encoding time
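Finally, as a hands-on illustration of the encoding-versus-transcoding distinction discussed earlier, the short Python sketch below shells out to the open-source ffmpeg tool. This is only an example of the general workflow, not how Muvi One's pipeline is implemented; it assumes an ffmpeg build with the libx264 and libx265 encoders is installed, and the file names are placeholders.

```python
import subprocess

# 1) ENCODING: take a large source file (e.g., a camera master) and compress it
#    with H.264 into an MP4 suitable for general playback.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "source_master.mov",      # placeholder input file
    "-c:v", "libx264",              # H.264 video encoder
    "-crf", "23",                   # quality-based rate control
    "-preset", "medium",            # speed/efficiency trade-off
    "-c:a", "aac",                  # compress the audio track as well
    "encoded_h264.mp4",
], check=True)

# 2) TRANSCODING: convert the already-encoded file to a different codec (HEVC)
#    and a lower resolution for bandwidth-constrained devices.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "encoded_h264.mp4",
    "-vf", "scale=1280:720",        # change the resolution
    "-c:v", "libx265",              # HEVC/H.265 video encoder
    "-crf", "28",
    "-c:a", "copy",                 # keep the existing audio stream untouched
    "transcoded_720p_hevc.mp4",
], check=True)
```

In a real adaptive-bitrate workflow, the second step would typically be repeated once per rung of the resolution and bitrate ladder.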
The nervous system of the abdomen, lower back, and pelvis contains many important nerve conduits that service this region of the body as well as the lower limbs. This section of the nervous system features the most inferior portion of the spinal cord along with many major nerves, plexuses, and ganglia that serve the vital organs of the abdominopelvic cavity. As the spinal cord descends through the vertebral canal of the lower back, it tapers to a point known as the conus medullaris around the L2 vertebra. The actual point of tapering is dependent on an individual’s age and height as the spinal cord completes its longitudinal growth during infancy, long before the growth spurts of childhood and adolescence. From the conus medullaris many individual spinal nerves descend through the vertebral canal of the lumbar and sacral spine in a structure known as the cauda equina, which is Latin for “horse’s tail.” Each pair of spinal nerves in the cauda equina exits the vertebral canal at foramen between the vertebrae of the lumbar spine or in the sacrum. Upon exiting the vertebral canal, the spinal nerves of the lower back form into two networks known as the lumbar and sacral plexuses. The lumbar plexus supplies nerves to the skin and muscles of the lateral abdominal region, thigh, anterior thigh, and external genitals. The sacral plexus similarly supplies nerves to the skin and muscles of the posterior thigh, leg, and foot. The sciatic nerve, the largest and longest nerve in the human body, carries a major portion of the nerve signals from the sacral plexus into the leg before separating into many smaller branches. The spinal nerves of the lower back also carry many neurons of the autonomic nervous system (ANS) that maintain the vital involuntary processes of the digestive, urinary, endocrine, and reproductive systems. Neurons from the sympathetic division of the ANS extend from the spinal cord to form nerves that meet at several autonomic ganglia in the abdomen. Each autonomic ganglion, such as the celiac ganglion, forms a plexus of nerve fibers that extend to the organs of the abdomen and pelvis to control their function. The parasympathetic division of the ANS is also represented in the abdomen and pelvis through the vagus nerve and the sacral nerves. The vagus nerve is a cranial nerve that wanders from the base of the brain parallel to the spinal cord to stimulate digestion in the liver, stomach, and intestines. Parasympathetic neurons in the spinal cord pass through the sacral nerves in the lower back to reach the pelvic organs such as the bladder and reproductive organs to control their functions. Between the opposing functions of the sympathetic and parasympathetic divisions of the ANS, the nervous system is effectively able to control all of the organs of the abdomen and pelvis. A frequently overlooked portion of the nervous system is the enteric nervous system (ENS) found in the gastrointestinal tract. The ENS is a network of around 100 million neurons that regulate the functions of the digestive tract. The ENS monitors the contents of the gastrointestinal tract; decides how to digest its contents most effectively; and controls the movements of smooth muscles and the secretion of glands that results in the digestion of food to provide nutrients to the body. Although the brain is able to monitor and control the ENS through autonomic neurons, the ENS often works autonomously and can even function after its nervous connections to the brain have been destroyed.
Operations and Algebraic Thinking - Add and subtract within 20. - Represent and solve problems involving addition and subtraction. - Understand and apply properties of operations and the relationship between addition and subtraction. - Work with addition and subtraction equations. Number and Operations in Base Ten - Extend the counting sequence. - Understand place value. - Use place value understanding and properties of operations to add and subtract Measurement and Data - Measure lengths indirectly and by iterating length units. Sequence of Grade 1 Modules Aligned with the Standards Module 1: Sums and Differences to 10 Module 2: Introduction to Place Value Through Addition and Subtraction Within 20 Module 3: Ordering and Comparing Length Measurements as Numbers Module 4: Place Value, Comparison, Addition and Subtraction to 40 Module 5: Identifying, Composing, and Partitioning Shapes Module 6: Place Value, Comparison, Addition and Subtraction to 100
Regulation of Gene Expression
Not all genes are active at all times; genes are expressed only when their protein products are needed. These are thus called regulated genes: they are made to function only when required and remain non-functional at other times. Such regulated genes, therefore, have to be switched ‘on’ or ‘off’ when a particular function is to begin or stop. These genes, together with the sequences that control them, form an ‘operon’.
Lac Operon
- An operon consists of structural genes, an operator gene, a promoter gene, a regulator gene, and a repressor.
- The lac operon contains lac Z, lac Y, and lac A as its structural genes. These genes code for specific enzymes: lac Z codes for β-galactosidase, lac Y codes for permease, and lac A codes for transacetylase. When repressor molecules bind the operator, the transcription process is inhibited.
- When the repressor does not bind the operator and the inducer binds the repressor instead, transcription is switched on. In the case of the lac operon, lactose is the inducer, so the binding of lactose to the repressor switches on transcription.
- In transcription, messenger RNA is produced with the help of the RNA polymerase enzyme. The main function of the mRNA is to facilitate the synthesis of a protein.
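The on/off switch described above can be summarized as a small truth table. The sketch below is only a toy model of what this section states (an active repressor bound to the operator blocks transcription; lactose binds and inactivates the repressor); it deliberately ignores other regulatory inputs, such as glucose levels, that the text does not cover.

```python
def lac_operon_transcribed(lactose_present: bool) -> bool:
    """Toy model of the lac operon switch described above.

    The regulator gene constantly produces repressor protein. If lactose (the
    inducer) is present, it binds the repressor, so the repressor cannot sit on
    the operator and RNA polymerase can transcribe lac Z, lac Y, and lac A.
    """
    repressor_active = not lactose_present   # the inducer inactivates the repressor
    operator_blocked = repressor_active      # an active repressor binds the operator
    return not operator_blocked              # transcription proceeds only if the operator is free

for lactose in (False, True):
    state = "ON" if lac_operon_transcribed(lactose) else "OFF"
    print(f"lactose present: {lactose!s:5} -> transcription {state}")
```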
Ultrasound, also known as ultrasonography, is a medical imaging technique that uses high-frequency sound waves to create images of the inside of the body. It is a non-invasive and safe method for visualizing various organs and tissues in the human body.
- Ultrasound, or ultrasonography, is a medical imaging technique that uses high-frequency sound waves.
- It is non-invasive and safe, involving no ionizing radiation, making it suitable for various patient populations, including pregnant women and children.
- Ultrasound works by sending sound waves into the body, which bounce off internal structures and create echoes.
- These echoes are captured by a transducer, which also emits the sound waves, and are then processed to generate real-time images.
- Transducers are placed on the skin, and a gel is often used to facilitate sound wave transmission.
- 2D ultrasound provides two-dimensional cross-sectional images in real time, while 3D ultrasound produces three-dimensional images.
- 4D ultrasound, or real-time 3D ultrasound, provides dynamic three-dimensional images, useful in obstetrics for viewing fetal movements.
- Doppler ultrasound measures the Doppler shift in reflected sound waves to assess blood flow velocity and direction.
- Color Doppler and Power Doppler ultrasound use color to represent blood flow direction and speed, aiding in vascular and cardiac imaging.
- Contrast-enhanced ultrasound uses contrast agents to improve the visualization of blood flow and lesion characterization.
- Ultrasound is versatile and used in various medical specialties, including obstetrics, cardiology, radiology, and more.
- It plays a critical role in prenatal care, monitoring fetal development and health during pregnancy.
- In obstetrics, it can be used for gender determination, assessing the placenta, and identifying birth defects.
- Echocardiography is a vital application of ultrasound, enabling the assessment of the heart’s structure and function.
- Musculoskeletal ultrasound helps diagnose and monitor conditions related to muscles, tendons, ligaments, and joints.
- Vascular ultrasound assesses blood flow in arteries and veins, aiding in the diagnosis of vascular diseases.
- Ultrasound can complement mammography in breast cancer screening and evaluation of breast abnormalities.
- It is used to evaluate thyroid nodules and masses in the neck region.
- In emergencies, ultrasound can be used to quickly assess internal injuries or fluid accumulation in the body.
- Ultrasound images are operator-dependent, and the quality of the images can vary based on the operator’s skill.
- Limited penetration through bone or air-filled structures can be a limitation of ultrasound.
- Tissue characterization, such as distinguishing between benign and malignant masses, can be challenging with ultrasound alone.
- Advances in technology have led to the development of portable and handheld ultrasound devices.
- 3D and 4D ultrasound have improved visualization and diagnostic capabilities in various medical fields.
- Ultrasound continues to evolve with ongoing research, offering promising future applications and improvements in imaging quality and accuracy.
Definition of Ultrasound: Ultrasound is a medical imaging technique that uses high-frequency sound waves to create real-time images of the inside of the body.
Purpose of Ultrasound:
- Diagnostic Imaging: Ultrasound is primarily used for diagnostic purposes to visualize and assess the internal structures of the body.
It provides valuable information about the size, shape, location, and condition of organs, tissues, and blood vessels. Common diagnostic applications include: - Obstetric ultrasound to monitor fetal development during pregnancy. - Abdominal ultrasound to assess the liver, gallbladder, pancreas, kidneys, and other abdominal organs. - Cardiac ultrasound (echocardiography) to evaluate the heart’s structure and function. - Musculoskeletal ultrasound to examine muscles, tendons, ligaments, and joints. - Vascular ultrasound to assess blood flow in arteries and veins. - Breast ultrasound to complement mammography in breast cancer screening and evaluation. - Thyroid and neck ultrasound for evaluating thyroid nodules and neck masses. - Guidance for Medical Procedures: Ultrasound is frequently used to guide medical procedures, making them more accurate and less invasive. It helps healthcare providers visualize the target area in real time during procedures such as: - Biopsies: Ultrasound-guided biopsies help in obtaining tissue samples from suspicious masses or lesions. - Aspirations: It aids in draining fluid collections or cysts. - Injections: Ultrasound guidance is used for precise delivery of medication or pain relief injections. - Needle placement: It assists in accurate placement of needles for various medical interventions. - Monitoring and Assessment: Ultrasound is employed to monitor ongoing medical conditions and assess treatment effectiveness. For example: - In obstetrics, it monitors fetal growth, development, and well-being during pregnancy. - In cardiology, it tracks changes in cardiac function and blood flow. - In vascular medicine, it evaluates the progression of vascular diseases and the outcomes of interventions. - Dynamic Visualization: Ultrasound provides real-time, dynamic imaging, making it ideal for observing moving structures and processes, such as: - Cardiac ultrasound to visualize the beating heart. - Fetal ultrasound to observe fetal movements and heart rate. - Musculoskeletal ultrasound to assess joint mobility and tendon movements. - Evaluation of blood flow in arteries and veins. - Therapeutic Applications: In some cases, ultrasound is used therapeutically for purposes like: - High-intensity focused ultrasound (HIFU) for targeted tissue ablation, including the treatment of certain tumors. - Ultrasound-assisted drug delivery, where ultrasound waves enhance the penetration of drugs into tissues. - Early Experiments (Late 19th and Early 20th Century): Scientists like Pierre and Jacques Curie made early discoveries related to ultrasound and piezoelectricity, laying the groundwork for ultrasound technology. - WWII Sonar Development (1930s-1940s): The development of sonar during World War II led to advances in ultrasound technology, as sonar used similar principles of sound wave propagation. - First Medical Ultrasound Experiments (1940s-1950s): Medical researchers began experimenting with ultrasound for diagnostic purposes, initially focusing on the detection of gallstones. - A-mode and B-mode Scanning (1950s): A-mode (amplitude mode) and B-mode (brightness mode) ultrasound scanning techniques were introduced, allowing for the visualization of organs and tissues. - Continued Innovation (1960s-1970s): The 1960s saw significant advancements, including the development of real-time ultrasound imaging, which enabled dynamic visualization of the human body. 
- Obstetrical Applications (1960s-1970s): Ultrasound became widely used in obstetrics for monitoring fetal development, and the first commercial ultrasound machines were introduced. - Color Doppler Ultrasound (1970s): The 1970s saw the introduction of color Doppler ultrasound, a technique for visualizing blood flow within the body. - 3D Ultrasound (1980s): Three-dimensional ultrasound imaging was developed, providing enhanced visualization of structures, particularly in obstetrics. - 4D Ultrasound (1990s): Real-time 3D ultrasound, often referred to as 4D ultrasound, allowed for the observation of moving structures within the body. - Advancements in Transducer Technology (2000s): Ongoing technological improvements led to the development of more sophisticated transducers and higher-resolution imaging. - Portable and Handheld Ultrasound Devices (2010s): The 2010s saw the emergence of portable and handheld ultrasound devices, expanding the accessibility of ultrasound imaging. - Ongoing Research and Innovation (Present): Ultrasound technology continues to evolve, with ongoing research focusing on improving image quality, diagnostic accuracy, and the development of new applications. Ultrasound Imaging Modalities: 2D Ultrasound: This is the most common and basic form of ultrasound imaging, providing two-dimensional cross-sectional images of internal structures in real time. 3D Ultrasound: 3D ultrasound provides three-dimensional images of structures within the body. It is particularly valuable in obstetrics for visualizing fetal development and in various medical specialties for enhanced anatomical visualization. 4D Ultrasound: Also known as real-time 3D ultrasound, 4D ultrasound provides dynamic three-dimensional images, showing moving structures in real time. It is often used in obstetrics to capture fetal movements and expressions. Doppler Ultrasound: Doppler ultrasound is a specialized technique used to assess blood flow within blood vessels. It measures the Doppler shift in the frequency of reflected ultrasound waves to visualize the direction and velocity of blood flow. Color Doppler Ultrasound: This modality adds color to the Doppler images to represent the direction and speed of blood flow. It is widely used in vascular and cardiac imaging. Power Doppler Ultrasound: Power Doppler is a sensitive technique that provides information about blood flow even in low-velocity vessels. It is useful for detecting slow or weak blood flow, particularly in small vessels. Contrast-Enhanced Ultrasound: Contrast agents are used in this modality to enhance the visualization of blood flow and the characterization of lesions. Microbubbles within the contrast agents produce echoes that enhance the ultrasound signal. Applications of Ultrasound: Ultrasound is a versatile medical imaging technique with a wide range of applications across various medical specialties. Here are some of the key applications of ultrasound: - Obstetrics and Gynecology: - Monitoring fetal development during pregnancy. - Assessing the position and health of the fetus. - Determining the baby’s gender. - Detecting multiple pregnancies. - Evaluating the placenta and amniotic fluid levels. - Abdominal Imaging: - Visualizing the liver, gallbladder, pancreas, spleen, and kidneys. - Detecting and characterizing abdominal masses, cysts, or tumors. - Assessing the gastrointestinal tract and blood vessels within the abdomen. - Cardiac Imaging (Echocardiography): - Evaluating the structure and function of the heart. 
- Assessing heart valve function and blood flow. - Diagnosing congenital heart defects. - Monitoring cardiac conditions and assessing heart damage. - Musculoskeletal Imaging: - Examining muscles, tendons, ligaments, and joints. - Identifying sports injuries, fractures, and soft tissue abnormalities. - Guiding injections and aspirations for pain relief. - Vascular Imaging: - Assessing blood flow in arteries and veins. - Detecting and diagnosing conditions such as deep vein thrombosis (DVT) and arterial stenosis. - Guiding vascular procedures and surgeries. - Breast Imaging: - Complementing mammography in breast cancer screening. - Evaluating breast lumps and abnormalities. - Assessing the vascularity of breast lesions. - Thyroid and Neck Imaging: - Evaluating thyroid nodules and masses. - Assessing neck masses and lymph nodes. - Guiding fine-needle aspiration (FNA) procedures. - Emergency and Trauma: - Rapid assessment of internal injuries in trauma cases. - Identifying the presence of fluid or blood in body cavities. - Guiding emergency procedures like chest tube insertion. - Guidance and Interventional Ultrasound: - Assisting in ultrasound-guided biopsies and aspirations. - Guiding needle placements for various medical procedures. - Urological Applications: - Visualizing the kidneys, bladder, and prostate. - Diagnosing urinary tract infections and kidney stones. - Guiding prostate biopsies. - Ophthalmic Ultrasound: - Evaluating the structures of the eye, such as the retina and lens. - Assessing conditions like retinal detachment or tumors. - Pelvic Imaging: - Evaluating pelvic organs in both men and women. - Diagnosing conditions like ovarian cysts, uterine fibroids, and prostate enlargement. - Pediatric Imaging: - Assessing various conditions in infants and children, including congenital anomalies and developmental disorders. - Gastrointestinal (GI) Imaging: - Visualizing the gastrointestinal tract to diagnose conditions like Crohn’s disease, diverticulitis, and appendicitis. - Imaging the brain and spinal cord in neonates and infants. - Detecting intracranial hemorrhage or congenital brain abnormalities. Principles of Ultrasound: The principles of ultrasound involve the use of high-frequency sound waves to create images of the inside of the body. Here are the fundamental principles of ultrasound: - Sound Wave Generation: Ultrasound imaging begins with the generation of high-frequency sound waves, typically in the range of 1 to 20 megahertz (MHz). These sound waves are beyond the range of human hearing. - Piezoelectric Effect: Ultrasound transducers, which are handheld devices used in ultrasound machines, contain piezoelectric crystals. When an electrical voltage is applied to these crystals, they vibrate and emit sound waves. Conversely, when sound waves strike the crystals, they produce electrical signals. - Sound Wave Propagation: The generated ultrasound waves are directed into the body by the transducer. The waves propagate through tissues and organs. - Echo Formation: As ultrasound waves encounter different tissues with varying densities, some of the waves are reflected (echoed) back toward the transducer. The amount of reflection depends on the tissue’s density and acoustic properties. - Echo Detection: The transducer not only emits sound waves but also acts as a receiver. It detects the echoes of the waves that bounce back from within the body. - Signal Processing: The returning echoes are converted into electrical signals, and their timing and strength are analyzed. 
A computer processes these signals to create visual representations of the echoes in real time. - Image Formation: The processed signals are used to create grayscale images where different shades of gray correspond to the intensity or strength of the returning echoes. The timing of the echoes helps determine the depth of structures within the body. - Real-Time Imaging: Ultrasound is known for its real-time imaging capabilities. It can continuously produce images, allowing for the visualization of moving structures, such as a beating heart. - Transducer Movement: To obtain different views and angles, the ultrasound transducer is moved or rotated over the area of interest on the patient’s body. This allows for the scanning of various cross-sections and dimensions. - Display: The images created through signal processing are displayed on a monitor for the medical professional to interpret. - Anatomy Visualization: Different tissues and structures within the body reflect ultrasound waves differently. Fluid-filled structures, like cysts, often appear black on ultrasound, while denser tissues, like bone, appear white. - Doppler Effect: In addition to creating static images, ultrasound can also measure the Doppler shift in the frequency of reflected waves, allowing for the assessment of blood flow velocity and direction. - Color Doppler and Power Doppler: Specialized modes of ultrasound, such as color Doppler and power Doppler, use color to represent the direction and speed of blood flow within vessels, aiding in vascular and cardiac imaging. Ultrasound Equipment and Technology: Ultrasound equipment and technology have evolved significantly since the inception of ultrasound imaging. Today, modern ultrasound machines are sophisticated devices equipped with advanced features. Here are key components and technologies associated with ultrasound equipment: - Transducer: The transducer is the core component of an ultrasound machine. It contains piezoelectric crystals that emit and receive ultrasound waves. Different transducers are designed for various applications, such as abdominal, cardiac, or transvaginal imaging. - Probe or Probe Head: The transducer is often referred to as the probe or probe head. It is placed directly on the patient’s skin, and a coupling gel is applied to facilitate the transmission of sound waves between the probe and the body. - Control Panel: The control panel allows the operator to adjust imaging parameters such as frequency, depth, gain, and focus. It also provides access to different imaging modes. - Display: Modern ultrasound machines are equipped with high-resolution monitors that display real-time images in grayscale or color. Some machines offer touchscreen displays for user interaction. - Keyboard and Controls: These input devices enable the operator to select options, annotate images, and adjust settings. Keyboards may be physical or integrated into the touchscreen interface. - Doppler Capabilities: Many ultrasound machines have Doppler capabilities to assess blood flow. Color Doppler displays blood flow direction and velocity in color, while spectral Doppler provides waveform analysis. - 3D and 4D Imaging: Some ultrasound machines support 3D and 4D imaging, allowing for the creation of three-dimensional images and real-time 4D videos of moving structures. - Image Storage and Archiving: Modern ultrasound systems include storage and archiving capabilities to save images and patient data. These can be stored digitally and easily retrieved for future reference. 
- Image Processing: Sophisticated image processing algorithms enhance the quality of ultrasound images. This includes noise reduction, edge enhancement, and image optimization. - Networking and Connectivity: Ultrasound machines may have network capabilities for sharing images and data with other healthcare systems or for remote consultation. - Portable and Handheld Devices: Portable ultrasound machines are compact and lightweight, suitable for point-of-care use. Handheld ultrasound devices are even smaller and offer convenience for various applications. - Wireless Transducers: Some modern systems incorporate wireless transducers, eliminating the need for cable connections and increasing flexibility during scanning. - Elastography: Elastography is a technology that assesses tissue stiffness or elasticity, aiding in the differentiation of benign and malignant lesions. - Contrast-Enhanced Ultrasound: This technology involves the use of contrast agents to enhance visualization, particularly in the assessment of blood flow and lesion characterization. - Artificial Intelligence (AI): AI and machine learning algorithms are increasingly integrated into ultrasound systems to assist in image analysis, automate measurements, and enhance diagnostic accuracy. - Telemedicine Integration: Ultrasound machines may support telemedicine and remote consultation by enabling real-time sharing of ultrasound images and patient data. - Workflow Optimization: Modern ultrasound systems aim to streamline workflow with features like customizable presets and automatic calculations for measurements. - Transesophageal and Intracavity Probes: Specialized probes, like transesophageal and intracavity probes, are designed for specific applications, such as cardiac imaging and transrectal examinations. - Biopsy and Interventional Guidance: Ultrasound machines are often used to guide biopsies, aspirations, and other interventional procedures, providing real-time visualization during the procedure. - Maintenance and Service Tools: Maintenance and diagnostic tools are included to ensure the reliability and performance of the ultrasound equipment. Procedure and Reporting: Procedure and reporting in medical imaging, including ultrasound, are essential aspects of patient care and diagnosis. Here is an overview of the typical procedure and reporting process for ultrasound examinations: - Patient Preparation: - The patient may be required to fast for a certain period before the exam, particularly for abdominal or pelvic ultrasounds. - Depending on the type of ultrasound, the patient may need to wear a hospital gown or remove specific clothing items. - Informed Consent: - In some cases, patients are asked to provide informed consent before the procedure, especially if it involves invasive or contrast-enhanced techniques. - The patient is positioned to expose the area of interest, and the ultrasound technologist (sonographer) or healthcare provider applies a water-based gel to the skin. This gel facilitates sound wave transmission between the transducer and the body. - Transducer Placement: - The ultrasound transducer (probe) is placed on the patient’s skin over the area of interest. - The transducer may need to be moved or repositioned to obtain different views or images. - Image Acquisition: - The sonographer or healthcare provider captures images of the internal structures by directing the transducer to the desired location. - Real-time images are displayed on the monitor during the procedure. 
- Image Optimization: - The operator adjusts imaging parameters, such as frequency, depth, gain, and focus, to optimize the quality of the images. - Doppler modes may be used to assess blood flow if necessary. - The sonographer documents relevant findings during the procedure, including measurements, observations, and images. - Patient Comfort and Communication: - Throughout the procedure, the operator communicates with the patient, explaining the process and providing reassurance. - Completion of the Examination: - Once the necessary images and data are obtained, the ultrasound examination is completed. - Image Review: - The acquired images and data are reviewed by a radiologist or healthcare provider with expertise in medical imaging. - The interpreting physician analyzes the images and assesses the findings in the context of the patient’s clinical history and symptoms. - Report Generation: - A formal medical report is generated, summarizing the examination, findings, and impressions. - The report may include a description of the structures imaged, any abnormalities detected, measurements, and relevant clinical recommendations. - The report is typically communicated to the referring physician or healthcare provider who requested the ultrasound examination. - In some cases, the report may be discussed directly with the patient. - Depending on the findings, further diagnostic tests or treatments may be recommended, and the patient’s management plan is determined. - The ultrasound images and the corresponding report are archived in the patient’s medical record for future reference and comparison. Advantages and Limitations of Ultrasound: Ultrasound imaging has several advantages and limitations, which are important to consider when using this diagnostic modality. Here’s a breakdown of the key advantages and limitations of ultrasound: - Safety: Ultrasound does not use ionizing radiation, making it safe for repeated use, especially in sensitive populations like pregnant women and children. - Non-Invasiveness: Ultrasound is a non-invasive imaging technique that does not require the insertion of needles, tubes, or other invasive instruments into the body. - Real-Time Imaging: Ultrasound provides real-time imaging, allowing for the observation of moving structures within the body, such as the beating heart or fetal movements during pregnancy. - Dynamic Assessment: It can assess dynamic processes like blood flow, making it valuable for assessing vascular conditions and cardiac function. - Portability: Ultrasound machines vary in size, with portable and handheld devices available for point-of-care and emergency use, enhancing accessibility. - Cost-Effective: Compared to other imaging modalities like MRI or CT scans, ultrasound is generally more cost-effective. - Versatility: Ultrasound is used in various medical specialties, including obstetrics, cardiology, musculoskeletal, and emergency medicine. - Minimal Discomfort: Patients typically experience minimal discomfort during the procedure, as it is painless and non-invasive. - No Contrasting Agents (in most cases): Unlike some other imaging modalities, ultrasound often does not require the use of contrasting agents, reducing the risk of allergic reactions. - No Special Preparations: Patients usually do not need to fast or undergo special preparations before an ultrasound examination, making it more convenient. 
- Limited Penetration: Ultrasound waves do not penetrate dense or air-filled structures well, limiting their effectiveness in visualizing structures behind bone or gas-filled areas. - Operator-Dependent: The quality of ultrasound images depends on the operator’s skill and experience. Inexperienced operators may produce lower-quality images. - Limited Tissue Characterization: While ultrasound can show structural details, it may not provide detailed information about tissue composition, making it less effective in distinguishing between benign and malignant lesions. - Obesity and Patient Factors: Obesity and certain patient factors, such as excessive gas or movement, can make ultrasound imaging more challenging. - Operator Fatigue: Performing prolonged ultrasound examinations can be physically demanding for operators. - Inability to Image Bone: Ultrasound is not suitable for visualizing bone structures, which absorb and block ultrasound waves. - Limited Field of View: The field of view in ultrasound is often limited, requiring the operator to scan multiple areas for a comprehensive assessment. - Lower Resolution in Deep Tissues: The resolution of ultrasound images decreases with increasing depth, making it less effective for imaging structures deep within the body. - Limited Visualization of Certain Organs: Some organs, like the lungs, are difficult to visualize with ultrasound due to their air-filled nature. - Difficulty with Thick Soft Tissues: Thick layers of soft tissue may hinder the penetration of ultrasound waves and limit image quality. Safety and Precautions in Ultrasound: Safety and precautions in ultrasound are essential to ensure the well-being of both patients and healthcare providers during ultrasound examinations. Here are some key safety considerations and precautions: - Pregnancy: Ultrasound is generally considered safe during pregnancy and is commonly used for monitoring fetal development. However, it’s essential to use ultrasound judiciously, following medical guidelines and recommendations. - Informed Consent: Patients should be informed about the procedure, its purpose, and any potential risks or benefits. Informed consent should be obtained before performing an ultrasound, especially for invasive or contrast-enhanced procedures. - Allergies: Patients should inform the healthcare provider of any known allergies, particularly if a contrast agent (microbubble) is being used. - Dress Code: Patients may be asked to wear a hospital gown or remove specific clothing items to ensure proper access to the area being examined. - Gel Sensitivity: Some patients may be sensitive or allergic to the ultrasound gel. Healthcare providers should be aware of this possibility and use alternative gels if necessary. - Patient Comfort: Ensuring patient comfort during the examination is important. Proper positioning and communication can help minimize any discomfort. For Healthcare Providers: - Training and Certification: Healthcare providers, particularly sonographers and radiologists, should have appropriate training and certification in ultrasound imaging to ensure safe and accurate examinations. - Gel Quality: Ensure the ultrasound gel used is of good quality and does not cause skin irritation or allergic reactions. - Transducer Disinfection: Properly disinfect and clean ultrasound transducers between patients to prevent cross-contamination and the spread of infections. 
- Ergonomics: Healthcare providers should maintain proper ergonomic posture during scanning to prevent work-related injuries and musculoskeletal strain. - Appropriate Use: Use ultrasound judiciously, following clinical guidelines and recommendations. Avoid unnecessary or repetitive examinations, particularly in sensitive populations. - Invasive Procedures: When performing invasive procedures, such as ultrasound-guided biopsies or aspirations, follow strict aseptic techniques to minimize the risk of infection. - Patient Privacy: Ensure patient privacy and dignity by using drapes or curtains during examinations when appropriate. - Radiation Safety: In situations where ultrasound and other imaging modalities (e.g., fluoroscopy or CT) are used together, be aware of radiation safety protocols and measures. - Contrast Agent Safety: If using contrast agents (microbubbles) during the examination, ensure they are administered following established protocols and with proper monitoring. - Emergency Response: Be prepared for emergencies, such as severe allergic reactions to contrast agents. Have necessary medications and equipment on hand and know how to use them. - Continuing Education: Stay updated on the latest developments in ultrasound technology, safety guidelines, and best practices through continuing education and training. - Documentation: Accurate and thorough documentation of the ultrasound procedure, findings, and any complications is essential for patient records and future reference. 1. What is ultrasound? Ultrasound, or ultrasonography, is a medical imaging technique that uses high-frequency sound waves to create real-time images of the inside of the body. 2. How does ultrasound work? Ultrasound works by emitting sound waves from a transducer into the body. These waves bounce off internal structures and return as echoes, which are then processed to create images. 3. Is ultrasound safe? Yes, ultrasound is considered safe because it does not involve ionizing radiation. It is widely used in prenatal care and medical imaging. 4. What are the common applications of ultrasound? Ultrasound is used in various medical specialties, including obstetrics, cardiology, musculoskeletal imaging, vascular imaging, and more. 5. Can ultrasound detect cancer? Ultrasound can help identify and characterize certain types of cancerous and non-cancerous masses and lesions. However, it may not be the primary diagnostic tool for all types of cancer. 6. Is ultrasound used for gender determination during pregnancy? Yes, ultrasound can often determine the gender of a fetus during pregnancy, typically around the 18-20 week mark. 7. What is the difference between 2D, 3D, and 4D ultrasound? 2D ultrasound provides two-dimensional cross-sectional images, while 3D ultrasound creates three-dimensional images. 4D ultrasound, or real-time 3D ultrasound, captures moving 3D images, offering dynamic views. 8. How should I prepare for an ultrasound examination? Preparation instructions can vary depending on the type of ultrasound. In general, you may be asked to fast for certain exams, wear comfortable clothing, and arrive with an empty bladder if necessary. 9. Are there any risks or limitations associated with ultrasound? Ultrasound is considered safe, but it has limitations, such as limited penetration through bone or air-filled structures. Additionally, the quality of images can be operator-dependent. 10. Can ultrasound be used for guidance during medical procedures? 
Yes, ultrasound is often used for guidance during procedures like biopsies, aspirations, and injections to visualize the target area in real time. 11. How long does an ultrasound examination typically take? The duration of an ultrasound examination varies depending on the type and complexity of the exam. It can range from a few minutes to over an hour. 12. Can ultrasound be used to monitor blood flow in vessels? Yes, Doppler ultrasound is a technique used to assess blood flow within arteries and veins, helping diagnose vascular conditions. 13. Are there any special considerations for pediatric or elderly patients during ultrasound? Pediatric and elderly patients may require special care and attention during ultrasound exams due to their unique needs and potential limitations. 14. Can I have an ultrasound if I’m pregnant? Yes, ultrasound is commonly used during pregnancy for prenatal monitoring and assessment of fetal development. It is considered safe for pregnant women. 15. How can I find a qualified ultrasound technician or sonographer? It’s advisable to seek healthcare facilities or imaging centers with accredited ultrasound departments and certified sonographers for your examinations. In conclusion, ultrasound is a valuable medical imaging technique that uses high-frequency sound waves to create real-time images of the internal structures of the body. Its advantages include safety, non-invasiveness, real-time imaging, and versatility across various medical specialties. However, ultrasound also has limitations, such as limited penetration through bone and operator-dependent image quality. Safety and precautions are critical in ultrasound to protect both patients and healthcare providers, including considerations for patient comfort, informed consent, and proper transducer disinfection. Ultrasound continues to evolve with advancements in technology, including 3D and 4D imaging, artificial intelligence integration, and portable and handheld devices, expanding its applications and accessibility in modern healthcare.
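To make the pulse-echo principle from the FAQ above more concrete, here is a minimal Python sketch that estimates how deep a reflector sits from the echo's round-trip time; it assumes the conventional nominal speed of sound in soft tissue of about 1540 m/s, and the function name and example values are illustrative only, not part of any scanner's software.

```python
# Minimal sketch of the pulse-echo timing principle used in ultrasound imaging.
# Assumes an average speed of sound in soft tissue of ~1540 m/s (a standard
# calibration value); real scanners apply more sophisticated corrections.

SPEED_OF_SOUND_SOFT_TISSUE_M_S = 1540.0  # illustrative nominal value

def echo_depth_m(round_trip_time_s: float,
                 speed_m_s: float = SPEED_OF_SOUND_SOFT_TISSUE_M_S) -> float:
    """Estimate reflector depth from the round-trip time of an echo.

    The pulse travels to the reflector and back, so the one-way depth is
    speed * time / 2.
    """
    return speed_m_s * round_trip_time_s / 2.0

if __name__ == "__main__":
    # An echo returning 65 microseconds after the pulse was transmitted
    # corresponds to a reflector roughly 5 cm deep.
    t = 65e-6  # seconds
    print(f"Estimated depth: {echo_depth_m(t) * 100:.1f} cm")
```

This timing relationship is also why imaging deeper structures takes longer per pulse and, combined with attenuation, why resolution falls off with depth.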
Most people get that some snakes are dangerous and others are not, but not everybody understands how to distinguish venomous snakes from harmless ones. Many inaccurate traditional guidelines exist, which could cause life-threatening mistakes for laypersons. One such mistaken guideline suggests that all venomous snakes have elliptical pupils; however, round, elliptical and even keyhole-shaped pupils occur in venomous species. Factors Influencing Pupil Shape The presence or absence of venom has no correlation with pupil shape, and venomous snakes feature a variety of pupil shapes. For many years, herpetologists thought that elliptical pupils were an adaptation that allowed snakes to see in the dark. However, a 2010 study by F. Brischoux, L. Pizzatto and R. Shine with the University of Sydney, published in the "Journal of Evolutionary Biology," came to different conclusions. After comparing the pupil shapes, activity patterns, hunting styles and phylogeny of numerous snakes, the researchers demonstrated that ambush hunters typically have vertically elliptical pupils, while actively foraging snakes have round pupils. Vertical pupils afford better vision of animals moving in a horizontal plane at varying distances than round pupils do. Vertical pupils were also correlated with nocturnal species, though not as strongly as with ambush hunting modes. The family Elapidae -- which includes cobras (Naja spp.), mambas (Dendroaspis spp.) and taipans (Oxyuranus spp.) -- includes some of the deadliest snakes on the planet, all of which have round pupils. Additionally, sea snakes (Hydrophiidae) have incredibly powerful venom and round pupils. In North America, where the myth of the correlation between pupil shape and venom may have originated, coral snakes (Micrurus spp.) have round pupils. Elliptical or catlike pupils are common among many different snake lineages; they're common to all vipers and pit vipers, such as copperheads (Agkistrodon contortrix ssp.) and rattlesnakes (Crotalus spp.). Consistent with the conclusions of the 2010 study by Brischoux and colleagues, most vipers and pit vipers are ambush hunters who wait for prey to come to them. Additionally, several rear-fanged colubrids, such as brown tree snakes (Boiga irregularis) and mangrove snakes (Boiga dendrophila), possess vertical, catlike pupils. Vine snakes of genus Ahaetulla have unique pupils among snakes -- they are horizontally elongated and have a complex, keyholelike shape. These are thought to play a role in the focusing abilities of these snakes, which possess uncommonly good binocular vision relative to other snake species. Vine snakes do not possess well-developed fangs like vipers and cobras do; instead, they have enlarged, grooved teeth at the rear of their mouths. Vine snakes primarily use their venom for prey acquisition, and bites do not typically produce serious medical symptoms in humans. - Journal of Evolutionary Biology: Insights Into the Adaptive Significance of Vertical Pupil Shape in Snakes - Neuro Dojo: Snake Eyes - Herps of North Carolina: Frequently Asked Questions - Ecology Asia: Oriental Whip Snake - Savannah River Ecology Laboratory: Copperhead - Savannah River Ecology Laboratory: Coral Snake - Ecology Asia: Gold-Ringed Cat Snake - Boiga Dendrophila
Thyristors are four-layer semiconductor switches with alternating layers of P- and N-type materials. While all thyristors share the same basic structure, the details of their implementation and packaging can be modified to meet the needs of specific applications. This FAQ reviews basic phase control thyristor (PCT) operation, then looks at the use of bidirectional control thyristors (BCTs) and bidirectional phase control thyristors (BiPCTs) in utility-scale power conversion systems, and closes with a look at bypass thyristors (BTs) and asymmetric BTs designed to ensure reliable operation and prevent explosions in high-power modular multilevel converters. PCTs are used as phase-controlled current ‘valves’ for high voltage ac-input power converters, mostly operating at ac line frequencies but sometimes at frequencies up to about 1 kHz. They can be found in power converters, battery chargers, resistance heaters, lighting controls, and industrial motor drives. While they can block very high voltages, they also have very low on-state resistances and can produce high-efficiency converters. PCT-based converters employ phase fired control (PFC), sometimes called phase angle control or phase cutting, to modulate the power through the device. PFC is used with power sources that have modulated waveforms, such as the sinusoidal ac found on utility power grids. That’s in contrast with the pulse width modulation used to control power transfer on dc power buses. To implement PFC, it’s necessary to know the modulation frequency and cycle of the power source. That information makes it possible to switch the thyristor on at the correct point in the cycle to transfer the desired amount of energy. A PFC controller can synchronize with the modulation present at the input. Like a switched-mode power supply buck topology, PFC can only result in a maximum output level equal to the input minus any losses in the conversion process. The proliferation of applications for PCTs has resulted in a wide range of devices optimized for various performance criteria such as low conduction losses, low forward voltage drop, or low stored charge. For example: - PCTs with low conduction losses can be especially useful in crowbars, static switches, and some high voltage power supply designs. - PCTs with low switching losses are suited for bridge rectifiers and high-power drives. - PCTs with low stored charge are designed for higher frequency power conversion applications. Bidirectional phase control thyristors A bidirectional control thyristor (BCT) consists of two thyristors integrated on the same silicon wafer with separate gate contacts. BCTs are designed to replace triacs in high voltage applications. Triacs can be used for voltages up to about 1 kV. Above that level, the required thickness of a triac makes uniform control of the device via a single gate impractical. The design of the gate structure is important in BCTs to achieve fast turn-on and prevent interference between the constituent thyristors. The device needs to be compact, but there needs to be adequate separation between the two thyristors to prevent the combined device from being destroyed by high dV/dt values that can cause uncontrolled triggering after commutation. The bidirectional phase control thyristor (BiPCT) was developed to improve BCTs’ operational characteristics. A BiPCT has two thyristors in an anti-parallel configuration on a single wafer with separate gate terminals for each thyristor (Figure 2).
As with a BCT, one of the gates turns on the current in the forward direction, and the other gate turns on the current in the reverse direction. Among the benefits of BiPCTs compared with BCTs are increased surge current ratings, reduced thermal resistance, and reduced cost through simplified fabrication. In addition to the gate design considerations in a BCT, a BiPCT uses interdigitation of the anode and cathode regions of both anti-parallel connected thyristors. When designing BCTs or BiPCTs, one challenge is to have a performance curve between the on-state voltage (VT) and the recovery charge (Qrr) for each of the integrated devices that is as close as possible to that of a single PCT device. The use of BiPCTs delivers cost advantages in terms of smaller system volume of the active components and smaller snubbers and control circuits. The simplification of the thyristor valve assembly can result in 30% lower costs than discrete PCTs. Since there are fewer components, the reliability of a BiPCT-based thyristor valve assembly should be significantly higher than a similar assembly based on discrete PCTs, assuming that the BiPCT has similar reliability to a PCT. Reliability is a key requirement for utility-scale modular multilevel converters (MMCs). MMCs are expected to continue operation even in the event of the failure of one of the modules. BTs are sacrificial devices and were developed specifically to meet that need. An MMC is expected to provide serial redundancy and have the ability to reliably discharge the energy stored in a cell and short the cell’s terminals in the event of a fault. The stored energy in a high-power MMC is often large enough to rupture the housing of a conventional thyristor, resulting in external arcing, possible capacitor explosions, or ruptures of electrical connections. Thyristor faults are expected in these systems. Before the availability of BTs, the cell terminals were shorted during a fault event using a mechanical switch. A mechanical solution adds to solution size and cost and can be unreliable. Bypass thyristors were developed to provide a lower cost and more robust option. In normal operation, the BT is OFF and does not affect cell operation. The BT is instantly turned on to handle the energy surge when a fault occurs. In an MMC application, when a cell fails, the BT housing will not rupture even when experiencing currents as high as 363 kA and an I²t up to 217 MA²s. After a fault occurs, the BT can continue to operate as a stable short circuit (taking the bad cell out of operation) for over a year until the next scheduled maintenance occurs. At this time, the MMC is de-energized, and the faulty cell module can be replaced. For example, after a fault event, a typical BT can conduct 1,300 A rms with a voltage drop under 1.75 V rms for over a year with no detrimental effects. In addition to optimal device design, packaging is a key to achieving the levels of performance expected from a BT (Figure 3). Maintaining package integrity during high-energy discharges requires the inclusion of extra space inside the cathode pole element. That space provides an expansion volume that decreases internal gas pressure in the device and improves heat dissipation from the plasma during a fault event. In the figure below, the expansion volume (bluish green) is shown in the package’s lid (upper orange area).
In addition, the ceramic package walls are lined with a silicone rubber strip (green) to help prevent the wall from cracking in the presence of excessive plasma that the expansion volume space cannot absorb. Finally, a labyrinth seal between the ceramic wall and the lid provides additional protection to the cathode sealing flange. Asymmetric BTs for IGBTs For most thyristors, VDRM and VRRM, the maximum repetitive peak forward and reverse blocking voltages, respectively, are the same. They are called symmetric devices and are designed for use with normal AC voltages. In asymmetric BTs, VDRM and VRRM are not the same. These devices are designed for higher frequency applications such as voltage sourced multilevel converters (VSMCs) based on IGBT modules. They are designed to withstand fast voltage transients resulting from the IGBT diode’s switching. Asymmetric BTs for use with IGBT-based converters are available with VDRM of 1,000 V, VRRM up to 4,500 V, and current ratings over 3,000 A. The thyristors feature dynamic on-state voltage with turn-on times optimized to divert excessive current from the IGBT diode. Like the BTs used to protect thyristors, these asymmetric BTs support smaller solutions and increased reliability. They also support higher voltage operation in the converter. Thyristors are unidirectional power switches and have been optimized for various applications. PFC is the most common thyristor control technique. It is used to control the phase angle at which the device turns on during an input ac power cycle to modulate the amount of power transferred from the input to the output of a power converter. BCTs and BiPCTs that can turn on in both directions have been developed to replace triacs in high voltage applications that benefit from two-way power flows. BTs operate as sacrificial devices in high-power thyristor-based MMCs to ensure continuous system operation even in the event of a fault in one of the cells, and asymmetric BTs have been developed to serve the same purpose in IGBT-based VSMCs.
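To illustrate the phase-fired control idea described at the start of this article, here is a minimal Python sketch (an illustration under stated assumptions, not vendor code) that numerically estimates the fraction of full-cycle power delivered to a resistive load for a given firing angle, assuming an ideal thyristor that conducts from the firing angle to the end of each half-cycle of a sinusoidal supply.

```python
import math

def pfc_power_fraction(firing_angle_deg: float, samples: int = 100_000) -> float:
    """Fraction of full-sine power delivered to a resistive load when an ideal
    thyristor conducts from the firing angle to the end of each half-cycle.

    Power into a resistive load is proportional to the mean of v(t)^2, so the
    fraction is the integral of sin^2 over the conducting interval divided by
    the integral over the whole half-cycle (midpoint-rule numerical integration).
    """
    alpha = math.radians(firing_angle_deg)
    step = (math.pi - alpha) / samples
    conducted = sum(math.sin(alpha + (i + 0.5) * step) ** 2 for i in range(samples)) * step
    full = math.pi / 2.0  # integral of sin^2 over 0..pi
    return conducted / full

if __name__ == "__main__":
    for angle in (0, 45, 90, 135, 180):
        print(f"firing angle {angle:3d} deg -> {pfc_power_fraction(angle):.2%} of full power")
```

Running the sketch shows the expected behavior: a firing angle of 0 degrees passes essentially all of the available power, 90 degrees passes about half, and the delivered power falls toward zero as the firing point approaches the end of the half-cycle, which is consistent with the article's point that the output can never exceed the input.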
The United States Environmental Protection Agency (EPA) established an Air Quality Index (AQI) to measure air pollutants. A higher AQI, with color codes and corresponding numbers (ranging from 0 to 500), means a greater health concern. (Local AQI information is available on various apps and websites, including www.airnow.gov.) Particle pollution, also known as particulate matter (or PM), is a type of air pollutant made up of tiny particles of solids or liquids suspended in the air. It’s one of the main components of wildfire smoke, which is a mix of gases and fine particles from burning vegetation, as well as building and other materials. Particulate matter includes PM10, inhalable particles that are 10 micrometers and smaller in diameter, and PM2.5, inhalable particles with diameters of 2.5 micrometers and smaller. PM2.5 poses a greater health risk than PM10 because the particles are so small (30 times smaller than the diameter of a human hair) and can get deep into the lungs and bloodstream. Although air pollution is not good for anyone, certain groups are more sensitive to it than others, including those with heart or lung disease, older adults, infants and children, and pregnant women. As the AQI levels increase, the risk of health effects increases, especially among these more sensitive groups. “The advice to limit strenuous activities is because when your respiratory rate is higher, you inhale more particulates,” says Dr. Redlich. When the AQI is 201 and higher, everyone should be concerned about health risks and limit physical activity outdoors as much as possible, she adds. For context, with the recent wildfires in Canada, the PM2.5 AQI climbed above 400 for a brief period in New York City in early June.
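As a simple illustration of how the 0-500 index described above maps to the EPA's standard category labels, here is a minimal Python sketch; the category boundaries are the familiar published AQI ranges, but authoritative, up-to-date health guidance should always come from sources such as airnow.gov rather than a toy lookup like this.

```python
# Minimal sketch mapping an Air Quality Index value (0-500) to its standard
# EPA category label. Health guidance should come from airnow.gov, not this toy.

AQI_CATEGORIES = [
    (50, "Good"),
    (100, "Moderate"),
    (150, "Unhealthy for Sensitive Groups"),
    (200, "Unhealthy"),
    (300, "Very Unhealthy"),
    (500, "Hazardous"),
]

def aqi_category(aqi: int) -> str:
    """Return the EPA category label for an AQI value between 0 and 500."""
    if not 0 <= aqi <= 500:
        raise ValueError("AQI values are reported on a 0-500 scale")
    for upper_bound, label in AQI_CATEGORIES:
        if aqi <= upper_bound:
            return label
    return "Hazardous"  # unreachable given the range check, kept for clarity

if __name__ == "__main__":
    for value in (42, 135, 201, 400):
        print(value, "->", aqi_category(value))
```

For example, the AQI above 400 mentioned for New York City falls in the "Hazardous" band, while the 201-and-higher advice in the article corresponds to the "Very Unhealthy" band and above.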
A spectrophotometer is one of the scientific instruments commonly found in many research and industrial laboratories. Spectrophotometers are used for research in physics, molecular biology, chemistry, and biochemistry labs. Typically, the name refers to Ultraviolet-Visible (UV-Vis) Spectroscopy. The energy of light is dependent on its wavelength, usually designated as lambda. Although the electromagnetic spectrum extends over a tremendous range of wavelengths, most laboratories can only measure a small fraction of them. UV-Vis Spectroscopy measures absorbance between 200 and 400 nanometers (nm) for UV light measurements, and up to approximately 750 nm in the visible spectrum. For UV-Vis Spectroscopy, samples are usually contained and measured in small containers called cuvettes. These can be plastic if used in the visible spectrum, but need to be quartz or fused silica if used for UV measurements. There are some machines that can utilize glass test tubes. Visible Spectroscopy is often used industrially for colorimetry. Using this method, samples are measured at multiple wavelengths from 400-700 nm, and their profiles of absorbance are compared to a standard. This technique is frequently used by textile and ink manufacturers. Other commercial users of UV-Vis Spectroscopy include forensic laboratories and printers. In biological and chemical research, solutions are often quantified by measuring their degree of light absorption at a particular wavelength. A value called the extinction coefficient is used to calculate the concentration of the compound. For instance, molecular biology laboratories use spectrophotometers to measure the concentrations of DNA or RNA samples. They sometimes have an advanced machine called a NanoDrop™ spectrophotometer that uses a fraction of the amount of sample compared to that used by traditional spectrophotometers. For quantification to be valid, the sample must obey the Beer-Lambert Law. This requires that the absorbance be directly proportional to the path length of the cuvette and the concentration of the compound. There are tables of extinction coefficients available for many, but not all, compounds. Many chemical and enzymatic reactions change color over time, and spectrophotometers are very useful for measuring these changes. For instance, the polyphenol oxidase enzymes that cause fruit to brown oxidize solutions of phenolic compounds, changing clear solutions to ones that are visibly colored. Such reactions can be assayed by measuring the increase in absorbance as the color changes. Ideally, the rate of change will be linear, and one can calculate rates from this data. A more advanced spectrophotometer will have a temperature-controlled cuvette holder to carry out the reactions at a precise temperature ideal for the enzyme. Microbiological and molecular biology laboratories frequently use a spectrophotometer to measure the growth of cultures of bacteria. DNA cloning experiments are often done in bacteria, and researchers need to measure the growth stage of the culture to know when to carry out certain procedures. They measure the absorbance, which is known as the optical density (OD), on a spectrophotometer. One can tell from the OD whether the bacteria are actively dividing or whether they are starting to die. Spectrophotometers use a light source to shine an array of wavelengths through a monochromator.
This device then transmits a narrow band of light, and the spectrophotometer compares the light intensity passing through the sample to that passing through a reference compound. For instance, if a compound is dissolved in ethanol, the reference would be ethanol. The difference between the two intensities is displayed as the absorbance of the sample compound. The reason for this absorbance is that both ultraviolet and visible light have enough energy to excite the chemicals to greater energy levels. This excitation results in absorption at particular wavelengths, which appears as peaks when the absorbance is plotted against wavelength. Different molecules or inorganic compounds absorb energy at different wavelengths. Those with maximum absorption in the visible range are seen as colored by the human eye. Solutions of compounds can be clear, but absorb in the UV range. Such compounds usually have double bonds or aromatic rings. Sometimes there are one or more detectable peaks when the degree of absorption is plotted against wavelength. If so, this can aid in the identification of some compounds by comparing the shape of the plot against that of known reference plots. There are two types of UV-Vis spectrophotometer machines, single-beam and double-beam. These differ in how they measure the light intensity between the reference and test sample. Double-beam machines measure the reference and test compound simultaneously, while single-beam machines measure before and after the test compound is added.
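As a worked illustration of the Beer-Lambert law mentioned above (absorbance A = εlc, with ε the extinction coefficient, l the path length, and c the concentration), here is a minimal Python sketch that converts a measured absorbance into a concentration. The numbers in the example are assumptions chosen for illustration: they use the widely quoted rule of thumb that double-stranded DNA at 260 nm gives an absorbance of about 1.0 for 50 µg/mL in a 1 cm cuvette.

```python
def concentration_from_absorbance(absorbance: float,
                                  extinction_coeff: float,
                                  path_length_cm: float = 1.0) -> float:
    """Apply the Beer-Lambert law, A = epsilon * l * c, solved for c.

    Units of the result follow from the units of the extinction coefficient:
    if epsilon is in (ug/mL)^-1 cm^-1, the concentration comes out in ug/mL.
    """
    return absorbance / (extinction_coeff * path_length_cm)

if __name__ == "__main__":
    # Assumption for illustration: double-stranded DNA at 260 nm, where an
    # absorbance of 1.0 in a 1 cm cuvette corresponds to roughly 50 ug/mL,
    # i.e. an effective extinction coefficient of about 0.020 (ug/mL)^-1 cm^-1.
    a260 = 0.75
    eps_dsdna = 0.020
    conc = concentration_from_absorbance(a260, eps_dsdna)
    print(f"A260 = {a260} -> approx. {conc:.0f} ug/mL dsDNA")
```

The same one-line calculation underlies routine DNA, RNA and protein quantification on both conventional and NanoDrop-style instruments, provided the sample stays within the linear range of the Beer-Lambert law.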
Jackson’s chameleon is usually camouflaged green, but can change color to blend in with its habitat. The chameleon is a deadly hunter, but is also a tempting target for many other predators. It maximizes its chances both as hunter and hunted by disappearing from view. Jackson’s chameleon has the remarkable ability to change its body color to blend in with the shades of yellow, green and brown of its habitat. The male possesses three horns that it uses to intimidate rivals during territorial and breeding disputes. The female has only one horn. Each of its eyes lies at the tip of a swiveling turret. The eyes move independently, letting the chameleon see in two directions at once. Its skin cells contain pigments that can be changed at will, allowing the chameleon to turn pale or dark when irritated, green when relaxing or darker to absorb heat. Jackson’s chameleon is a mountain chameleon. Fully adapted to life in the trees, it lives in the wooded valleys of East Africa’s highland areas. Jackson’s chameleon may be a hunter, but it remains vulnerable to other predators, such as birds of prey, because it’s fairly small. Its first line of defense is its color. In its natural state, the chameleon’s green skin blends subtly into its surroundings. The chameleon’s leaflike shape also helps conceal it among the foliage. Jackson’s chameleon walks slowly and deliberately, often with a rocking, jerky motion. It moves through the branches in search of insects, spiders and other prey. The chameleon’s eyes move independently of each other, letting it see in two directions at once. When the chameleon spots prey, both eyes focus on the target. The moment it’s within range of the prey, the chameleon shoots out its incredibly long tongue, snaring its victim on the sticky, mucus-coated tip. Almost as rapidly, the prey is pulled into the mouth. The entire snatch is very fast: the tongue extends and retracts in about a quarter of a second, and the chameleon rarely misses its target. The chameleon’s ability to change color is its most dramatic line of defense. It changes color simply by moving the pigments in its skin, which has many separate layers full of complex color cells. The chameleon changes color by expanding or shrinking the pigment cells and by covering or revealing the reflective cells; a calm chameleon arranges its yellow cells over blue reflecting cells, resulting in a green body color, the color of its habitat. Similar species of chameleons may live in the same habitat, so Jackson’s chameleons have a skin color code to recognize others of their species. In fact, a male may not mate with a female until she displays the correct colors. The chameleon is unusual because it keeps the fertilized eggs inside its body until the embryos are fully developed. Each egg is covered by a thin, sticky membrane, which the female fastens carefully to a branch. Offspring emerge immediately, 2-inch-long replicas of adults, complete with tiny horns. There may be 40 gray, dark green or black offspring with white side markings. Within hours of birth the young chameleons can catch small insects using their long, sticky tongues. One of the greatest threats to chameleons lies in the international pet trade. The other threat is habitat destruction. Across Africa and Madagascar, forests have been destroyed to make way for cultivated crops and development.
Everyone needs sleep. Sleep helps to rejuvenate and heal the body and assists the brain in assessing the actions of the day that has passed. How much should one sleep? March 13 is World Sleep Day and so it is the perfect day to talk about the power of sleep and what happens if we don’t sleep. Did you know that a person can die more rapidly from lack of sleep than from lack of food? Scientists believe that a person can die 3 times quicker from sleep deficiency than from food deprivation. After only 11 days the body shows signs of failure: loss of speech, weakening of eyesight, and deterioration of the ability to think. According to the UK National Health Service this is how much sleep our students should be getting: 5 years night time: 11 hours 6 years night time: 10.75 hours 7 years night time: 10.5 hours 8 years night time: 10.25 hours 9 years night time: 10 hours 10 years night time: 9.75 hours 11 years night time: 9.5 hours 12/13 years night time: 9.25 hours 14 years night time: 9 hours 15 years night time: 8.75 hours 16 years night time: 8.5 hours So how much sleep are your students getting? Scientists state that teenagers need lots of sleep. Teachers have known this for decades! Are your students aware that not getting enough sleep is damaging to their bodies? It might even be a good excuse for failing a test or not knowing an answer! “Sorry Miss. I didn’t sleep well last night.” Your student might be right! Not sleeping means the brain does not have time (and rest) to process what was learnt during the day and so memory is reduced. Lack of sleep is closely tied to depression; many insomniacs are depressed. Sleeplessness is also linked to accidents – you become more accident prone when sleepy. Not only are you more prone to accidents, your body doesn’t heal if it doesn’t sleep and you can even suffer from heart problems in later life as a result. Talk about good and bad activities to do just before going to sleep. Naturally, all activities involving relaxing are good. For example: reading, listening to gentle music or even the CD from the graded text the language teacher has prescribed, watching a relaxing film on TV, having a warm bath, going to bed at the same time every night. On the other hand, activities that involve raucous laughing, running, indeed anything that sends the heart rate up, are deemed not helpful. Remind your students that reading, studying and writing in bed before going to sleep are not to be encouraged as these activities need to be separated from actually sleeping. How many synonyms do your students know for ‘sleep’? There are verbs, nouns, adverbs and/or adjectives: a power sleep, a catnap, to catch 40 winks, to study one’s eyelids from the inside, to snooze, to doze, to kip, to slumber, to nod off, a siesta, a rest, to get (a bit of) shut eye, to catch a few zs, to drowse, to have a beauty sleep, to put one’s head down, to be in the land of Nod, to be in the arms of Morpheus, to be asleep, to sleep like a log. For a more fun type of activity, look at the worksheet where the students have to match the personality types to the basic sleeping position.
They are a rich source of soy and whey proteins, which help build lean muscle mass and maintain healthy bones. What are proteins? Proteins are molecules formed by amino acids that are linked by bonds known as peptide bonds. The order and arrangement of amino acids depend on the genetic code of each person. All proteins are composed of carbon, hydrogen, oxygen and nitrogen, and most also contain sulfur and phosphorus. Proteins account for approximately half the weight of body tissues, and are present in all body cells, in addition to participating in virtually all biological processes that occur. HOW CAN IT HELP YOU? Proteins from foods such as soy or dairy are called "complete" proteins because they contain a balance of all essential amino acids (or building blocks) for muscle growth and maintenance. Your body needs roughly 1 g of protein for every kg of body weight.
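Using the rough 1 g of protein per kg of body weight figure quoted above (recommendations vary with age, health and activity level), a quick calculation shows how a protein target translates into grams and calories; this Python sketch is illustrative only and not dietary advice, and it assumes the standard figure of about 4 kilocalories per gram of protein.

```python
KCAL_PER_GRAM_PROTEIN = 4  # protein provides about 4 kilocalories per gram

def protein_target(body_weight_kg: float, grams_per_kg: float = 1.0) -> tuple[float, float]:
    """Return (grams of protein per day, kilocalories from that protein),
    using the rule-of-thumb grams-per-kilogram figure quoted in the article."""
    grams = body_weight_kg * grams_per_kg
    return grams, grams * KCAL_PER_GRAM_PROTEIN

if __name__ == "__main__":
    grams, kcal = protein_target(70)  # a 70 kg adult, chosen for illustration
    print(f"~{grams:.0f} g protein/day, about {kcal:.0f} kcal")
```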
Localized Antibiotic Administration What is Localized Antibiotic Administration? Antibiotics are drugs which, when prescribed at traditional doses, either kill or prevent reproduction of bacteria. Antibiotics are important in dentistry, because most dental illness (including tooth decay and periodontal disease) is caused by bacteria. Antibiotics are not designed to kill viruses or fungi, although some of those organisms may be susceptible to certain antibiotics. There are many different types of antibiotics, which function in different ways and target specific types of bacteria. For example, the penicillin family of antibiotics share a molecular structure called a beta-lactam ring, which binds to certain bacteria and prevents them from being able to build the cell wall necessary for their survival. Through many years of over-prescription, some bacteria have developed resistance to certain antibiotics (including the penicillin family). To overcome this, new methods of delivering antibiotic drugs only to infected tissue (instead of system-wide) have been developed, along with ways to prevent the undesirable activity of bacteria without killing them outright. Suppressing a certain species of bacteria with antibiotics can allow unsusceptible species to flourish, upsetting the delicate balance that exists among the species that live in our mouths, stomach, and intestines. It can also encourage the targeted bacteria to mutate into forms which are resistant to antibiotic treatment. An example of this newer technology is the use of low-dose doxycycline (20 mg) to suppress the human-generated enzymes that form in response to bacteria and would otherwise destroy the periodontal tissues (gums and bone). At this dose, the bacteria themselves are not killed or prevented from reproducing, but the harmful enzyme activity is halted. Because the balance of bacterial flora is not being altered, it is believed that the drug may be safely taken in pill form without the risk of resistant strains of bacteria developing. This drug is commonly used as a supplement to traditional periodontal treatment to control periodontal disease (periodontitis). Another relatively new method of preventing tissue-destroying bacterial activity around the teeth is to place antibiotics into gum pockets around the teeth in any of several time-release systems that are currently available. This is commonly being done as a supplemental procedure to traditional periodontal treatment, such as scaling and root planing. When scaling and root planing alone does not decrease inflammation of the periodontal structures, treatment of the gum pocket(s) with locally applied, time-release antibiotics may be prescribed. Some dentists prescribe the antibiotics at the time scaling and root planing is performed, believing that those bacteria which are not removed through mechanical debridement will be killed or prevented from multiplying by the drug. Opponents of this concurrent treatment protocol cite it as a sort of belt-and-suspenders overtreatment approach. However, with many studies showing that scaling and root planing alone is not completely effective at removing all of the calculus and diseased cementum, localized antibiotic administration may be a good thing to consider. This is especially true in deeper pockets, where some studies show that scaling and root planing may remove as little as 11% of the calculus!
Localized antibiotic administration may not help individuals with aggressive periodontitis, and has been shown to be most effective in adults with chronic, localized periodontitis. The process of administering Localized Antibiotics Typically, treatment with locally administered antibiotics is done either at the time of, or following scaling and root planing. Before scaling and root planing procedures are done, it is necessary to establish a diagnosis of periodontal disease (periodontitis). On the date of scaling and root planing, your health history will be reviewed. You may require pre-medication with systemic antibiotics if you have prosthetic joints, prosthetic heart valves, or certain types of heart murmurs. Your physician may also recommend pre-medication if you have certain other medical conditions (e.g. transplants). If you take blood thinning medication, your dentist may recommend suspension of the medication prior to scaling and root planing. This should be coordinated with your physician, of course. Due to the potential for discomfort in instrumenting tooth root surfaces, local anesthetic is typically given for scaling and root planing procedures. If local antibiotic treatment is being performed at a subsequent appointment to the scaling and root planing procedures, it is often not necessary to have local anesthetic. There is presently controversy about whether or not use of lasers to clean the gum pockets produces better results than scaling and root planing alone. It is not currently clear whether or not there is any long-term advantage. Many doctors have cited positive results in their own practices, but prospective, randomized, multi-center studies that are the foundation of “good science” are currently lacking. When the gum pockets have been cleaned, administration of locally applied antibiotics can be performed. There are a number of systems available for delivering the antibiotics into the gum pocket, and the process of placing them is different for each. Ask your dental professional which method they recommend for your specific needs. Following treatment with localized antibiotics, your dental professional will often give specific instructions on how best to manage your unique dental situation. Frequently you will be instructed not to floss in the area of application for the period of activity of the drug, which varies by product. There may be other specific instructions. When plaque and calculus removal is deemed to be effective, and no inflammation remains, probing pocket depths will be measured, and an appropriate interval for periodontal maintenance visits will be established.
Topic - The Story of London All children should be able to: Talk about some of the key events of the Great Fire of London. Say why the Great Fire of London spread and eventually stopped. Explain that we know about the Great Fire because of Samuel Pepys' diary. Most children will be able to: Explain how we know about the Great Fire of London from a variety of primary sources. Show awareness of how London has changed, including its buildings, people and transport. Write a report about the Great Fire of London. Some children will be able to: Give reasons why some sources are more useful than others in their historical enquiry. Start questioning the reliability of some historical evidence. Imagine and write about the experiences of people in different historical periods based on factual evidence. - year, century - Britain, London - capital city - past, present, old, modern, change - River Thames - Range of 17th century jobs: chimney sweep, blacksmith, apothecary, rat-catcher, gong farmer, spinster, chandler, scullery maid, carpenter, fire fighter - Samuel Pepys - wind, fire, burning, houses, close together - King Charles II - Lord Mayor - Sir Christopher Wren - St Paul's Cathedral Topic - Everyday Materials - Types of materials: wood, plastic, glass, metal, water, rock, brick, fabric, sand, paper, flour, butter, milk, soil - Properties of materials: hard/soft, stretchy/not stretchy, shiny/dull, rough/smooth, bendy/not bendy, transparent/not transparent, sticky/not sticky - Verbs associated with materials: crumble, squash, bend, stretch, twist - Senses: touch, see, hear, smell and taste
Safety Net - Keeping children safe online Because 90% of what children do online happens outside school, it is vitally important that parents and carers are involved and upskilled if we are to help keep children safe online. Safety Net is a unique educational programme designed to provide teachers with the tools and confidence to engage children, and then their parents and carers, in a concerted effort to encourage safe online practice in the home. Safety Net teacher training – you will receive: • A Home Office PREVENT approved CPD training day that will equip you with the skills, knowledge and confidence to deliver engaging lessons on how to keep children safe online to Key Stage 2 and 3 pupils • An excellent teachers’ resource pack: lesson plans, rationales, delivery options, certificates and many fun activities – saving hours of preparation time • Delivery options to run hard-hitting and well-structured parental workshops • A class set of Safety Net classroom exercise books to capture the learning and development • A class set of Safety Net parental engagement homework books, the unique toolkit that conveys the messages taught in the classroom into the homes. Designed to build digital resilience and open vital communication channels between children, parents and schools, this programme tackles the potential dangers of sharing personal information, sexting, social media, cyberbullying, gaming, grooming, fake news, hate crime and radicalisation. Contact us now to enquire and book onto a future CPD training date near you. The children’s experience BEFORE Safety Net - Nearly all were unclear about what was personal information and what was not, and had shared personal information with others online. They were in general ignorant of the dangers posed by the internet, social media, and online gaming. - Nearly all had already had negative personal experiences online. - Nearly all said that their parents didn’t really check what they were doing online. - A huge number of pupils visit websites or play online games that are NOT age-appropriate. AFTER Safety Net - The process of completing the books and sharing the information at home was enjoyable; they also valued their completion certificates. - They now understand what personal information is, and how important it is not to share it online. They now understand what the words ‘cyberbullying’ and ‘sexting’ mean. - They now know that not everyone online is who they say they are, and evil people do exist. - They now understand the dangers of strangers online, and playing age inappropriate games. - They know what ‘grooming’ is, and how online methods can be used to exploit people in relation to child abuse, gangs, and radicalisation. - They are now aware that they should always feel able to speak to a trusted adult, without fear of someone getting cross with them, if they feel uncomfortable about something they have encountered. Aims – Keeping Children Safe Online SKIPS Safety Net programme communicates key messages and raises awareness of: • the potential dangers of sharing personal information • social media • sexting • blackmail • cyberbullying • gaming • online grooming • fake news • hate crime • radicalisation • child sexual exploitation - We explore the potential risks of sharing too much personal information and who we share it with on social media - We look at what sexting is, when it becomes a criminal offence, the damage it can do long term and how it can be used for blackmail.
- We discuss cyberbullying, trolling and the devastating effects they have on children and what can be done to stop them - We look at the games children are playing online, how to get involved, set parental controls and talk about their use - We look at how fake news and hacking can be used to trick people into believing lies and making the wrong decisions - We raise awareness of the types of methods and tricks people use online to groom for radicalisation, commit hate crimes and sexual exploitation. At the end of the session teachers will have a greater understanding of how they can help to keep children safe online. The SKIPS Safety Net parental engagement books are the toolkits that convey the learning from the classroom into the home and encourage open discussions between teachers, pupils and parents on: - the dangers of sharing too much personal information online and on social media - what sexting is, when it is a criminal offence, the long-term risks and blackmailing - what cyberbullying and trolling are and how to stop them - the dangers of playing age inappropriate online games, the damaging content and parental safety settings - how not to fall victim to fake news and how to safeguard against hacking - understanding and recognising the processes used by online groomers and how to safeguard against radicalisation, hate crime and sexual exploitation School testimonial disclosures stimulated by Safety Net Apart from the general and intended benefits of this programme, the gaining of disclosures is a significant plus. “The internet safety training has given me the confidence to not only teach the children how to be safe, but also to run parent safety workshops…” St. Clements Academy
When discussing diet and nutrition, we often fail to address the foundation of it all: nutrients! There are six classes, consisting of protein, carbohydrates, fat, water, vitamins, and minerals. Protein - A macronutrient that provides 4 kilocalories per gram. - Often referred to as the building block of the body. - Protein is essential for the development and repair of body tissues, hormones, enzymes, and the maintenance of the immune system (Washington State University). - Protein should account for 10-35% of our daily energy intake. - Lean protein options include nonfat Greek yogurt, chicken breast, shrimp, whitefish, lean cuts of beef, low-fat cheese, etc. Carbohydrates - A macronutrient that provides 4 kilocalories per gram. - They function as the best source of fuel for the brain and body during high intensity exercise. - They also serve as a source of fiber, which can lower cholesterol and blood sugar and prevent constipation (National Institutes of Health). - Carbohydrates should account for 45-65% of one’s diet. - Healthy sources include rice, oats, beans, lentils, potatoes, and chickpeas. Fat - A macronutrient that is more energy dense than the others, providing 9 kilocalories per gram! - Its functions include being an energy source, regulating body temperature, and acting as a shock absorber for our organs (Washington State University). - Fats should account for about 20-25% of one’s diet. - Healthy sources include avocados, olive oil, almonds, eggs, salmon, and walnuts. Vitamins and Minerals - Referred to as micronutrients because we need them in small amounts. - These are found in whole foods, fruits, and vegetables. - Although they do not provide our bodies with energy, they allow for proper functioning of the body! Water - Water is a nutrient that makes up a significant amount of our body composition! - It is necessary for cell structure maintenance and body temperature regulation (through sweat). - It protects our brain and spinal cord from trauma, and lubricates our joints (USGS). Males require about 3 liters of water per day, while females require about 2.2 liters per day!
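To see how the kilocalorie-per-gram figures and percentage ranges above combine in practice, here is a minimal Python sketch that converts a daily calorie target and a chosen macronutrient split into grams of each macronutrient; the 2,000 kcal target and the 50/25/25 split are assumptions chosen for illustration and happen to sit inside the ranges quoted above.

```python
# Kilocalories per gram, as listed above.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def macro_grams(daily_kcal: float, split: dict[str, float]) -> dict[str, float]:
    """Convert a calorie target and macro split (fractions summing to 1.0)
    into grams of each macronutrient."""
    if abs(sum(split.values()) - 1.0) > 1e-6:
        raise ValueError("macro fractions must sum to 1.0")
    return {name: daily_kcal * fraction / KCAL_PER_GRAM[name]
            for name, fraction in split.items()}

if __name__ == "__main__":
    # Illustrative 2,000 kcal day: 50% carbohydrate, 25% protein, 25% fat.
    example_split = {"carbohydrate": 0.50, "protein": 0.25, "fat": 0.25}
    for name, grams in macro_grams(2000, example_split).items():
        print(f"{name}: {grams:.0f} g")
```

Note how the higher energy density of fat (9 kcal/g versus 4 kcal/g) means the same share of calories corresponds to far fewer grams of fat than of carbohydrate or protein.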
High resolution NMR spectra are routinely obtained today, but there are various requirements which must be fulfilled to produce high quality spectra from which a great deal of useful data can be obtained. Today, most NMR is of the pulse variety, in which all the nuclei in the sample are placed in a static magnetic field and are then excited by a pulse of electromagnetic radiation. The emissions as they relax back to their equilibrium state are then monitored. The magnetic fields used in NMR are generally very strong (2 – 12 Tesla or more), and the stronger the field the better quality the spectrum obtained. There are several reasons for this: Firstly, the patterns of multiplets (the components into which a resonance peak may be split by spin-spin coupling) can be resolved much more clearly. This is because the difference between the chemical shifts at which nuclei resonate is proportional to the field strength, but the spin-spin coupling constant is independent of the field strength. Thus higher strength fields reduce overlap between different multiplets. Secondly, higher field strength will increase signal intensity. It is generally the case that in NMR spectroscopy signal intensities are very low, as they are proportional to the difference in population between the two states. This difference is usually small compared to the thermal energy kT, and thus promotion from the lower to the upper level occurs readily at room temperature. The use of strong magnetic fields increases the energy separation between the two states, as the energies of the states are proportional to the field strength. This makes the population imbalance larger and the signal stronger . (This is also one reason why the proton is such a favoured nucleus in NMR. The energy levels are also proportional to the magnetogyric ratio of the nucleus being studied, and thus the larger the magnetogyric ratio, the larger the gap between energy levels and the larger the population imbalance, so the stronger the signal. The magnetogyric ratio is a fixed property of a given nucleus, but the proton happens to have the second largest value of γ of any nucleus. This makes it ideal for NMR studies. Another useful property of the proton as an NMR nucleus is its high natural abundance, which contributes to the strong NMR signal and also means that coupling between 1H nuclei can be observed in the spectrum. This increases the amount of structural data that interpretation of the spectrum can yield.) It is also important that the magnetic field be homogeneous (of equal strength at all points throughout the sample), or otherwise signals would become blurred as equivalent nuclei throughout the sample would resonate at slightly different frequencies. Homogeneity is achieved by constructing the magnets used for the apparatus in such a way that they generate opposing field gradients which cancel each other out. The sample is usually also spun rapidly. This means that the nuclei pass rapidly throughout any remaining field gradients, with the effect that they resonate at an average frequency in the spectrum. In practice, the technique of two-dimensional NMR is a very important refinement of the basic NMR procedure. The details of this need not concern us; suffice it to say that it is possible to separate the effects of spin-spin coupling and the chemical shift in a spectrum, and the results can be displayed along two different axes. This can greatly simplify the appearance and interpretation of the spectrum. 
The general appearance of such a spectrum is as follows: The numbers on the axes give the chemical shift, δ, in ppm. The contours that lie on the diagonal represent the normal peaks in a 1D spectrum. Thus for the spectrum shown above, there would be three different resonant peaks in the one dimensional spectrum, that correspond to three different magnetic environments (A, B and C). Some very useful information is given by the off-diagonal peaks, as they indicate that the protons lying on the same horizontal and vertical lines are spin coupled. Thus in the above diagram, we can immediately state that A and B nuclei are coupled together, as are the A and C nuclei. The B and C nuclei are not spin-coupled. Although this example is quite a trivial one, it should be apparent how useful this technique might be in the interpretation of more complex molecules.
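To put numbers on the field-strength argument made earlier (a stronger field gives a larger energy separation, hence a larger population imbalance and a stronger signal), here is a minimal Python sketch that computes the 1H Larmor frequency and the Boltzmann population ratio at room temperature. The 11.7 T field is an assumption chosen because it corresponds to a typical "500 MHz" spectrometer; the constants are standard physical values.

```python
import math

# Physical constants (SI)
H_PLANCK = 6.62607015e-34      # J s
K_BOLTZMANN = 1.380649e-23     # J/K
GAMMA_1H = 2.6752218744e8      # rad s^-1 T^-1, proton magnetogyric ratio

def larmor_frequency_hz(b_field_tesla: float, gamma: float = GAMMA_1H) -> float:
    """Resonance (Larmor) frequency: nu = gamma * B / (2 pi)."""
    return gamma * b_field_tesla / (2 * math.pi)

def population_ratio(b_field_tesla: float, temperature_k: float = 298.0) -> float:
    """Boltzmann ratio N_upper / N_lower for a spin-1/2 nucleus,
    exp(-delta_E / kT) with delta_E = h * nu."""
    delta_e = H_PLANCK * larmor_frequency_hz(b_field_tesla)
    return math.exp(-delta_e / (K_BOLTZMANN * temperature_k))

if __name__ == "__main__":
    b = 11.7  # tesla; roughly a "500 MHz" spectrometer (assumption for illustration)
    nu = larmor_frequency_hz(b)
    ratio = population_ratio(b)
    print(f"1H Larmor frequency at {b} T: {nu / 1e6:.0f} MHz")
    print(f"Upper/lower population ratio at 298 K: {ratio:.6f}")
    print(f"Fractional excess in the lower state: {1 - ratio:.2e}")
```

The fractional excess works out to only a few parts in 100,000, which is why NMR signals are intrinsically weak and why both higher fields and nuclei with large magnetogyric ratios (such as the proton) are so advantageous.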
The passage of time has always been a preoccupation of human beings, whether it be a question of satisfying basic needs such as when to eat and sleep, the importance of seasons for migratory and agricultural purposes or a more sophisticated measuring of time into defined periods of weeks, days and hours. Using Celestial Bodies The earliest method of measuring time was through observation of the celestial bodies - the sun, moon, stars and the five planets known in antiquity. The rising and setting of the sun, the solstices, phases of the moon, and the position of particular stars and constellations have been used in all ancient civilizations to demarcate particular activities. For example, Egyptian and Minoan buildings were often constructed in orientation to the rising sun or aligned to observe particular stars. Some of our earliest texts such as those by Homer and Hesiod around the 8th century BCE describe the use of stars to specifically determine the best periods to sail and farm, advice which remains valid today. Star calendars were created in the Near East, and Greek calendars were likely based on the phases of the moon. The Greek Parapegmata from the 5th century BCE, attributed to Meton and Euctmon, was used to map a star calendar and a calendar of festivals linked to astronomical observations survives in an Egyptian papyrus from Hibeh dated to around 300 BCE. The celebrated Antikythera Mechanism, dated to the mid-1st century BCE and found in an Aegean shipwreck, is a sophisticated device which, through a complicated arrangement of wheels and gears, demonstrated and measured the movement of celestial bodies, including eclipses. The sun continued to be the primary source of time measurement throughout the Classical Period. Indeed, sunrise and sunset determined the sessions of both the ancient Assembly of Athens and the Roman Senate, and in the latter, decrees decided after sunset were not deemed valid. Early sundials merely indicated months but later efforts attempted to break the day into regular units and indicate the twelve hours of the day and night first invented by the Egyptians and Babylonians. The origins of the half-hour measurement are unclear but it is mentioned in a 4th-century BCE Greek comedy by Menander and so must have been commonly used. The earliest surviving sundial dates from Delos in the 3rd century BCE. From Hellenistic times the measurement of time became ever more precise and sundials became more accurate as a result of a greater understanding of angles and the effect of changing locations, in particular latitude. Sundials came in one of four types: hemispherical, cylindrical, conical, and planar (horizontal and vertical) and were usually made in stone with a concave surface marked out. A gnomon cast a shadow on the surface of the dial or more rarely, the sun shone through a hole and so created a spot on the dial. In the Roman Empire, portable sundials became popular, some with changeable discs to compensate for changes in location. Public sundials were present in all major towns and their popularity is evidenced not only in archaeological finds - 25 from Delos and 35 from Pompeii alone - but also in references in Greek drama and Roman literature. There is even a famous joke on the subject attributed to Emperor Trajan, who, when noticing the size of someone’s nose, quipped: "If you put your nose facing the sun and open your mouth wide, you’ll show all the passerby the time of day" (Anthologia Palatina 11.418). By Late Antiquity (c. 
400 to 600 CE) highly sophisticated portable sundials were produced which could be adjusted to as many as 16 different locations. Time measuring devices were also invented which used water. Perhaps evolving from earlier oil lamps, which were known to burn for a set period of time with a defined quantity of oil, the early so-called water-clocks released a specified quantity of water from one container to another, taking a particular time to do so. Perhaps the earliest came from Egypt around 1600 BCE, although they may have borrowed the idea from the Babylonians. The Greeks used such a device (a klepsydra) in Athenian law courts and it determined how long a single speech could last: approximately six minutes. The Greek and Roman army also used water-clocks to measure shift-work, for example, night watches. More sophisticated water-clocks were developed which poured water into the device thereby raising a floating drum and consequently turning a cog whose regulated movement could be measured. The first such clocks are attributed to Ctesibius around 280 BCE and Archimedes is largely credited with developing the device to achieve greater accuracy. Large public water-clocks were also common and often measured a whole day, for example in the 4th century BCE agora of Athens there was such a clock which contained 1000 litres of water. The 2nd-century BCE Tower of the Winds in Athens, built by Andronicus, also contained a large water-clock and no less than nine sundials on its outer walls.
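As an aside to the point above about sundial accuracy depending on angles and latitude, the standard construction for a horizontal dial can be written down in a few lines. This is a modern formula offered purely as illustration, with the latitude of Athens assumed; it is not a claim about how ancient makers actually computed their dials.

```python
# Illustrative only: hour-line angles for a horizontal sundial, showing how
# latitude enters the geometry. For a horizontal dial,
#     tan(theta) = sin(latitude) * tan(hour_angle),
# where the hour angle advances 15 degrees per hour from solar noon.
import math

latitude = math.radians(38.0)   # roughly the latitude of Athens (assumed)

for hour in range(1, 7):        # 1 to 6 hours before or after noon
    H = math.radians(15 * hour)
    theta = math.degrees(math.atan(math.sin(latitude) * math.tan(H)))
    print(f"{hour} h from noon: hour line {theta:5.1f} deg from the noon line")
```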
LANGUAGE-BASED LEARNING DIFFICULTIES In simple terms... Language is how you develop ideas and formulate sentences Language based learning difficulties often result in difficulties with the school curriculum. Children with these difficulties show deficits in one or all of the following areas: speaking, understanding, reading and writing. In speaking, children have difficulty finding the right word, expanding vocabulary as new topics are introduced, formulating good sentences, sequencing events and retelling stories. In understanding, they may have difficulty following instructions, understanding jokes and sarcasm and understanding grammar. In reading, they may have difficulty decoding words, comprehending stories, making predictions and inferences. In writing, they may have difficulty putting their thoughts on paper, using spelling and grammar correctly and linking paragraphs. Young children may have difficulty acquiring speech and language skills as listed above. Mistakes can often be part of this process. Some children will correct their mistakes over time, while others will continue to struggle. If this is the case, your child may benefit from speech and language therapy. A speech-language pathologist will develop a comprehensive plan of treatment to develop core language based skills and address the interaction between language based skills and cognition.
What is Mindfulness? “What day is it?” asked Pooh. “It’s today,” squeaked Piglet. “My favourite day,” said Pooh. – A A Milne Mindfulness is the psychological process of bringing one’s attention to experiences occurring in the present moment. While many people think mindfulness means meditation, this is not the case. Mindfulness is a mental state of openness, awareness and flexible attention. Mindfulness meditation is just one way among many of learning to cultivate this state. There are many mindfulness-based therapeutic approaches, such as Acceptance and Commitment Therapy (ACT), that help people experiencing a variety of psychological conditions. Mindfulness is recognised as effective in reducing rumination, worry and stress, increasing fulfilment, raising self-awareness, enhancing emotional intelligence, and undermining destructive emotive, cognitive and behavioural processes. Noticing and accepting difficult thoughts and emotions, rather than struggling against them, can help us to alleviate and address them more effectively.
Children need much more than the core subject areas and basic academic reading and writing skills. The visual arts also play a vital role in a child’s education. In the Schalmont Central School District, the K-12 Art Department provides a multitude of opportunities for children to be creative and express themselves through different kinds of art medium. Classroom experiences in the visual arts enable students to create new knowledge through visual form and become more well-rounded individuals during their school years and beyond. Art in Schalmont Elementary Art Curriculum Students in Kindergarten through fifth grade will explore various materials and techniques; which will enable students to actively engage in the creative art making process. Throughout each year students will be introduced to a range of artists, art movements, and key moments in history based on their grade level. Students will experiment and become familiar with key art elements and principles to build a strong foundation for their future years. Primarily in the younger grades, additional time will be spent focusing on fine motor skills and experimentation with hands-on activities. Each artwork will give students ample opportunities to make choices, think critically, problem solve, and enjoy various fun experiences that art allows. Students will also be encouraged to take risks, make mistakes, and learn from them in order to build determination and confidence. All lessons are created with the students in mind, and they are based upon the New York State Visual Art Standards. Many lessons also incorporate various New York State English Language Art Standards and other disciplinary elements Middle School Art Curriculum Students attend art class everyday for ten weeks each year of sixth, seventh, and eighth grade. Each consecutive year reinforces/refines and builds upon prior studio art experience. Students use a wide variety of materials to create two and three dimensional projects with constant reference to examination and understanding of the elements and principles of design. Artwork produced by students are displayed in the hallways around the building, on a gallery wall in the library and regional art shows. Students also create a portfolio of work available on the online student art gallery Artsonia.com. Students are regularly exposed to works of both historical and contemporary artists, art movements and ideas, and how the work they do in the classroom relates (has relevance) to the larger art world of ideas and images. Verbal and written art criticisms are conducted to describe, analyze, interpret and evaluate a variety of artworks. High School Art At Schalmont High School, courses are available for students with an interest in majoring in art, as well as for those students desiring to take one or two introductory courses in the field. New York State Visual Arts Standards 1. Generate and conceptualize artistic ideas and work. 2. Organize and develop artistic ideas and work. 3. Refine and complete artistic ideas and work. 4. Analyze, interpret and select artistic work for presentation. 5. Develop and refine artistic techniques and work for presentation. 6. Convey meaning through the presentation of artistic work. 7. Perceive and analyze artistic work. 8. Interpret intent and meaning in artistic work. 9. Apply criteria to evaluate artistic work. 10. Synthesize and relate knowledge and personal experiences to make art. 11. 
Relate artistic ideas and works with societal, cultural and historical context to deepen understanding.
Our research aims to understand how humans and other mammals co-exist with enormous numbers of microbes in the intestine. We live with enormous numbers of microbes on our body surfaces: these are collectively termed our microbiota. The intestine is a special surface, because it is a continuous tube from the mouth to the anus, which digests the food we eat and absorbs our intestinal secretions and the fluid that we drink. The lower intestine has the highest concentration of microbes anywhere – equal approximately to the number of cells in our bodies. These microbes are not only passengers that profit from the warm environment, living off the food that we cannot digest, but they rather freely exchange chemicals with our bodies. Although the biomass of the microbiota is mainly restricted to our body surfaces, except when we suffer from an infection, the chemical exchange means that all our organ systems are shaped by the presence of these microbes. Many conditions, including inflammatory bowel disease (Crohn’s disease and ulcerative colitis), forms of arthritis, allergy, diabetes, liver disease and obesity are associated with alterations in the microbiota or defects in how our bodies adapt to their presence: our long-term goal is to provide a fundamental understanding of these events to be able to provide new treatments for patients with these conditions. Part of our research is directly in human patients, for example to study the composition and chemical exchange of the microbiota in patients with inflammatory bowel disease and to determine how this affects the course of the disease and its response to therapy. Although we are born almost sterile, body surfaces become rapidly colonized after birth – once this has happened there is no way back to sterility because antibiotics will reduce but not eliminate the microbes. One way around this problem is to study mice that are kept under sterile conditions without a microbiota. These mice are perfectly healthy: it is possible to introduce a microbiota, to follow the changes that take place after colonization of body surfaces with microbes and with exchange of their metabolic (chemical) products. In mice, it is also possible to ensure that every animal has essentially the same composition of bacteria in the microbiota, rather than the differences from individual to individual in the human population. This allows us to model the fundamental chemical interactions that shape the body and its susceptibility to disease.
If we were to make telescopes with Fresnel lenses instead of regular lenses, would it be more practical? There is a Wikipedia article about using Fresnel zone plates successfully in the visible spectrum. A paper on the subject, "First high dynamic range and high resolution images of the sky obtained with a diffractive Fresnel array telescope", was written by Laurent Koechlin et al. in 2011: "... this concept is well fitted for space missions in any spectral domain, from far UV (100 nm) to IR (25 μm). However, the advantages of Fresnel arrays become outstanding in the UV domain, for which a 6 m size aperture yields milliarcseconds angular resolutions with a very rough manufacturing precision compared to standard optics: typically 50 μm compared to 10 nm." From Laurent Koechlin's paper: "Fig. 1 Top: Fresnel array, close view on the central zones. Bottom: sketch of our prototype, not to scale for clarity. On the real prototype, the Fresnel lens and the achromat doublet are much smaller than the entrance Fresnel array. The distance between the Fresnel array and the field optics is 18 m. The rest of the light path is short (2 m). The zero-order mask blocks the light that has not been focused by the Fresnel array: all diffraction orders are blocked, except one. The achromat forms the final image after chromatic correction by the secondary Fresnel lens." Another flat lens is the metalens, which uses structures on a flat surface to focus each frequency individually, but "Blade Optics" (prisms) might prove to be the most practical; more on that below. Would this also allow for much bigger telescopes to be made? Yes, they will be bigger, but not in a good way: "Fresnel arrays of 6m and larger have focal lengths of a few kilometers in the UV; they will require two satellites flying in formation around Lagrangian point L2, but with tolerant positioning in translation (a few centimeters)." With modern technology that isn't considered completely impractical. "Blade Optics" have both supporters and naysayers. They have been tested by RASC and a few patents (US 2017 / 0307864 A1) have been issued. The principle is simple: NexOptix has a working prototype which is far shorter than an equivalent-focal-length refractor (straight tube), Newtonian (right-angle eyepiece) or even Schmidt-Cassegrain (folded) design. That prototype squeezes 146 cm (57.48 inches) of focal length into just 12.7 cm (5 inches), though the other dimensions increase (but not proportionally).
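To see roughly where the quoted figures come from, here is a short Python sketch using the standard first-order zone-plate relations. The aperture and wavelength echo the paper's far-UV example, but the zone count is an assumed value chosen only to illustrate the kilometre-scale focal lengths and milliarcsecond resolutions mentioned above; it is not taken from the paper.

```python
# Rough sketch of the first-order scaling behind the quoted figures.
# For a Fresnel zone plate/array the zone radii obey r_n^2 ≈ n * lam * f, so the
# first-order focal length of an array with N zones and outer radius R is
#     f ≈ R**2 / (N * lam)
import math

D = 6.0            # aperture in metres, as in the quoted far-UV example
R = D / 2.0
lam = 100e-9       # far-UV wavelength in metres (100 nm)
N = 30_000         # assumed number of Fresnel zones (illustrative only)

f = R**2 / (N * lam)                      # first-order focal length
theta = 1.22 * lam / D                    # diffraction-limited resolution (rad)
mas = math.degrees(theta) * 3600 * 1000   # radians -> milliarcseconds

print(f"focal length ≈ {f / 1000:.1f} km")   # kilometre scale, hence formation flying
print(f"resolution   ≈ {mas:.1f} milliarcseconds")
```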
Did the universe start with a Big Bang? No. This is one of the most misleading scientific terms out there. It wasn't Big (at the start) and it wasn't a Bang. But the Big Bang Theory is the leading explanation for how our universe began. What is the Big Bang Theory? Under Big Bang Theory, the universe began around 14 billion years ago. At that time, the entire universe was squashed inside a single point called a singularity. This singularity was infinitely small and infinitely dense. Then the singularity inflated. It inflated with such incredible speed that the universe went from existing as a singularity to creating particles such as protons and neutrons in just one second. Over the next 14 billion years this singularity expanded to form the universe as we see it today. Explaining this expansion is tricky. It's like the galaxies are currants in a lump of dough. When you bake the dough, the currants move further apart as the dough (space) expands. Let's look at a Lego example. Ironman is baking Batman a birthday cake with yellow currants and black dough. This expansion would look a little something like this: As a side note, Ironman sucks at baking. How do we know the Big Bang theory is right? We don't. But there are several key pieces of evidence that the Big Bang theory is correct. First, there's something called the Cosmic Microwave Background Radiation. This is the afterglow of the expansion of the early universe where microwaves are coming from every direction in space. The Big Bang Theory is the only theory to explain its presence. Second, the further away a galaxy is from us, the faster it is moving away from us. Everything is moving away from everything else. If we went back in time everything would eventually be squashed together into one incredibly dense point - or a singularity. This is one of the key concepts of the Big Bang Theory. Also, the more distant a galaxy is, the longer its light takes to reach us (because the speed of light is finite). When we look at these distant (seemingly younger) galaxies, they look very different from closer (seemingly older) galaxies. This suggests the universe is evolving - which does not match one of the leading contenders to Big Bang Theory (Steady State Theory). But this evolution of galaxies does match the Big Bang Theory. Finally, the observed proportions of elements such as hydrogen, helium and lithium in the universe match the predictions of the Big Bang Theory. So, there we go. The Big Bang is less of a cosmic explosion, and more of an "Everywhere Stretch". But I suppose that's not as catchy a name, is it? Extra reading and watching Professor Stephen Hawking's 1996 lecture "The Beginning of Time" describes the Big Bang in more detail, as does this article from CERN and here's another more summary of the Big Bang from Space.com. If you're interested in the Big Bang alternatives out there, click here. Finally, here's a video explaining the basis of the Big Bang: What is Sunday Science? Hello. I’m the freelance writer who gets tech. I have two degrees in Physics and, during my studies, I became increasingly frustrated with the complicated language used to describe some outstanding scientific principles. Language should aid our understanding — in science, it often feels like a barrier. So, I want to simplify these science sayings and this blog series “Sunday Science” gives a quick, no-nonsense definition of the complex-sounding scientific terms you often hear, but may not completely understand. 
If there’s a scientific term or topic you’d like me to tackle in my next post, fire an email to [email protected] or leave a comment below. If you want to sign up to our weekly newsletter, click here.
Electric torque is undoubtedly one of the main characteristics of an electric motor. Knowing how to calculate the torque of a motor is essential to perform the correct sizing of the electric motor. In this article, the World of Electrical explains what torque is and how to calculate the electrical torque of a motor. Come on! What is torque? Torque, also known as moment of force, is a force applied to a body that tends to rotate that body. Torque is a vector quantity, that is, it has magnitude, direction and sense, and it can be calculated by means of the vector product between force and distance. According to the International System of Units, the unit of measurement for torque is the newton-meter (N·m). We need to understand that whenever a force is applied at some distance from the rotation axis of a body, that body will be subject to rotation; that is, whenever we want to make a body rotate around some point, we must exert a torque on that body. Torque x Work When we talk about the electric torque of a motor, it is important to highlight that the motor, in addition to producing torque, also performs work. Although the formulas and units look similar, torque and work are different quantities! The main difference is that the torque on the motor exists even if the motor is not running. Torque x Power It is important to note that torque and power are not the same; in fact, torque and power are different and complementary quantities. A good example is the power generated by an electric motor, because the mechanical power is directly proportional to the torque delivered at a given rotation speed. For the same speed of rotation, the greater the power, the greater the torque supplied. Torque x Speed Torque and speed are completely different quantities, but they are related. A good example is speed reducers, which through gear systems have the ability to increase torque and reduce rotation speed. The torque variation caused by a speed reducer is inversely proportional to the variation it causes in the rotation speed. The greater the reduction in speed, the greater the resulting torque; that is, the lower the output speed, the greater the torque available at the output shaft. In this case, it is very important to note that the electrical power does not increase because of the speed reducer; in fact, the opposite occurs, as the power suffers a reduction or loss, which is determined by the internal efficiency of the reducer. Reducers that have a higher efficiency are able to convert more of the power to which they are subjected into torque, considering the same reduction ratio. Torque: How to calculate? As mentioned earlier, torque is the product of force and distance. To illustrate, we will calculate the torque of an electric induction motor capable of producing a force of 80 N at a distance of 0.25 m from the center of its shaft. In this case, the calculation is very simple! As we already have all the values needed to make the calculation, we just substitute them directly into the torque formula. After substituting the values, we just multiply, obtaining a torque of 20 N·m, as shown in the example below. If you want to learn much more about electric motors, we have below a video from the Mundo da Elétrica channel that explains in detail what an electric motor is, the characteristics of an electric motor and how an electric motor works. It is worth checking out!
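Here is a minimal Python version of the worked example above. The first part reproduces the 80 N at 0.25 m calculation; the second part is an illustrative extension only, with an assumed speed of 1750 rpm that does not come from the article.

```python
# Worked example from the text: a force of 80 N applied 0.25 m from the shaft axis.
# Torque is the product of force and lever-arm distance (assumed perpendicular):
#     tau = F * d
import math

F = 80.0    # force in newtons
d = 0.25    # distance from the axis in metres

tau = F * d
print(f"torque = {tau:.1f} N·m")           # 20.0 N·m

# Illustrative extension (assumed speed, not from the article): the same torque
# delivered at 1750 rpm corresponds to a mechanical power of P = tau * omega.
rpm = 1750.0
omega = rpm * 2 * math.pi / 60             # angular speed in rad/s
print(f"power  ≈ {tau * omega / 1000:.2f} kW")
```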
To the untrained eye, it looks like a flower crudely etched into rock — as if a child had scratched a picture of a bloom. But to the late fossil hunter Lloyd Gunther, the tulip shape he unearthed at Antimony Canyon in northern Utah looked like the remnant of an ancient marine animal. Years ago, Gunther collected the rock and later gave it to researchers at the University of Kansas’ Biodiversity Institute — just one among thousands of such fossils he donated to the institute over the years. But this find was the only fossilized specimen of a species previously unknown to science — an “obscure” stalked filter feeder. It has just been detailed for the first time in a paper appearing in the Journal of Paleontology. “This was the earliest specimen of a stalked filter feeder that has been found in North America,” said lead author Julien Kimmig, collections manager for Invertebrate Paleontology at the Biodiversity Institute. “This animal lived in soft sediment and anchored into the sediment. The upper part of the tulip was the organism itself. It had a stem attached to the ground and an upper part, called the calyx, that had everything from the digestive tract to the feeding mechanism. It was fairly primitive and weird.” Kimmig researches the taxonomy, stratigraphy and paleoecology of the Cambrian Spence Shale found in Utah and Idaho, where Gunther found the obscure filter feeder. “The Spence Shale gives us soft-tissue preservation, so we get a much more complete biota in these environments,” he said. “This gives us a better idea of what the early world was like in the Cambrian. It’s amazing to see what groups of animals had already appeared over 500 million years ago, like arthropods, worms, the first vertebrate animals — nearly every animal that we have around today has a relative that already lived during those times in the Cambrian.” In honor of fossil hunter Gunther, a preeminent collector who performed fieldwork from the 1930s to the 2000s, Kimmig and Biodiversity Institute colleagues Luke Strotz and Bruce Lieberman named the newly described species Siphusauctum lloydguntheri. The stalked filter feeder is just the second animal placed within its genus, and the first Siphusauctum to be discovered outside the Burgess Shale, a fossil-rich deposit in the Canadian Rockies. “What these animals were doing was filtering water to get food, like micro-plankton,” Kimmig said. “The thing is, where this one was located we only found a single specimen over a period of 60 years of collecting in the area.” Kimmig said it isn’t yet known if the newly discovered stalked filter feeder lived a highly solitary life or if it drifted off from a community of similar animals. “It’s hard to tell from a single specimen,” he said. “There were algae found right next to it, so it likely was transported there. The algae found with it were planktonic algae that were floating themselves. It could have fallen just next to it — but that would be a big coincidence — so that’s why we’re thinking it came loose from somewhere else and got mixed in with the algae.” Kimmig and his KU colleagues say the newly described specimen varies in key areas from similar known species of stalked filter feeders from the Cambrian. “There are several differences in how the animal looked,” Kimmig said. “If you look at the digestive tract preserved in this specimen, the lower digestive tract is closer to the base of the animal compared to other animals. 
The calyx is very slim — it looks like a white wine glass, whereas in other species it looks like a big goblet. What we don’t have in this specimen that the others have are big branches for filter feeding. We don’t know if those weren’t preserved or if this one didn’t have them.” According to the researchers, there are no species alive today that claim lineage to Siphusauctum lloydguntheri. But Kimmig said there were a few contemporary examples that share similarities. “The closest thing to the lifestyle — but not a relative — would be crinoids, commonly called sea lilies,” he said. “Unfortunately, there’s likely not a relative of Siphusauctum in the world anymore. We have thousands of similar fossil specimens in the Burgess Shale, but it’s hard to identify what these animals actually were. It might be possibly related to contemporary entoprocts, which are a lot smaller than this one — but it’s hard to tell if they’re related at all.” Ultimately, the mysterious stalked filter feeder is a reminder of the strange and vast arc of evolution where species continuously come and go, according to Kimmig. “It is enigmatic because we don’t have anything living that is exactly like it,” he said. “What is fascinating about this animal is we can clearly relate it to animals existing in the Cambrian and then we just don’t find it anymore. It’s just fascinating to see how evolution works. Sometimes it creates something — and it just doesn’t work out. We have some lineages like worms that lived long before the Cambrian and haven’t changed in appearance or behavior, then we have things that were around for a couple of million years and just disappeared because they were chance victims of mass extinctions.” Julien Kimmig, Luke C. Strotz, Bruce S. Lieberman. The stalked filter feeder Siphusauctum lloydguntheri n. sp. from the middle Cambrian (Series 3, Stage 5) Spence Shale of Utah: its biological affinities and taphonomy. Journal of Paleontology, 2017; 91 (05): 902 DOI: 10.1017/jpa.2017.57
The main types of sedimentary rocks are clastic or chemical. Some sedimentary rocks are a third type, organic. Clastic sedimentary rocks are made of sediments. Chemical sedimentary rocks are made of minerals that precipitate from saline water. Organic sedimentary rocks are made from the bodies of organisms.
GRE analogy questions test your ability to recognize the relationship between a pair of words and to find a parallel relationship in another pair. To answer a GRE analogy question, you must work out how the first two words fit together and then select the answer choice that shares the same relationship. All kinds of relationships can appear in a GRE analogy question, including kind, size, spatial contiguity, or degree. At heart, an analogy question is an exercise in definition: you link the words together to determine the relationship between them. There are a few ways to arrive at the right answer on this type of question. The first is to come up with a short phrase that captures the relationship between the given pair; this helps you find the answer choice that fits the same relationship. The second is to eliminate the answers that do not make sense, which gives you a better chance of getting the right answer. The last is to remember that one word can mean several different things, so look over the answers a few times to make sure that a word does not have another, unintended meaning. The main way to prepare is to take a GRE analogy practice test. GRE analogy practice questions provide samples similar to the actual test and help you become familiar with its format. An example of a practice test question is:
1. Red : Rainbow
a) Color : Rose
b) Saturday : Week
c) Drop : Ocean
d) Ball : Team
e) Patient : Ward
In this question, you need to decide what the relationship is between red and rainbow: a rainbow is made up of a set of colors, and red is one of them. A is not correct because a rose is not part of a color; the relationship runs in the wrong direction. C is not correct because, although a drop is part of an ocean, it is not one of a fixed set of distinct parts in the way that red is one of the rainbow's colors. D is not correct because a team is made up of players, not balls. E is not correct because patients do not make up a ward in the way that days make up a week. So the correct answer is B, because a week is made up of days and Saturday is one of those days.
What is Induction Sealing? Induction sealing is a common packaging process used to bond a foil seal across the opening of a container, jar or bottle. Usually the foil seal is placed inside the cap or closure of the product to be sealed, and the cap or closure is then placed on the filled product. The product then passes beneath an induction sealing machine and, without ever touching the product, the machine bonds the foil to the opening of the container. How Does Induction Sealing Take Place? As soon as the container has been filled and capped – with a closure fitted with an induction liner – it travels beneath the induction sealing system. The non-contact heating system welds the liner to the container, resulting in an air-tight and water-tight seal. Induction sealing machines or sealers, sometimes referred to as heat sealers, create the seal by emitting a concentrated, high-energy electromagnetic field. This field is created using thick, high-current windings (coils of wire), with voltage and current that switch back and forth many thousands of times per second. The electromagnetic field passes through the air and the plastic of the container's cap to reach the aluminum of the induction liner. The field then generates high-frequency alternating currents on the surface of the foil, which cause the foil to heat up. Bonded to the aluminum is a heat-seal layer – usually a polymer layer that melts with heat. When this heat-seal layer melts, it bonds with the top surface of the container opening, and as it cools it "sets" in place on the neck or rim of the container. The Food and Drug Administration regards induction sealing as a highly effective method of tamper evidence.
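As a rough illustration of why the alternating field heats only the thin foil, the sketch below computes the electromagnetic skin depth in aluminum at a few frequencies. The formula is the standard one, but the operating frequencies and material values are typical assumed figures rather than numbers taken from the article.

```python
# Rough sketch of why a rapidly alternating field heats the thin aluminum liner.
# Induced eddy currents flow within roughly one "skin depth" of the surface:
#     delta = sqrt(rho / (pi * f * mu0 * mur))
# The frequencies and material values below are typical assumed figures, not
# numbers taken from the article.
import math

rho = 2.7e-8              # resistivity of aluminum in ohm·m (approximate)
mu0 = 4 * math.pi * 1e-7  # permeability of free space
mur = 1.0                 # aluminum is essentially non-magnetic

for f in (50e3, 100e3, 500e3):   # assumed operating frequencies in Hz
    delta = math.sqrt(rho / (math.pi * f * mu0 * mur))
    print(f"{f / 1e3:5.0f} kHz: skin depth ≈ {delta * 1e6:3.0f} µm")

# At tens to hundreds of kHz the skin depth is a few hundred micrometres or less,
# so a foil a few tens of micrometres thick carries the induced current, heats
# quickly, and melts the polymer heat-seal layer bonded beneath it.
```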
Training students to be better digital citizens As you are aware, children today spend a lot of time on technology and social media. While technology offers kids great opportunities for self-exploration, creativity, connection and learning, kids may also experience challenges like oversharing and damage to their reputation, cyberbullying, and not knowing how to analyze and evaluate the credibility of information or assess the dangers posed to them. To harness technology for learning, kids need to understand how to think critically, behave safely, and participate responsibly online. For this reason, the School will be partnering with Common Sense Media to train students to be better digital citizens by teaching them to think critically, behave safely and participate responsibly online. As a pilot project, this training will be given to students of grades 6 and 7 during their I.C.T. periods. Using a detailed plan and resources from Common Sense Education, it is aimed at teaching students digital literacy and citizenship skills and at engaging parents along the way. The objective is to help kids thrive in a world of media and technology by teaching them how to use media and technology wisely. You can also check commonsensemedia.org to learn how to harness digital media to help your child interact positively in a digital world.
Global warming is on track to cause a major wipeout of insects, compounding already severe losses, according to a new analysis. Insects are vital to most ecosystems and a widespread collapse would cause extremely far-reaching disruption to life on Earth, the scientists warn. Their research shows that, even with all the carbon cuts already pledged by nations so far, climate change would make almost half of insect habitat unsuitable by the end of the century, with pollinators like bees particularly affected. However, if climate change could be limited to a temperature rise of 1.5C – the very ambitious goal included in the global Paris agreement – the losses of insects are far lower. The new research is the most comprehensive to date, analysing the impact of different levels of climate change on the ranges of 115,000 species. It found plants are also heavily affected but that mammals and birds, which can more easily migrate as climate changes, suffered less. “We showed insects are the most sensitive group,” said Prof Rachel Warren, at the University of East Anglia, who led the new work. “They are important because ecosystems cannot function without insects. They play an absolutely critical role in the food chain.” “The disruption to our ecosystems if we were to lose that high proportion of our insects would be extremely far-reaching and widespread,” she said. “People should be concerned – humans depend on ecosystems functioning.” Pollination, fertile soils, clean water and more all depend on healthy ecosystems, Warren said. In October, scientists warned of “ecological Armageddon” after discovering that the number of flying insects had plunged by three-quarters in the past 25 years in Germany and very likely elsewhere. “We know that many insects are in rapid decline due to factors such as habitat loss and intensive farming methods,” said Prof Dave Goulson, at the University of Sussex, UK, and not part of the new analysis. “This new study shows that, in the future, these declines would be hugely accelerated by the impacts of climate change, under realistic climate projections. When we add in all the other adverse factors affecting wildlife, all likely to increase as the human population grows, the future for biodiversity on planet Earth looks bleak.” They then calculated how the ranges change when global warming means some regions can no longer support particular species. For the first time in this type of study, they included the 1.5C Paris target, as well as 2C, the longstanding international target, and 3.2C, which is the rise the world will experience by 2100 unless action is taken beyond that already pledged. The researchers measured the results in two ways. First, they counted the number of species that lose more than half their range and this was 49% of insect species at 3.2C, falling to 18% at 2C and 6% at 1.5C. Second, they combined the losses for each species group into a type of average measure. “If you are a typical insect, you would be likely to lose 43% of your range at 3.2C,” Warren said. “We also found that the three major groups of insects responsible for pollination are particularly sensitive to warming.” Guy Midgley, at University of Stellenbosch, South Africa and not part of the research team, said the new work built on previous studies but is far more comprehensive. 
He said major impacts on wildlife would be expected given the potential scale of climate change: “Global average surface temperatures in the past two million years have rarely approached the levels projected over the next few decades.” Warren said that the world’s nations were aware that more action on climate change is needed: “The question is to what extent greater reductions can be made and on what timescale. That is a decision society has to make.” Another study published in Science on Thursday found that one third of the world’s protected areas, which cover 15% of all land, are now highly degraded by intense human pressure including road building, grazing, and urbanisation. Kendall Jones, at the University of Queensland, Australia, who led the work, said: “A well-run protected area network is essential in saving species. If we allow our protected area network to be degraded there is a no doubt biodiversity losses will be exacerbated.”
Rosacea: Who gets and causes Who gets rosacea? Rosacea is common. According to the U.S. government, more than 14 million people are living with rosacea. Most people who get rosacea are:
- Between 30 and 50 years of age
- Fair-skinned, and often blonde-haired and blue-eyed
- Of Celtic or Scandinavian ancestry
- Likely to have someone in their family tree with rosacea or severe acne
- Likely to have had lots of acne — or acne cysts and/or nodules
Women are a bit more likely than men to get rosacea. Women, however, are not as likely as men to get severe rosacea. Some people are more likely to get rosacea, but anyone can get this skin disease. People of all colors get rosacea. Children get rosacea. What causes rosacea? Scientists are still trying to find out what causes rosacea. By studying rosacea, scientists have found some important clues:
- Rosacea runs in families. Many people who get rosacea have family members who have rosacea. It is possible that people inherit genes for rosacea.
- The immune system may play a role. Scientists found that most people with acne-like rosacea react to a bacterium (the singular of bacteria) called Bacillus oleronius. This reaction causes their immune system to overreact. Scientists still do not know whether this can cause rosacea.
- A bug that causes infections in the intestines may play a role. This bug, H. pylori, is common in people who have rosacea. Scientists cannot prove that H. pylori can cause rosacea, and many people who do not have rosacea have an H. pylori infection.
- A mite that lives on everyone’s skin, Demodex, may play a role. This mite likes to live on the nose and cheeks, and this is where rosacea often appears. Many studies have found that people with rosacea have large numbers of this mite on their skin. The problem is that some people who do not have rosacea also have large numbers of this mite on their skin.
- A protein that normally protects the skin from infection, cathelicidin, may cause the redness and swelling. How the body processes this protein may determine whether a person gets rosacea.
Diamondoid nanoparticles improve electron emitters Diamondoids are nanoparticles made of only a handful of carbon atoms, arranged in the same way as in diamond, forming nanometer sized diamond crystals. Previously, researchers at the Advanced Light Source (ALS) at Lawrence Berkeley National Laboratory demonstrated the fascinating capability of these tiny little diamonds to act as a monochromator for electrons. In short, a thin layer of diamondoids deposited on a metal surface will first capture the electrons ejected from the surface below due to its negative electron affinity. These electrons, which are emitted from the metal with a wide variety of kinetic energies—or colors—are then re-emitted by the diamondoid layer but with a very narrow energy distribution. This property, which is unique to diamondoid, is believed to enable the development of a new generation of electron emitters with unprecedented properties. In photoemission electron microscopy (PEEM), electrons emitted from a sample due to x-ray irradiation are used to obtain images of the chemical or magnetic properties of a surface with high spatial resolution. However, chromatic aberrations limit the resolution typically to 20 nm due to the wide energy distribution of the emitted electrons. While it is possible to add expensive and complex correction elements to the microscope, this experiment demonstrates that the energy spread can instead be reduced right at the source by using an inexpensive, simple coating of diamondoids. Using this approach, the resolution of the PEEM3 microscope at ALS Beamline 11.0.1 improved significantly and researchers were able to attain chemical information from 10 nm Au nanoparticles. The study exemplifies how a problem not suited to conventional engineering solutions can be solved with the help of the unique properties of a nanoparticle.
In 1977 the legislature designated an official state fossil. Fossils are the remains, impressions, or traces of animals or plants of past ages that have been preserved in the earth's crust by changing into rock. About 160 million years ago, parts of Nevada were covered with a warm ocean. Ichthyosaurs, or huge "fish lizards," swam in these seas. These enormous creatures could grow to be 70 feet long and were shaped something like a present-day whale or porpoise. One of the skeletons found in central Nevada was 50 feet long and its skull bones weighed 1,200 pounds. The ichthyosaur was, in effect, the dinosaur of the ocean, though strictly speaking it was a marine reptile rather than a true dinosaur. Nevada Historical Society
In test and measurement applications, engineers and scientists can collect vast amounts of data every second of every day. For every second that the Large Hadron Collider at CERN runs an experiment, the instrument can generate 40 terabytes of data. For every 30 minutes that a Boeing jet engine runs, the system creates 10 terabytes of operations information. For a single journey across the Atlantic Ocean, a four-engine jumbo jet can create 640 terabytes of data. Multiply that by the more than 25,000 flights flown each day, and you get an understanding of the enormous amount of data that exists (Rogers, 2011). That’s “Big Data.” Drawing accurate and meaningful conclusions from such a large amount of data is a growing problem, and the term Big Data describes this phenomenon. Big Data brings new challenges to data analysis, search, data integration, reporting, and system maintenance that must be met to keep pace with the exponential growth of data. The technology research firm IDC recently performed a study on digital data, which includes measurement files, video, music files, and so on. This study estimates that the amount of data available is doubling every two years. In 2011 alone, 1.8 zettabytes (1E21 bytes) of data were created (Gantz, 2011). To get a sense of the size of that number, consider this: if all 7 billion people on Earth joined Twitter and continually tweeted for one century, they would generate one zettabyte of data (Hadhazy, 2010). Almost double that amount was generated in 2011. Big Data is collected at a rate that approximately parallels Moore’s law. The fact that data is doubling every two years mimics one of electronics’ most famous laws: Moore’s law. In 1965 Gordon Moore stated that the number of transistors on an integrated circuit doubled approximately every two years and he expected the trend to continue “for at least 10 years.” Forty-five years later, Moore’s law still influences many aspects of IT and electronics. As a consequence of Moore’s law, technology is more affordable and the latest innovations help engineers and scientists capture, analyze, and store data at rates faster than ever before. Consider that in 1995, 20 petabytes of total hard drive space was manufactured. Today, Google processes more than 24 petabytes of information every single day. Similarly, the cost of storage space for all of this data has decreased exponentially from $228/GB in 1998 to $.06/GB in 2010. Changes like this combined with the advances in technology resulting from Moore’s law, undoubtedly fuel the Big Data phenomenon and raise the question, “How do we extract meaning from that much data?” 1. What is the value of Big Data? One intuitive value of more and more data is simply that statistical significance increases. Small data sets often limit the accuracy of conclusions and predictions. Consider a gold mine where only 20 percent of the gold is visible. The remaining 80 percent is in the dirt where you can’t see it. Mining is required to realize the full value of the contents of the mine. This leads to the term “digital dirt” in which digitized data can have concealed value. Hence, Big Data analytics and data mining are required to achieve new insights that have never before been seen. A generalized, three-tier solution to the “Big Analog Data” challenge includes sensors or actuators, distributed acquisition and analysis nodes, and IT infrastructures or big data analytics/mining. 2. What does Big Data mean to engineers and scientists? The sources of Big Data are many. 
However, the most interesting is data derived from the physical world. That’s analog data captured and digitized by NI products. Thus, you can call it “Big Analog Data”— derived from measurements of vibration, RF signals, temperature, pressure, sound, image, light, magnetism, voltage, and so on. Engineers and scientists publish this kind of data voluminously, in a variety of forms, and many times at high velocities. NI helps customers acquire data at rates as high as many terabytes per day. Big Analog Data is an ideal challenge for NI data acquisition products such as NI CompactDAQ, CompactRIO, and PXI hardware, and tools like NI LabVIEW system design software and NI DIAdem to organize, manage, analyze, and visualize data. A key advantage of these products is the ability to process data at the source of capture, often in real time. You can change this processing dynamically as needed to meet changing analytical needs. Embedded programmable hardware such as FPGAs offer extremely high-performance reconfigurable processing literally at the hardware pins of the measurement device. This allows the results of data analytics from back-end IT systems to actually direct a change in the type of processing that happens in NI products at the source of the data capture. Big Analog Data solutions strongly depend on IT equipment such as servers, storage, and networking for data movement, analytics, and archiving. You increasingly face challenges with creating end-to-end solutions that require a close relationship between DAQ and IT equipment. As an industry leader, NI is best suited to help you step up to Big Data challenges by providing solutions that are IT friendly and publishing data that is “Big Data-ready” for analytics on either in-motion or at-rest data. One thing is certain, NI is continually expanding its capabilities in data management, systems management, and collaborations with IT providers to meet the Big Data challenge. Dr. Tom Bradicich Dr. Tom Bradicich is an R&D fellow at National Instruments. Stephanie Orci is a product marketing engineer for DIAdem at National Instruments. “Big Data is Scaling BI and Analytics.” 1 Sep 2011. Web. 30 Aug 2012. Gantz, John, and David Reinsel. “Extracting Value from Chaos.” June 2011. Web. 8 Aug 2012. “Zettabytes Now Needed to Describe Global Data Overload.” 4 May 2010. Web. 31 Aug 2012. This article first appeared in the Q4 2012 issue of Instrumentation Newsletter.
American Civil War The American Civil War was started by a number of factors, and for one hundred and thirty years there have been debates over what caused it. The soldiers of the American South and the American North had one main goal: fighting either for or against slavery. In this sense, slavery was the central cause of the war. Slavery attracted moral condemnation, and the political leaders on either side divided the nation, resulting in war (Hickman Para 2). The South depended on cotton as its principal economic activity, and slaves worked the cotton farms. Any move to oppose slavery was received with hostility, which eventually led to war. The Northern leaders were against slavery. The American Civil War was thus caused by the opposition between the North and the South over slavery (Barwolf 3). The political power of the federal government, with its main offices in Washington, D.C., changed considerably over the years. Population growth allowed the North and the South to acquire more power over time, but the Southern states lost relative power as their population stagnated. One portion of the nation became larger than the other, and distinct sections of the nation were established; this was referred to as sectionalism. The federal government failed to harmonize political power in America, which led toward war. The Southern states pushed for freedom from central federal authority, believing that state laws should carry decisive weight against federal laws. This conflict between federal law and state law was referred to as States' Rights and found a warm reception in Congress (Calore 25). Congress and the presidents were unable to control the South, which was economically strong and therefore difficult to control. The invention of the cotton gin boosted the agricultural business, and the sectional crisis was difficult to restrain because of the South's economic strength. The South relied heavily on slaves for cotton farming, and this pointed toward a war of unification between the South and the North, even though such a war was not economically reasonable. These political and economic issues led to the American Civil War. The differences within antebellum American society involved slavery, western expansion, tariff reform, societal reform and the relations of federal authority. These issues made Americans view their country from different perspectives. The political disputes that led to the war started soon after the American Revolution, in the year 1782. There were intense arguments between the North and the South, and the most contentious issues concerned taxes known as tariffs (Wilson 9). Southerners saw the tariffs as a means of discouraging their progress, and they rejected the taxes, leading to political confrontation. The taxing issue led to banks in the South paying higher interest rates than banks in the North; the Southern banks paid these higher rates, in effect, to protect the Northern banks from collapsing. Southern politicians argued that their voices were being ignored by Congress, and they wanted to separate from the North, a step referred to as secession. Neither the politicians nor economic change did anything to solve the problems Americans were experiencing.
The Constitution of the United States protected against the seizure of property, and Southerners regarded slaves as property. The slaves had a key role in the cotton industry, and this led to an acute disagreement between the people of the North and the people of the South. Paul Finkelman indicated that Congress supported the North at the expense of the South. He indicated that the Constitution encouraged compromise, and he described the resulting problems in America as devilish; the serious problems related to immigration, race, gambling on the Indian reservations and the presidency. Other historians regarded the Constitution as a compromise (Hickman Para 4). The American government practiced compromise for the goodwill of all members, and compromise in the Constitution was intended as a positive measure. Even so, the Constitution created further divisions within the political structure of the federal government, and there were numerous political storms followed by intense, bitter struggle. The federal government's culture of compromise did not solve the problems of the Americans; in fact, it resulted in bitter divisions between the South and the North (Wilson 12). The drafting of the Constitution in the year 1787 established the prevalence of compromise, which was intended to put the people of the South and the people of the North on an equal footing. The word slavery never occurred in the American Constitution. The federal government was willing to end the slave trade, which meant that the South would suffer the effects while the North was in full support; the South referred to the Constitution as a contract with the devil. The move toward abolishing slavery in 1804 (Hickman Para 3) was a form of compromise that led to the separation of the various states. Thomas Jefferson feared a crisis over slavery and asserted that the crisis was a fire bell in the night. Jefferson insisted that all men had equal God-given rights; this was a challenge for the American states that had been willing to fight the British system. The abolitionists believed that all people were equal in God's sight; they asserted that the souls of blacks and whites were all the same in the eyes of God. The Southerners were against the British system, in which the economy was based on family farms; the Southern economy was based on a settled plantation system. Jefferson was critical in stating that there was an impending crisis over slavery, and he insisted that America should commit to abolishing slavery and support the British system (Wilson 13). A trans-Atlantic relationship developed as a result of the American Revolution. Slavery was firmly and deeply rooted in the American colonies; in the Southern population, forty percent were slaves. New England provided America with the ships used to convey slaves, and the slave trade created business relationships between America and Britain. The industrial revolution brought urbanization, which the Southerners did not like, and this led to migrations between the Southern and the Northern states and to the strengthening of the South's defensive-aggressive political behavior. The people of the South and the people of the North were rivals; they resisted revolution and preferred to remain conservative, while the revolution demanded the equal rights of men. The tax plan was meant to develop American industry and global commerce.
Southerners complained that the tax plan displayed favoritism to the Northerners (Hickman Para 4). The tax issue ran hot and resulted in a sectional crisis. The Southern militants and colonies were heavily against the tariffs and often hinted at withdrawing from the Union. The militants of the North were against slavery and against practices they regarded as immoral and obsolete, and this led to the development of the British system. The issues of 1690 to 1775 were remarkably similar to those surrounding the development of the American empire between 1810 and 1860: the Southerners were farmers and the Northerners were industrialists, and there was constant rivalry between the South and the North, indicating a repeat of history over the years. The state of Virginia had an influence on the slave trade. The Alabama Platform of William L. Yancey held that the slave trade should not be restricted by the federal government or by the territorial governments. Virginia supported a sovereignty in which external influences were shunned (Wilson 7), and Virginia fully supported the slave trade. John Brown organized a radical uprising to end the slave trade. Massachusetts did not support the slave trade; instead, it supported the slaves financially to settle in America. Massachusetts was of the opinion that the government should break from slavery, and its extensive influence called for no more slave states and for the enactment of the fugitive slave laws. The differences between Virginia and Massachusetts led to the nineteenth-century civil war; the war was inevitable. The common ground on ending the slave trade during the ongoing war led to the ending of the war (Hickman Para 5). The war equalized the Northern and the Southern states of America, ending the hostility, and it ended the slave trade. The South and the North were equalized as a result of the war. The South had relied heavily on slavery in the cotton plantations, while the North later became industrialized. The people of the North and the people of the South were rivals: the South was against modernization, and the Northern people were against the slave trade, leading to a war. The war was inevitable.
YELLOW-EYED PENGUIN (Megadyptes antipodes) DESCRIPTION: A medium-sized penguin, the yellow-eyed penguin is between 22 and 31 inches in length and weighs 8 to 19.6 pounds. It has pale yellow eyes and its head is capped by yellow, with black-centered feathers bordered by a bright yellow band extending from the eye to around the back of the head. HABITAT: Historical breeding habitat for this species was primarily in coastal forest and mixed-species scrub on slopes above landing areas. Very little coastal forest remains on the east coast of the South Island, where the bird must use scrub remnants, but forest remains the dominant habitat in breeding areas on other islands. The species forages over the continental shelf. RANGE: The yellow-eyed penguin is endemic to New Zealand, inhabiting the southeast coast of the South Island, Foveaux Strait, and Stewart, Auckland, and Campbell islands. MIGRATION: Adults are sedentary, but juveniles disperse north as far as the Cook Strait. BREEDING: Yellow-eyed penguins breed from August through March and lay eggs in shallow scrapes of leaves, grass, and twigs. Eggs are incubated for 39 to 51 days and chicks fledge at 106 to 108 days. LIFE CYCLE: Males may not breed until they are three to 10 years of age, but females usually reach maturity earlier in life. The bird’s average lifespan is 23 years. FEEDING: The yellow-eyed penguin feeds primarily on red cod, opal fish, sprat, and squid. THREATS: Yellow-eyed penguins are seriously threatened by food shortages resulting from sea-temperature changes driven by global warming. Populations have also suffered from loss of natural breeding habitat, high chick mortality due to predation by introduced mammals, and gillnet entanglement. POPULATION TREND: This species has declined by well over 50 percent in recent decades and now numbers fewer than 2,000 breeding pairs.
PACRIM
Black Day of the German Army Wargame
A smallish game of the Entente's August 1918 offensive, which massed 500 tanks and fresh forces against the Germans' Amiens salient, giving the Allies a true breakthrough. U. Blennemann '95
Background: The German General Erich Ludendorff described the first day of Amiens as the "Schwarzer Tag des deutschen Heeres" ("the black day of the German Army"), not because of the ground lost to the advancing Allies, but because the morale of the German troops had sunk to the point where large numbers of troops began to capitulate. The Battle of Amiens began on 8 August 1918 and opened a phase of the Allied offensive later known as the Hundred Days Offensive, which ultimately led to the end of the First World War. Allied forces advanced over 11 kilometres (7 mi) on the first day, one of the greatest advances of the war, with the British Fourth Army playing the decisive role. The battle is also notable for its effects on both sides' morale and the large number of surrendering German forces. Amiens has a place in the history of war because it was one of the first major battles involving armoured warfare and marked the end of trench warfare on the Western Front; fighting became mobile once again until the armistice was signed on 11 November 1918.
Fact Sheet: Mouth Sores A Sore Spot Mouth sores—they can be painful and irritating when you eat certain foods, when you bump them with your tongue or teeth or even when you try to smile. They can be caused by ill-fitting dentures, braces, the sharp edge of a broken tooth, bacteria, a fungal or viral infection or they can be a symptom of a disease or disorder. What are Mouth Sores? Mouth sores are swollen spots or sores in your mouth, on your lips, on your tongue or on the skin surrounding your mouth. There are several types of mouth sores, including: - Canker Sores: small, white areas of swelling or soreness that are surrounded by redness. Canker sores are not contagious. The cause of canker sores is uncertain, but some research suggests that immune system deficiencies, bacteria or viruses might be the culprits. Smoking, stress, trauma, allergies, certain types of foods (chocolate, caffeine or acidic foods) or vitamin deficiencies also may make you more susceptible to canker sores. Canker sores usually heal within one week, but they can recur. And, while there is no cure for canker sores, over-the-counter topical ointments or gels that provide temporary pain relief are available. Antimicrobial mouth rinses also may help to relieve pain. - Cold Sores: Often, people confuse canker sores with cold sores (also called fever blisters). Cold sores are groups of often-painful blisters filled with fluid that appear around the lips and sometimes under the nose. A person usually experiences his or her first cold sore infection in childhood. Once a person has had a cold sore, the virus stays in the body and can become active throughout the person's life. Cold sores are extremely contagious and usually are caused by herpes simplex virus. Infection may be brought on by decreased immune response, lack of sleep, stress or trauma. Cold sore blisters usually heal within seven to 10 days. While there is no cure for cold sores, non-prescription topical anesthetics for temporary pain relief are available. Your dentist or physician also might prescribe an antiviral drug to reduce the infection. - Leukoplakia: appears on the inner cheeks, gums or tongue and often appears as a thick, white-colored patch. It is associated with smoking or smokeless tobacco use. Other causes can include poorly fitting dentures, broken teeth, and cheek chewing. It is extremely important to report any signs of leukoplakia to your dentist or physician because an estimated 5 percent of cases of leukoplakia lead to cancer. Leukoplakia usually dissipates after the behavior causing the leukoplakia is ceased (stopping smoking or removing ill-fitting dentures, for example). Your dentist will check your healing progress every few months depending on the type, area of your mouth, and size of the affected area. - Candidiasis: a fungal infection that also is called oral thrush. Candidiasis appears as yellow-white or red patches in your mouth. Oral thrush is most common in newborns or in people whose immune systems are not functioning properly. People who have dry mouth or who are taking antibiotics also may be susceptible. It is found in people who wear dentures, as well. Candidiasis may form at the corners of the mouth with poor-fitting dentures. In these individuals, outbreaks of oral thrush can be prevented by cleaning the dentures and by removing them at night, if recommended by their dentist. If antibiotics cause the condition, your physician may consider reducing the dosage or changing the treatment. 
For those with dry mouth, saliva substitutes are available. Anti-fungal medications also are available.
- Oral Cancer: often starts as a tiny white or red spot or sore. Sometimes oral cancer presents itself as a sore that bleeds easily or does not heal. Or, it can be a lump or a thick or rough spot. It can affect any area of the mouth, including the lips, gums, tongue, and hard or soft palate. If you have pain, tenderness or numbness anywhere in the mouth or on the lips that does not go away after a week, talk to your dentist. Oral cancer most often affects people who use tobacco. If you use tobacco, talk to your dentist or physician about tobacco cessation treatment plans. Also, your dentist can check your mouth (and probably does) for oral cancer very easily and quickly during your routine cleaning and exam. Ask your dentist if he or she performs oral cancer screenings to be sure.
Should I be Concerned? Mouth sores are common and rarely cause complications. Most go away in about a week, but it's important to monitor any mouth sores you develop. If you are concerned or if the sore does not seem to be healing, contact your dentist or other medical professional for an examination.
As in other areas of Asia, the end of World War II led to the emergence of a number of independent states. Jordan, Lebanon, and Syria, all European mandates before the war, became independent. Egypt, Iran, and Iraq, though still under a degree of Western influence, became increasingly autonomous. Sympathy for the idea of Arab unity led to the formation of the Arab League in 1945, but different points of view among its members prevented it from achieving anything of substance. The one issue on which all Arab states in the area could agree was the question of Palestine. As tensions between Jews and Arabs in that mandate intensified during the 1930s, the British attempted to limit Jewish immigration into the area and firmly rejected proposals for independence. After World War II, the Zionists turned for support to the United States, and in March 1948, the Truman administration approved the concept of an independent Jewish state, despite the fact that only about one-third of the local population were Jews. In May, the new state of Israel was formally established. To its Arab neighbors, the new state represented a betrayal of the interests of the Palestinian people, 90 percent of whom were Muslim, and a flagrant disregard for the conditions set out in the Balfour Declaration of 1917. Outraged at the lack of Western support for Muslim interests in the area, several Arab countries invaded the new Jewish state. The invasion did not succeed because of internal divisions among the Arabs, but both sides remained bitter, and the Arab states refused to recognize Israel. The war had other lasting consequences as well because it led to the exodus of thousands of Palestinian refugees into neighboring Muslim states. Jordan, which had become independent under its Hashemite ruler, was now flooded by the arrival of one million urban Palestinians in a country occupied by half a million Bedouins. To the north, the state of Lebanon had been created to provide the local Christian community with a country of its own, but the arrival of the Palestinian refugees upset the delicate balance between Christians and Muslims. In any event, the creation of Lebanon had angered the Syrians, who had lost it as well as other territories to Turkey as a result of European decisions before and after the war.
3.1 Explain the features of an environment or service that promotes the development of children and young people
Planning an environment for children and young people requires significant attention. The term 'environment' covers more than just the furniture or the activities. An environment should be:
1. Stimulating and attractive: as young children learn through using their senses, environments for them should be interesting and visually attractive.
2. Well planned and organised: working with children requires great organisational abilities - babies need to be fed when they are hungry, toddlers get restless and older children need opportunities to explore. To accommodate this, we need to plan effectively and everyone within the setting needs to be organised.
3. Personalised and inclusive: this means thinking not only about what is available to children, but also about how things can be made accessible to all children.
4. Encouraging and practising participation: our Bath House nursery is a welcoming place, where everyone feels that they are valued and that they belong. Settings which encourage participation look for ways of helping children to learn about valuing others.
5. Regulatory requirements met: in our nursery this includes following the EYFS, health and safety legislation and other policies, which are understood by practitioners and implemented.
6. High-quality policies in place and followed: policies must also be reviewed, updated and evaluated to check for effectiveness.
7. Varied: we need to vary the provision to maintain children's interest.
8. Meeting individual and group needs.
9. Providing appropriate risk and challenge: it is important for children to experience situations in which they learn to evaluate risk for themselves (for example, I might encourage a child to climb a tree with supervision, or to jump off a low wall).
10. Involving parents and carers appropriately: as parents play the most important role in their children's development,...
Chinese tea has been traded for over 150 years. According to its development process, we can split this history into 3 stages:
1. The first stage (1840 ~ 1886)
This was the period when production and trade of Chinese tea rose: the area under tea production enlarged and the amount produced increased. From statistical records, total production for the whole of China was around 50,000 tons and total tea exports were 19,000 tons in the 1840s. By 1886, the total amounts of tea produced and exported reached 250,000 tons and 134,000 tons respectively - roughly a fivefold and a sevenfold increase over four decades. At the time, tea accounted for 62% of all of China's exports. The rise in the tea trade was mainly because foreign demand for tea was growing rapidly. The signing of the Treaty of Nanjing in 1842 also forced the Qing government to open 5 ports for trade, which, together with the advent of fast transport boats, prompted the tea trade's seaward development. But the rise in the tea trade was also because China needed to balance its trade deficit. By around 1842, China was importing opium in vast amounts. In order to pay for the imports, the Qing government enlarged its exports of silk and tea to bring money into China. These actions in turn increased tea selling.
2. The second stage (1887 ~ 1949)
This was the period when the Chinese tea trade began to decline. Once the Dutch and the British began planting tea in their colonies (around 1886), China's leading position in tea trading was eroded - and later even replaced. During that time, places such as Indonesia, India, and Sri Lanka became major tea producers for the world market. This was partly because these new tea makers were using machines to make tea. They were more efficient and more competitive than China, not just in quantity but also in quality - China was still producing tea with old methods at the time. Gradually, the British and Americans took the black tea market away from China, and Japan took away the green tea market. All these factors minimised Chinese tea's competitiveness in the world and phased China out of the world market. Apart from that, wars continuously loomed over China, including the War of Resistance against Japan and the Chinese Civil War. These slowed economic development in China, and tea gardens were deserted. In 1949, for example, the amount of tea produced was only around 41,000 tons, and tea exports only 9,000 tons. It was not until after 1950 that Chinese tea selling resumed.
3. The third stage (1950 ~ now)
After the new Chinese government was established and the political environment stabilised, Chinese tea production recovered with the support of the new government. From 1950 to 1988, China (including Taiwan) expanded its tea plantations roughly fivefold, from 3,170,000 acres to 16,300,000 acres. The amount of tea produced increased from 75,000 tons to 569,000 tons - an increase of more than 7 times. The total amount of tea exported rose almost 8 times, from 26,000 tons to 206,000 tons. China broke the record for total tea production in 1976, with 258,000 tons, and in 1983 it broke the record for total tea exports, with 137,000 tons. At the time of this writing, China holds around 45% of the world's total tea production area and remains in a leading position. As for its share of the tea market, China has increased its share from 11.9% in 1950 to 23% in 1988. Tea has also risen from 6.5% to 20% of all of China's exports.
These have been the result of a stable government and its staunch support for tea development.
Agent of Empire: The East India Company
The English East India Company was unlike any other organization in British colonial history. Starting as a monopolistic trading body, the company became involved in politics and acted as an agent of British imperialism in India and Southeast Asia from the early 18th century to the mid-19th century. In addition, the activities of the company in China in the 19th century served as a catalyst for the expansion of British influence there. Formally called the 'Governor and Company of Merchants of London Trading into the East Indies', and later the 'United Company of Merchants of England Trading to the East Indies', the East India Company (EIC) was formed for the exploitation of trade with East and Southeast Asia and India. Queen Elizabeth I of England granted a charter to the EIC in 1600, awarding it a monopoly of the trade with the East. The EIC arose from a grouping of London merchants, ordinary city tradesmen and aldermen who were prepared to take a gamble in buying a few ships and filling them with cargo to sell in the East. At the end of the voyage, after the return cargo was sold, the profits would be shared amongst the shareholders. This system was known as "joint-stock". Huge profits were made from the initial and difficult voyages to Southeast Asia, mainly from the sale of pepper acquired from the Sumatran and Javanese trading ports and sold in London. Soon, the EIC was building more and bigger ships and increasing the number of shareholders. The company was formed to share in the East Indian spice trade. This trade had been a monopoly of Spain and Portugal until the defeat of the Spanish Armada (1588) by England gave the English the chance to break the monopoly. Until 1612 the company conducted separate voyages, separately subscribed. There were temporary joint stocks until 1657, when a permanent joint stock was raised. The company met with opposition from the Dutch in the Dutch East Indies (now Indonesia) and from the Portuguese. The Dutch East India Company (VOC) pursued a monopoly on trade in spices, pepper and other commodities in the region, and the Dutch virtually excluded company members from the East Indies after the Amboina Massacre in 1623, in which English, Japanese, and Portuguese traders were executed by Dutch authorities. After the massacre, and owing to the high costs of financing its voyages to the archipelago, the EIC turned its attention to India, where it already had a factory at Surat. At that time, Surat was the main port of trade between India and Europe. Although the EIC turned to India, it did not completely withdraw from the Malay archipelago. It kept its factory at Bencoolen on the west coast of Sumatra. At this time, the Indian market became more attractive for English goods, and the company's defeat of the Portuguese in India (1612) won it trading concessions from the Moghul Empire. The company settled down to a trade in cotton and silk piece goods, indigo, and saltpetre, with spices from South India. It extended its activities to the Persian Gulf, Southeast Asia, and East Asia. From the mid-1600s onwards, the EIC slowly began to acquire territory in India (Madras, Bombay and Calcutta). During this period also, the EIC was allowed to raise its own military force. In 1689, the EIC issued a formal declaration of its intention to be a territorial power in India, thus revising its earlier commercial aims. By the 1700s, the French were becoming involved in India too. The eighteenth century was a very important period in the EIC's history.
The EIC expanded into Northern India and was increasingly involved in the China trade. In London, the Company's office headquarters was improved to reflect its importance as a great company of the world. With the world's silver deposits in the hands of enemy nations such as Spain, the EIC found it difficult to pay in silver for increased imports of Chinese tea and silk. It turned to Southeast Asian produce as another form of payment. The EIC also hoped to attract Chinese junks to an entrepot in the archipelago where the terms of exchange would be more favourable to the EIC. It was this that led to the establishment of Penang and Singapore and eventual British domination of the Malay peninsula. After the mid-18th century the cotton-goods trade declined, while tea became an important import from China. Beginning in the early 19th century, the company financed the tea trade with illegal opium exports to China. Chinese opposition to this trade precipitated the Opium War (1839-42), which resulted in a Chinese defeat and the expansion of British trading privileges; a second conflict, often called the "Arrow" War (1856-60), brought increased trading rights for Europeans. The original company faced opposition to its monopoly, which led to the establishment of a rival company and the fusion (1708) of the two as the United Company of Merchants of England Trading to the East Indies. The United Company was organized into a court of 24 directors who worked through committees. They were elected annually by the Court of Proprietors, or shareholders. When the company acquired control of Bengal in 1757, Indian policy was, until 1773, influenced by shareholders' meetings, where votes could be bought by the purchase of shares. This led to government intervention. The Regulating Act (1773) and Pitt's India Act (1784) established government control of political policy through a regulatory board responsible to Parliament. Thereafter, the company gradually lost both commercial and political control. Its commercial monopoly was broken in 1813, and from 1834 it was merely a managing agency for the British government of India. It was deprived of this role after the catastrophe of the Indian Mutiny in 1857, and it ceased to exist as a legal entity in 1873.
The Inka Empire stretched over much of the length and breadth of the South American Andes, encompassed elaborately planned cities linked by a complex network of roads and messengers, and created astonishing works of architecture and artistry and a compelling mythology--all without the aid of a graphic writing system. Instead, the Inkas' records consisted of devices made of knotted and dyed strings--called khipu--on which they recorded information pertaining to the organization and history of their empire. Despite more than a century of research on these remarkable devices, the khipu remain largely undeciphered.
In this benchmark book, twelve international scholars tackle the most vexed question in khipu studies: how did the Inkas record and transmit narrative records by means of knotted strings? The authors approach the problem from a variety of angles. Several essays mine Spanish colonial sources for details about the kinds of narrative encoded in the khipu. Others look at the uses to which khipu were put before and after the Conquest, as well as their current use in some contemporary Andean communities. Still others analyze the formal characteristics of khipu and seek to explain how they encode various kinds of numerical and narrative data.
Acknowledgments
Preface (Jeffrey Quilter)
Part One. Background for the Study of Khipu and Quechua Narratives
1. An Overview of Spanish Colonial Commentary on Andean Knotted-String Records (Gary Urton)
2. Spinning a Yarn: Landscape, Memory, and Discourse Structure in Quechua Narratives (Rosaleen Howard)
Part Two. Structure and Information in the Khipu
3. A Khipu Information String Theory (William J. Conklin)
4. Reading Khipu: Labels, Structure, and Format (Marcia Ascher)
5. Inka Writing (Robert Ascher)
Part Three. Interpreting Chroniclers' Accounts of Khipu
6. String Registries: Native Accounting and Memory According to the Colonial Sources (Carlos Sempat Assadourian)
7. Woven Words: The Royal Khipu of Blas Valera (Sabine P. Hyland)
8. Recording Signs in Narrative-Accounting Khipu (Gary Urton)
9. Yncap Cimin Quipococ's Knots (Jeffrey Quilter)
Part Four. Colonial Uses and Transformations of the Khipu
10. "Without Deceit or Lies": Variable Chinu Readings during a Sixteenth-Century Tribute-Restitution Trial (Tristan Platt)
11. Perez Bocanegra's Ritual formulario: Khipu Knots and Confession (Regina Harrison)
Part Five. Contemporary Khipu Traditions
12. Patrimonial Khipu in a Modern Peruvian Village: An Introduction to the "Quipocamayos" of Tupicocha, Huarochiri (Frank Salomon)
13. The Continuing Khipu Traditions: Principles and Practices (Carol Mackey)
Contributors
Index
Series: Joe R. and Teresa Lozano Long Series in Latin American and Latino Art and Culture
Number of Pages: 391
Published: 15th August 2002
Publisher: University of Texas Press
Country of Publication: US
Dimensions (cm): 22.9 x 15.2
Weight (kg): 0.51
Teacher/Student Learning Packet - People lived in this area for thousands of years. - People lived here because the river made it possible to live in this desert. - The building of Hoover Dam changed the area in large and important ways affecting people. - People still live in this area because of the river, most in ways very different than before Hoover Dam was built. - Hoover Dam was a significant accomplishment for human beings. Human beings have lived along or near the Colorado River for thousands of years. The evidence for this is the hundreds of habitation sites found throughout the Las Vegas, Lake Mead, Hoover Dam and Lake Mojave area, many of which have been dated with radiocarbon, argon, or by other measures. Some of the earliest peoples have been called "desert culture" people, basket makers, or pueblo people. It is important to remember that these names are just a convenience for those who study people and their way of life. These are not names that the people have given themselves. The descendants of the early peoples we know by more familiar names -- Paiutes, Hopi, Mojave, Yuma, Havasupai, Hualapai. These names are the names which the people have given to themselves -- for example, Havasupai means "the people of the blue-green water". The first non-native people in the Colorado River area were Spanish conquerors (conquistadores), who were looking for gold, silver or other wealth. Ulloa was the first to see the mouth of the Colorado in 1539. Cardenas, who traveled with Coronado from Mexico in 1540, was the first to see the Grand Canyon. Some of these Spanish soldiers stayed or returned to live in the area, which is why the Spanish language is so widely used today in California, Arizona, New Mexico and Nevada. The sharing of languages was not the only effect of contact between Spanish soldiers and native people. Foods, diseases, tools, horses, and a great blending of cultures was the ultimate result of the initial, sometimes unfriendly, contacts. Some two hundred years after the conquerors came, Spanish priests, such as Fathers Dominguez and Escalante in 1776, entered and explored parts of the Colorado River basin as they looked for routes of travel between their missions. It was Father Garces, also in 1776, who named the river, Rio Colorado, "river colored red." Jedediah Smith and other trappers came looking for beavers in 1826, gold miners on the way to California followed in 1849, and Mormon settlers arrived in Las Vegas in 1855. Las Vegas, which is Spanish for "the meadows," did not become a town until 1905. River explorers and mappers first came in January, 1858, under the leadership of Lt. Joseph Christmas Ives, who came up the Colorado by steamboat from the Gulf of California. He traveled as far upriver as possible to Black Canyon, the eventual site of Hoover Dam. John Wesley Powell and his men floated down the river, starting on the Colorado's main tributary, the Green River. From Green River, Wyoming, he and his men rowed all the way through the Grand Canyon. Powell made a second trip down the Colorado in 1871. The river explorers were hoping to find that the Colorado could be used as a route of travel and commerce, but because of the wide fluctuations in the amount of water from season to season, they concluded that it could not. The dam builders came in 1931. A handful of men did the planning and designing of the dam. There were another 16,000 workers who did the actual building. Many of these men had families, wives and children, who came with them. 
Why would so many people come out to what was, at the time, a raw, undeveloped and dangerous place to live? Essentially, because of the terrible economic times, the Depression, that was then affecting almost every part of the United States. People came from every part of the country to work at Hoover Dam. The way of life so many people had to endure -- camping out in tents or shacks along the Colorado River, some for as long as three years, without clean drinking water, toilets, or protection from the extreme weather -- makes these "common" people the real heroes of Hoover Dam. 96 men were killed in industrial accidents while building the dam. Several dozen others died from the heat or carbon monoxide poisoning while on the job, and hundreds of other people, wives and children of the workers, died from heat, polluted water or disease. Because of Hoover Dam, the Colorado River was controlled for the first time in history. Farmers received a dependable supply of water in Nevada, California and Arizona. Los Angeles, San Diego, Phoenix, Las Vegas and a dozen other towns and cities were given an inexpensive source of electricity, permitting population growth and industrial development. Grades 1 - 4 - Make a mural showing all of the different people who have lived in the Lake Mead/Las Vegas area or traveled through. - Make a model (clay, paper or cardboard) of an Indian village, an army fort or a dam builder's river camp. - Write a story such as: one day in the life of a river explorer, a Native American Indian, or ??? (your choice). - Write a poem or song: for example -- We built the dam! or Life on the River. or (your choice). Grades 5 - 8 - Research: Do any descendants of early settlers in the Las Vegas valley live as their ancestors lived? Explain. - Write: The lives of many people in the Lake Mead/Las Vegas area are possible because of Hoover Dam. From a population of several thousand in 1930, it has grown to over 1,000,000 today. How can people prepare to meet the challenges that such a growing population brings? - Create: A painting of a local scene or person (current or historic); a reproduction of an artifact (Native American or Depression era) in clay, wood, stone or other medium; a song or poem about life today or yesterday on Lake Mead, in the desert, or by the river. - Research: Which of your favorite foods were also enjoyed by native people in prehistoric times or by the conquistadores? Colorado River Country by David Lavender. New York: E. P. Dutton, Inc., 1982. Hoover Dam, An American Adventure by Joseph E. Stevens. Norman and London: University of Oklahoma Press, 1988 Lake Mead and Hoover Dam, The Story Behind the Scenery by James C. Moxon. Las Vegas: KC Publishers, Inc., 1980 (fifth printing, 1993). The Colorado by Frank Waters. Athens, Ohio: Swallow Press Books, 1946 (reprinted, 1984). Last Reviewed: 9/15/2004
Although we typically think of Immanuel Kant for his moral philosophy, he had many other ideas concerning political thinking that emphasized anti-war policies (Brook). Likewise, Karl Marx believed in peace; however, he had different ideas about how to accomplish this. Kant's political philosophy focused on the public sphere, which describes a group of private people who come together as a group to reason critically. This philosophy deals more with ownership and was socially relevant in periods when landowners, educated men and women, and property were of extreme importance to societal standing. Today, however, the public sphere acts more as a mode of publicity for political figures. Marx's political philosophy focused on the commons, which is defined as the resources that are available to all members of a society and the belief that everyone has equal claim to them. Marxism primarily deals with this need to share and was socially relevant at a time when capitalism was of extreme importance. Today, this philosophy is most closely reflected in socialist governments. The public sphere is the most useful principle for thinking about politics today. Although the situation that Karl Marx envisioned is ideal, it has been shown to work very poorly in actual practice, except for the Canadian socialist government. A majority of the democratic world's governments today operate based on the thoughts and opinions of the public sphere. Since the public spheres in many countries are now defined by the politicians in power rather than by educated citizens, this method of thinking about politics may not be the most effective for everyone; despite this, many systems of government have evolved to work this way. In many senses, Immanuel Kant and Karl Marx had opposing philosophical beliefs. Kant believed that the power of the state should be limited in order to protect people from the government (Strauss). The political system that he recommended is very similar to the Senate system used in the United States and the parliamentary system used in Great Britain. Constitutional governments such as these share a balance of power between the national government and small provinces that are represented by ordinary people. In addition, there is a balance of power between the government and the courts; the courts ultimately decide who is innocent or guilty based on the nation's and state's constitutions, and this decision is independent of the politicians directly involved in the national government. Kant's system of belief is highly relevant to the modern world. The public sphere gives ordinary people influence and power that are usually impossible in other political systems. Kant acknowledged that direct democracies will not work for modern governments; when examining how countries are run, this concept is obvious. What is the best solution for the greatest number of people will not necessarily be fair or ethical to the people in the minority; parliaments and republics overcome this unfairness. It is clear that while a majority of civilians are still able to express their opinions, there are many checks and balances put in place to ensure that governmental action is just. Karl Marx's policy of focusing on the commons contradicts these beliefs. In order to truly share resources in a fair way, many people need to give up their rights to cast their votes and opinions.
Karl Marx's political theory, which is commonly known as Marxism, aims to promote equality by allowing for equal sharing and production of goods. In doing so, it eliminates class conflict because there is no longer an upper class or a middle class; everyone has equal access to everything (Service). Although this form of government seems beneficial, it has led to catastrophes like communism in the Soviet Union. It is impossible for everyone to truly share everything equally, because this would leave no leader or group of people to dictate how goods should be shared. In response to this, the Soviet Union was led by many cruel dictators, such as Joseph Stalin. Under Stalin's power, a majority of the goods produced in his country solely benefited him and his lackeys, which resulted in harsh conditions for the civilians. In addition, he took extreme advantage of his power and committed terrible acts against his people that he justified as law enforcement (Wheatcroft). Despite this, socialism can work in isolated instances, such as the Canadian government. However, it requires an unselfish governing body that is truly invested in the well-being of its people. The theory of Marxism itself came to be when Karl Marx realized that pure capitalism is unable to sustain itself; when profit falls in a capitalist setting, wages and social benefits will also fall. In addition, military power will decrease, which is detrimental to societies that rely on constant protection in order to survive. Marxism solves these problems by controlling wages, profits, social benefits, and the military. If a government has a smaller amount of exports, it will still be able to provide benefits for its civilians by adjusting the budget accordingly. Canada has implemented this policy; as a consequence, Canadian citizens benefit from centralized healthcare. Despite the relevance of the commons in the modern Canadian government, many nations that have followed Marx's philosophy have defaulted into communist governments in which a single ruler has power over all the people (Gleason). This is detrimental to these people's own rights and freedom, and many do not realize the extent of the terrible situation they are in because their rulers do not allow them to. Those who are able to understand the problems with their government dream of having a political system governed by the public sphere, where they are able to contribute to the political ideas of their own nation. Although both Kant's and Marx's political philosophies have academic value, their success depends on real-world situations. In the modern world, it is common to find political systems governed by the public sphere; these systems are fully functional, last for a long time, and please a greater number of people. Civilians are the government; they are able to be in constant contact with their representatives, who will be removed if they do not listen to public opinion. In Marx's political philosophy, by contrast, too much is guided by the commons. In theory, the idea of sharing all goods and natural resources suggests a very peaceful society in which the best interests of all citizens are protected. However, we have seen in many historical situations that this is not the case. When people are given too much power, they will take advantage of it, and without a leader or group of leaders, a Marxist government cannot work.
It is important to understand Kant's and Marx's theories in terms of the modern world. While there are examples of both concepts working in the 21st century, it is important to understand the specifics of what is working and what is not. We can then use this information to infer governmental trends that will occur in the future. It seems that many communist dictators are falling out of power in favor of political philosophies that involve the public sphere. This indicates that Kant's belief in the public sphere is a trending thought pattern in the modern world.
- Brook, A. Kant and the Mind. Cambridge: Cambridge University Press, 1994.
- Gleason, Abbott. A Companion to Russian History. Wiley-Blackwell, 2009.
- Service, Robert. Comrades: Communism: A World History. Pan MacMillan, 2008.
- Strauss, L., and Cropsey, J. "Immanuel Kant," in History of Political Philosophy. Chicago and London: University of Chicago Press, 1987.
- Wheatcroft, S. G., Davies, R. W., and Cooper, J. M. "Soviet Industrialization Reconsidered: Some Preliminary Conclusions about Economic Development between 1926 and 1941." Economic History Review, 1986.
The behavioural, emotional and cognitive characteristics of obsessive-compulsive disorder (OCD). Characteristics of OCD - Behavioural – compulsions usually decrease anxiety, avoid situations that trigger anxiety. - Emotional – intense anxiety, depression, guilt and disgust. - Cognitive – obsessive thoughts, cognitive strategies, e.g. prayer, and self-insight. Obsessions are typically persistent and uncontrollable thoughts, images, impulses, worries, fears and doubts (or a combination of these). Additionally they are intrusive, unwanted, disturbing and significantly interfere with normal life, making them incredibly difficult to ignore. Sufferers know that their obsessional thoughts are irrational, but they believe the only way to relieve the anxiety caused by them is to perform compulsive behaviours, often to prevent perceived harm happening to themselves or more often than not, to a loved one. Instances of Obsessions - worrying that you or something/someone/somewhere is contaminated. - worrying that everything needs to be arranged symmetrically or at perpendicular angles so everything is ‘just right’ - worrying about causing physical or sexual harm to yourself or others - unwanted or unpleasant sexual thoughts and feelings, including those about sexuality or fear of acting inappropriately towards children - intrusive violent thoughts - worrying that something terrible will happen unless you check repeatedly - having the unpleasant feeling that you are about to shout out obscenities in public When someone is affected by Obsessive-Compulsive Disorder the natural response is to fight these horrible obsessional thoughts with purposeful mental or physical rituals and behaviours – compulsions. Compulsions are the repetitive physical behaviours and actions, or mental thought rituals, that are performed over and over again, in an attempt to relieve the anxiety caused by the obsessional thoughts. But unfortunately, any relief that the compulsive behaviours provide is only temporary and short lived, and often reinforces the original obsession, creating a gradual worsening cycle of the OCD. These behaviours involve repeatedly performing purposeful and meaningful actions in a very rigid and structured routine, specifically in relation to the obsessional thoughts, usually in an attempt to prevent perceived danger or harm coming to themselves, or to a loved one. In most cases the person recognises their compulsive actions are senseless and irrational, but none-the-less feels bound to carry them out. This is not for pleasure, but to feel they have ‘neutralised’ the perceived threat from the obsessional thought. Often a person with OCD will feel a heightened sense of responsibility to perform the neutralising behaviour, simply because they feel doing so will prevent harm coming to themselves or loved ones. What’s more they sometimes have an overwhelming urge to obtain that ‘just right’ feeling with no other reason than to feel comfortable. 
Instances of Compulsions - excessive washing of one's hands or body - excessive cleaning of clothes or rooms in the house - checking that items are arranged 'just right' and constantly adjusting inconsequential items, such as pens on a table, until they are aligned to feel 'just right' as opposed to looking aligned - mental rituals or thought patterns such as saying a particular phrase, or counting to a certain number, to 'neutralise' an obsessional thought - avoiding particular places, people or situations to avoid an OCD thought - repeatedly opening and sealing letters / greetings cards that one has just written, maybe hundreds of times - constant checking of light switches, handles, taps, locks, etc. to prevent perceived danger from flooding, break in, gas leak or fire - saying out loud (or quietly) specific words in response to other words - avoidance of kitchen knives and other such instruments A compulsion can either be overt (i.e. observable by others), such as checking that a door is locked, or covert (an unobservable mental act), such as repeating a specific phrase in the mind. OCD generally manifests itself in four different ways: checking, contamination, hoarding and intrusive thoughts. It is diagnosed when the behaviour consumes excessive time, causes distress and anxiety and interferes with a person's ability to function adequately. Checking often occurs so the person feels they are preventing something terrible happening. For example, checking appliances could be for fear of a house fire. Contamination can be mental or physical. A person might compulsively brush their teeth for fear of mouth disease, or avoid public toilets to avoid people's germs. Hoarding is the inability to discard useless or worn-out possessions. Intrusive thoughts occur without the person wanting them, and are repetitive, disturbing and repugnant. People with intrusive thoughts are the least likely to act on them because the thoughts are so distressing. For example, a parent might have intrusive thoughts about abusing their child.
You can always enter numbers in octal, decimal, or hexadecimal in gdb by the usual conventions: octal numbers begin with ‘0’, decimal numbers end with ‘.’, and hexadecimal numbers begin with ‘0x’. Numbers that begin with neither ‘0’ nor ‘0x’, and do not end with a ‘.’, are entered in base 10 by default; likewise, the default display for numbers, when no particular format is specified, is base 10. You can change the default base for both input and output with the commands described below.
Any of the following commands
    set input-radix 012
    set input-radix 10.
    set input-radix 0xa
sets the input base to decimal. On the other hand, ‘set input-radix 10’ leaves the input radix unchanged, no matter what it was, since ‘10’, being without any leading or trailing signs of its base, is interpreted in the current radix. Thus, if the current radix is 16, ‘10’ is interpreted in hex, i.e. as 16 decimal, which doesn't change the radix.
set radix
    Sets the radix of input and output to the same base; without an argument, it resets the radix back to its default value of 10.
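As a short illustration of how the input radix interacts with what you type, here is a hypothetical gdb session; the value-history labels and exact output wording are shown approximately and may vary between gdb versions.

    (gdb) set input-radix 16
    (gdb) print 10
    $1 = 16
    (gdb) set input-radix 10
    (gdb) print 10
    $2 = 16
    (gdb) set input-radix 10.
    (gdb) print 10
    $3 = 10

With the input radix set to 16, the literal ‘10’ is parsed as sixteen. The second command, ‘set input-radix 10’, therefore leaves the radix at 16, exactly as described above; only an unambiguous decimal form such as ‘10.’ (or ‘012’ or ‘0xa’) restores base 10.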
A typical galaxy contains billions to trillions of stars like our closest stellar neighbor, the Sun. In the field of cosmology, a single galaxy is considered the basic unit for studying the vast and amazing Universe. With bigger and better telescopes, we are able to observe and catalog millions of such galaxies and quasars (very bright "active" galactic centers). If we observe the Universe around us at large distances, the distribution of these galaxies appears more or less isotropic: there is no preferred direction in the Universe. Another way we know this isotropy exists is that we are surrounded by uniformly distributed light from the early Universe in all directions (precisely speaking, the microwave radiation known as the Cosmic Microwave Background, or CMB). Also, there is no reason to believe that we are in any special place in the Universe, as our galaxy is no more than a speck of dust among the vast multitude of galaxies, just like any other. If we assume that we do not occupy any special location from which to observe this isotropy, meaning that there is no preferred location in the Universe, then all locations are treated on an equal footing. This is also called the Copernican principle. Observationally, it is difficult to "prove" this assumption. We observe on the past lightcone at a single cosmological time, which means that we can only directly test isotropy along our own worldline (the path we trace through spacetime). Since isotropy about all galaxy worldlines implies homogeneity, and we cannot observe all those worldlines, we adopt the Copernican Principle (i.e. our worldline is not special) and deduce the so-called "Cosmological Principle" on the basis of the isotropy we observe. When we combine this principle with a theory of gravity like general relativity and bring in the ingredients we know from fundamental physics, we get the standard model of the Universe. The most widely accepted model of the Universe is called the Lambda CDM model. This model is based on the existence of two exotic ingredients that we have not directly observed in a laboratory, namely dark matter and dark energy. According to this model, only 4% of the observed Universe is made of the matter and energy we know of; the remaining 96% is unknown. Dark matter is an elusive, (almost) all-pervading, non-baryonic substance that interacts only through gravity. Direct detection and laboratory study of dark matter is therefore an open challenge, as is finding a suitable dark matter candidate in particle physics. This situation is reminiscent of the quest for the aether medium in the past century. Dark energy, on the other hand, can be motivated by a cosmological constant (Einstein's purported biggest blunder, which came to the rescue). Unfortunately, the cosmological constant is vanishingly small in comparison to the expectation from another successful field of physics, Quantum Field Theory (the ratio is of the order of 10^-120). This discrepancy has even been called the worst theoretical prediction in the history of physics! As we know, the development of science is based on direct observational tests of the hypotheses we form. As Richard Feynman nicely put it: no matter how beautiful a theory is, if it does not match the experiments, it is wrong.
Apart from this uncomfortable situation of not knowing the major part of the Universe, there are also a few observational inconsistencies that keep cosmologists from whole-heartedly supporting the standard model of the Universe. We also know that general relativity is not reconciled with quantum mechanics (another observationally strong field of physics). So, in principle, we could also be looking at alternative models of gravity, to look beyond general relativity (which is what leads us to the standard model). Hence, there is a growing interest in extending cosmology beyond the standard Lambda CDM model. It is, therefore, essential that any observational study of the Universe (like any other branch of science) be independent of the biases we may have from learning particular theoretical models. The development of model-independent observational techniques can lead us to a better understanding of the Universe by pointing us towards superior theoretical models. At the least, observational analysis should be open to testing a multitude of potential theoretical models. For example, we observe that all galaxies are moving away from us, based on the shift in the frequency of their light known as redshift. This is why we believe that the Universe is expanding. To calculate how far these galaxies are from us, we need to assume some theoretical model of the Universe which dictates its evolution a priori. For example, we think the Universe has been expanding at an accelerating rate in recent times only because we assume the standard-model scenario. The Universe could just as well have been expanding linearly since its inception, and the observational data from the so-called standard candles (supernovae) can still be consistent. Theoretically speaking, the Universe could have different geometries, contain ingredients in various proportions, and follow a number of possible evolution scenarios, along with different alternative theories of gravity. Therefore, any code or calculation that analyses observations should be able to analyze the data under many different theoretical models. The code "correlcalc" is an effort in this direction. It is designed to analyze galaxy catalog data, containing the redshift and angular positions of all the observed galaxies, in light of various possible theoretical models. At the beginning of this article, we discussed how the Universe appears roughly the same in all directions at large distances. At smaller scales (of the order of up to 100 Mpc), gravity pulls matter closer together, forming structures. All the structures in the Universe we observe today are put together by this gravitational pull. So, by observing how different the Universe appears at large scales in comparison to a totally random distribution, we can extract some key information and statistical parameters. These help in putting constraints on theories of gravity, the evolution of structures, and so on. In the accompanying figure, filaments, voids and other structures can be seen in the observed patch of sky (at a particular redshift slice). If we assume an alternative model for the evolution of the Universe (with or without a different gravity model) and analyze the data, and the extracted parameters are self-consistent with that alternative model, it makes as compelling a case as the standard model (at least from this set of observations).
In a sample study conducted with the correlcalc code, using data taken from the Sloan Digital Sky Survey (SDSS), we concluded that the observations are just as consistent with a linearly coasting model of the Universe as they are with the standard model. Hence we strongly believe that more effort is needed to validate (or invalidate) the potential alternative models on observational grounds, and that more such tools need to be developed for bias-free observational data analysis of the Universe. These findings are described in the article entitled "correlcalc: A 'generic' recipe for calculation of two-point correlation function," recently published in the journal Astronomy and Computing.
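To make the comparison between the observed galaxy distribution and a totally random one concrete, here is a minimal Python sketch of a two-point correlation estimate on mock data. It is an illustration of the general technique only, not the correlcalc code or its API: the Landy-Szalay estimator, the Cartesian mock coordinates and the bin edges are assumptions chosen for this example, whereas a real analysis would first convert each galaxy's redshift and angular position into a distance under the cosmological model being tested.

    import numpy as np
    from scipy.spatial import cKDTree

    def two_point_ls(data, rand, edges):
        # Landy-Szalay estimate of the two-point correlation function.
        # data, rand: (N, 3) arrays of positions; edges: separation-bin edges (> 0).
        td, tr = cKDTree(data), cKDTree(rand)
        nd, nr = len(data), len(rand)

        # Cumulative pair counts within each radius, converted to per-bin counts.
        dd = np.diff(td.count_neighbors(td, edges))   # data-data pairs (ordered)
        rr = np.diff(tr.count_neighbors(tr, edges))   # random-random pairs
        dr = np.diff(td.count_neighbors(tr, edges))   # data-random pairs

        # Normalise each count by its total number of ordered pairs.
        dd = dd / (nd * (nd - 1.0))
        rr = rr / (nr * (nr - 1.0))
        dr = dr / (nd * float(nr))

        return (dd - 2.0 * dr + rr) / rr

    rng = np.random.default_rng(0)
    mock_data = rng.uniform(0.0, 100.0, size=(2000, 3))   # unclustered mock "galaxies"
    mock_rand = rng.uniform(0.0, 100.0, size=(4000, 3))   # random comparison catalog
    edges = np.linspace(1.0, 20.0, 11)                    # separation bins
    print(np.round(two_point_ls(mock_data, mock_rand, edges), 3))

For an unclustered mock like this, the estimate hovers around zero in every bin; a real survey catalog instead shows positive correlations at small separations, and it is those measured values that get compared against the predictions of the standard model or of an alternative such as a linearly coasting Universe.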
What is a synapse? When a nerve impulse reaches the synapse at the end of a neuron, it cannot pass directly to the next one. Instead, it triggers the neuron to release a chemical neurotransmitter. The neurotransmitter drifts across the gap between the two neurons. On reaching the other side, it fits into a tailor-made receptor on the surface of the target neuron, like a key in a lock. This docking process converts the chemical signal back into an electrical nerve impulse.
The Ocean Observing System for Climate provides information about the state of the world ocean and its regional variations to address important societal needs related to the Earth's climate. To this end, the ocean observing system strives to deliver continuous instrumental records for global analyses of: Sea Surface Temperature and Surface Currents: The ocean communicates with the atmosphere via its surface. Because the ocean covers 71% of the Earth's surface, Sea Surface Temperatures (SSTs) dominate surface temperature, which is a fundamental measure of climate change. SSTs also control sequestration of heat and CO2 in the ocean, amounts and patterns of precipitation, and large scale circulation of the atmosphere which affects weather. SSTs influence tropical cyclones, and patterns of climate variability, such as El Niño. Surface currents, which transport large amounts of heat from the tropics to subpolar latitudes, exert large influence over SSTs. Ocean Heat Content and Transport: The ocean is the Earth's greatest reservoir for heat that accumulates as a result of the planetary energy imbalance caused by greenhouse warming. Heat absorbed by the ocean raises ocean temperatures, including sea surface temperatures. Quantifying heat sequestration via measurement of ocean temperature is, therefore, critical to predicting global temperature rise attributable to greenhouse gas emissions. Increased storage of heat leads to thermal expansion of water, causing an increase in sea level with profound impacts on coastal communities and ecosystems. Air-Sea Exchanges of Heat, Momentum, and Fresh Water: The ocean, which stores the bulk of the sun's energy absorbed by the planet, communicates with the atmosphere via exchanges across the ocean surface. These air-sea fluxes need to be quantified in order to identify changes in forcing functions driving ocean and atmospheric circulation, which in turn control the redistribution of heat, thereby influencing global and regional climate. Evaporation of water from the ocean is an essential component of the global water cycle, which is also influenced by climate change. Sea level rise, caused by warming and expansion of ocean water and by melting and runoff of land-based ice, is both an impact and a diagnostic of the Earth's energy imbalance caused by greenhouse warming. Rising sea levels have profound, regionally varying impacts on coastal communities and ecosystems. Quantification of sea level rise provides a sensitive measure of how much heat is sequestered in the ocean as a consequence of greenhouse warming. Ocean Carbon Uptake and Content: Ocean uptake of carbon dioxide (CO2) results in sequestration of about a third of all anthropogenic CO2 emissions. As such, the ocean constitutes a large sink for the greenhouse gas most responsible for global climate change. In addition, uptake of CO2 results in acidification of the ocean, with potentially significant impacts on marine biota. Observations are necessary, also, to better understand how cycling among carbon reservoirs varies on seasonal-to-decadal time scales.
- University of Virginia - Physics Department
A Physical Science Activity
2003 Virginia SOLs
- construct pulley systems using one or two pulleys;
- understand the relationship between the arrangement of the pulleys and the effort required to lift a load;
- learn various applications of pulleys in everyday life.
- 2 broomsticks
- 1 meter of thin rope
- Ask for two volunteers from the class to help.
- Tie one end of the rope to a broomstick. Wrap the other end once around the second broomstick.
- While the two volunteers are trying to hold the broomsticks apart, attempt to pull them together using the free end of the rope.
- To make it easier, wrap the free end of the rope around the first stick another time and pull.
- Again, to make it easier, wrap the rope around the second stick another time and try to pull them together.
- This system works similarly to a pulley--therefore, as the number of wraps increases, so does the mechanical advantage provided to you. Ask the class to explain this analogy.
The pulley is a simple machine that consists of a grooved wheel and a rope. Like a lever, it provides a mechanical advantage in lifting a heavy load. There is a direct relationship between the number of ropes that form the pulley and its resulting advantage. There are two basic types of pulleys. When the grooved wheel is attached to a surface it forms a fixed pulley. The main benefit of a fixed pulley is that it changes the direction of the required force. For example, to lift an object from the ground, the effort would be applied downward instead of pulling up on the object. However, a fixed pulley provides no concrete mechanical advantage. The same amount of force is still required, but it may be applied in another direction. Another type of pulley, called a movable pulley, consists of a rope attached to some surface. The wheel directly supports the load, and the effort comes from the same direction as the rope attachment. A movable pulley reduces the effort required to lift a load. These two types of pulleys can be combined to form double pulleys, which have at least two wheels. There are various combinations which can result in a double pulley, some of which will be explored in the student experiment. As the pulley becomes more complex, the total lifting effort decreases. For example, a system consisting of a fixed pulley and a movable pulley would reduce the workload by a factor of two, because the two pulleys combine to lift the load. (A short worked example of these ideal forces follows the data sheet at the end of this activity.) In this experiment, the students will use a spring scale to record the required effort. The spring scale itself has a small amount of mass and so will contribute a small force to the effort because of gravity. To determine this force, the students should hang one spring scale from another and note the reading from the top scale (approximately 0.5 N). This amount should be added to the forces recorded throughout the procedure when the spring scale is being pulled towards the ground (in a fixed pulley). Remember to explain this to the students before they begin the activity.
- Gravel (240 g) per bottle
- 2 Plastic bottles with screw eye in lid, 500 ml (can be ordered from Delta Education Foss Catalog (1-800-258-1302); item number 420211232)
- Hang one spring scale from another. Read the measurement from the top scale (the top of the curved line) and record it on the data table.
- Fill each plastic bottle with gravel. Check the mass on a scale; if it is not 240 g, add water until the mass is corrected.
- Cut the cardboard into two pieces. One should be 30x50 cm, the other 20x50 cm.
- On the smaller piece of cardboard, trace the size of the lid of a plastic bottle. Cut a square around the shape, and then cut the shape of the lid from the cardboard. This square collar should be placed on the bottle before the lid is screwed on.
- Tape two pieces of white paper on the larger piece of cardboard, one above the other. On the lower sheet, use a ruler to mark dashes at 5 cm, 10 cm and 15 cm from the bottom of the cardboard. Draw lines across the paper at these marks.
- Clip the binder clip to the end of the ruler, so that the metal rings face towards you.
- Tape the ruler to a desk or table so that the clip hangs over the edge of the surface. Set a book on the ruler to ensure that it doesn't slide off.
- Fold 5 cm from the end of the rope over into a loop and secure the loop with masking tape. Repeat on the other end of the rope.
- Construct a fixed pulley. First, attach the pulley wheel to the binder clip on the edge of the surface. Thread the rope through the pulley. From one end of the rope, hang the load. From the other, hang the spring scale.
- Lift the load and record the required effort. Add to this the weight of the spring scale measured in step 1.
- Now construct a movable pulley. To do this, attach one end of the rope to the binder clip. Thread the rope through the pulley wheel, and attach its other end to the spring scale. Hang the load from the wheel itself. For a movable pulley, the load should be hanging from the center of the rope, and the spring scale should be above it.
- Lift the load and record the required effort. Add to this the weight of the spring scale measured in step 1.
- Retrieve the large sheet of cardboard with the white paper on it. Tape this sheet to the side of the table or counter, directly underneath the binder clip.
- Set up a fixed pulley system as you did in step 5.
- Hang the load from one end of the rope. Put a pencil through the loop at the other end (instead of the spring scale).
- Apply effort to the pencil so that the load hangs at the lowest (5 cm) line drawn on the white paper. Use the cardboard collar on the load to judge this.
- Keep the pencil in the rope loop for the rest of the experiment. While the load is positioned at the 5 cm line, mark the pencil's position on the white paper.
- Drag the pencil down the paper, drawing a line while lifting the load so that the collar reaches the top line.
- Release the load. Measure the distance traveled by the effort (the pencil line on the paper). Record this on the data table.
- Set up a movable pulley system as in step 8. Hang the load from the pulley. Again, replace the spring scale with a pencil.
- Repeat steps 12-14. With this setup, however, the pencil will move up along the paper.
- Record the distance traveled on the data table.

Data table:
Force of spring scale itself: ______
Columns: Direction of Effort, Effort (N), Effort (N) + spring scale force

Questions:
1. Is there a difference in the effort required to lift a load when using a fixed pulley and a movable pulley?
2. Did the effort distance differ when using the two types of pulleys? Why or why not?
3. Describe a situation in which it would be to your advantage to use a pulley instead of another type of simple machine.

Students with Special Needs: Some students may have difficulty manipulating the objects necessary for the assembly of the apparatus. This activity can be done with partners or in groups.
Data sheet to be completed during the laboratory.
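The background section above states that a fixed pulley gives no mechanical advantage while a fixed-plus-movable system halves the required effort. As a teacher's reference, here is a minimal C sketch of the ideal (frictionless) values for the 240 g load used in the activity; real spring-scale readings will differ because of friction and the weight of the scale itself, and g = 9.8 m/s^2 is an assumed value.

```c
#include <stdio.h>

int main(void)
{
    const double g = 9.8;            /* m/s^2, assumed */
    const double load_mass = 0.240;  /* kg: the gravel bottle from the materials list */
    double load_weight = load_mass * g;  /* newtons */

    /* One rope segment supports the load in a fixed pulley, two in a movable pulley. */
    const char *type[] = { "fixed pulley", "movable pulley" };
    int segments[]     = { 1, 2 };

    for (int i = 0; i < 2; i++) {
        double ideal_effort = load_weight / segments[i];
        printf("%-15s ideal effort: %.2f N (mechanical advantage %d)\n",
               type[i], ideal_effort, segments[i]);
    }
    return 0;
}
```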
WHERE C STANDS
Let us now see how C compares with other programming languages. All programming languages can be divided into two categories:
- Problem-oriented (high-level) languages: these have been designed to give better programming efficiency, i.e., faster program development. Examples of languages falling in this category are FORTRAN, BASIC, and PASCAL.
- Machine-oriented (low-level) languages: these have been designed to give better machine efficiency, i.e., faster program execution. Examples of languages falling in this category are assembly language and machine language.
C stands between these two categories. That is why it is often called a middle-level language, since it was designed to have both: relatively good programming efficiency (as compared to machine-oriented languages) and relatively good machine efficiency (as compared to problem-oriented languages).

C IS A MIDDLE-LEVEL LANGUAGE
C is often called a middle-level computer language. This does not mean that C is less powerful, harder to use, or less developed than a high-level language such as BASIC or Pascal, nor does it imply that C has the cumbersome nature of assembly language. Rather, C is thought of as a middle-level language because it combines the best elements of high-level languages with the control and flexibility of assembly language. As a middle-level language, C allows the manipulation of bits, bytes and addresses. Despite this fact, C code is also very portable. Portability means that it is easy to adapt software written for one type of computer or operating system to another type.
High-level languages: Ada, Modula-2, Pascal, COBOL, FORTRAN, and BASIC.
Middle-level languages: Java, C, C++, Forth, and macro-assembler.
Low-level languages: assembler.

Computers can understand only machine language instructions. Therefore, a program written in any other language must be translated into machine language. There are two types of translators: compilers and interpreters.

Difference between Compiler and Interpreter
A compiler checks the entire program at once and, if it is error free, produces the machine language instructions. An interpreter translates one statement at a time and, if it is error free, produces the machine language instructions for that statement. Executing a program written in a high-level language is a two-step process: the source program must first be translated (by either a compiler or an interpreter) to produce machine language instructions, and then the machine instructions are loaded into memory and executed.

COMPILERS vs. INTERPRETERS
It is important to understand that a computer language defines the nature of a program and not the way that the program will be executed. There are two general methods by which a program can be executed: it can be compiled, or it can be interpreted. Although programs written in any computer language can be compiled or interpreted, some languages are designed more for one form of execution than the other. For example, Java was designed to be interpreted, and C was designed to be compiled. However, in the case of C, it is important to understand that it was specifically optimized as a compiled language.
In its simplest form, an interpreter reads the source code of your program one line at a time, performing the specific instructions contained in that line. This is the way earlier versions of BASIC worked. In languages such as Java, a program's source code is first converted into an intermediary form that is then interpreted. In either case, a run-time interpreter must still be present to execute the program.
A compiler reads the entire program and converts it into object code, which is a translation of the program's source code into a form that the computer can execute directly. Object code is also referred to as binary code or machine code. Once the program is compiled, a line of source code is no longer meaningful in the execution of the program. In general, an interpreted program runs slower than a compiled program. A compiler converts a program's source code into object code that a computer can execute directly; therefore, compilation is a one-time cost, while interpretation incurs an overhead each time the program is run.
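To make the distinction concrete, here is a minimal C source file of the kind a compiler translates; the program itself is an arbitrary illustration, not an example from the text.

```c
/* A minimal C source file.  A compiler translates the entire file into
 * object code before execution; no source line is read at run time. */
#include <stdio.h>

int main(void)
{
    int sum = 0;

    /* Each statement below becomes machine instructions after compilation. */
    for (int i = 1; i <= 10; i++)
        sum += i;

    printf("Sum of 1..10 = %d\n", sum);
    return 0;
}
```

With a typical compiler, the whole file is translated once (for example with a command such as cc program.c) and the resulting executable is then run directly; an interpreter, by contrast, would have to re-read the source each time the program runs.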
4 Ways To Introduce The Walking Curriculum To Your Students
Walking-focused learning activities are unlike most kinds of learning activities students are used to. A thoughtful and engaging introduction is important, as students need to be prepared to “pay attention” on these walks—despite what they might, at first, believe, the walks are not non-instructional time. The walks require concentration. It is likely (especially with older students) that they will need to “un-learn” the belief that outside time is non-instructional time. In truth, the things students are learning about in class can be enriched outdoors; their learning can take on an added dimension. The aim is to change their conceptions of the natural world so that they, too, become aware of what nature has to teach. This walking curriculum offers significant outdoor instructional time and many opportunities for students to increase their familiarity and emotional connection with place. Here are 4 ways to evoke your students’ sense of wonder as you introduce the curriculum. Choose one—or a few—that would be most appropriate for the students you teach:

Option 1: Ask your students to walk around the schoolyard, or follow you around the schoolyard, for approximately 10 minutes. Do not give any particular instructions ahead of time. Upon returning from the walk, ask students what they noticed. Chances are they will have noticed very little. They will have spent more time talking and thinking “Cool! We get to go outside” than thinking about anything that actually surrounds them. You might then suggest that the walking curriculum aims to help them notice more details in the world around them, to hone the senses and tap into the richness of the world around them.

Option 2: You could stop with introduction 1, but you might also use the analogy of taking a test and receiving either a very high or low score. Ask students: Is 99% on a test a good score? Getting 99% right would be something to be proud of, wouldn’t it? What if you got 99% wrong? That is to say, what if you missed 99%? Not so good. The bad news is, of course, that we ignore most of the mass of activity around us. Horowitz (2013) reminds us this is normal, so we shouldn’t be too hard on ourselves: “Part of normal human development is learning to notice less than we are able to. The world is awash in details of color, form, sounds—but to function, we have to ignore some of it” (p. 26). You might then tell students that your goal in using this curriculum is to tap into more of the richness that is in the world—even if we usually ignore it. You might explain that the walks will function to help them tune into all those details, to learn to notice them and enjoy them.

Option 3: With students, discuss—and put a spin on—what they think they “see” in the world around them. You might ask and discuss the following questions: When you go for a walk, what do you see? Do you see all there is to see? Most of what there is to see? What we want students to realize is that our day-to-day engagement with the world is often limited. Like horses wearing blinders, we do not want to get distracted. You might share the following quote with students (if appropriate given the age of your students) as a means to spark further discussion or questions: “We see, but we do not see; we use our eyes, but our gaze is glancing, frivolously considering its object. We see the signs, but not their meanings. We are not blinded, but we have blinders” (Horowitz, 2013, pp. 8-9).
Our aim is thus to try to remove the blinders and broaden our perceptual field through walking with different aims, perspectives, and intentions.

Last but not least, option 4: Discuss the idea of “attention” with students. The idea of “paying attention” is certainly an expectation in most classrooms. Ask students: What does it mean to “pay attention”? What attracts your attention right now? The idea you want to evoke is that we have an ability to hone in on and attune to the flow of sensory information around us. This ability has had great evolutionary value for our species, but it is also the starting point of much of art, literature and scientific discovery. That being said, more often than not we ignore what’s going on around us. What these walks aim to do is increase students’ ability to discriminate. They aim to increase appreciation and awareness. By selecting things to pay attention to on the walks, we hope to expand students’ perceptual fields and increase their ability to observe particularities—however small—in the world around them. Share with students that, as a result of the curriculum, two things may happen: 1) they may become more familiar with this place, and 2) they may experience the world differently. We can, of course, suggest that the ordinary world around them is far from ordinary. Ask students if they have ever stared at a word so long that it suddenly seems odd or strange; if not, have them try it. Staring long enough at the word, well, “word”, can result in it looking like the squiggles it is composed of. It somehow becomes detached from its meaning. Suggest to students that we mostly take the world around us for granted. Its “taken-for-granted-ness” is the very reason we hardly notice it; these walks aim, instead, to reveal the extraordinary in the ordinary world.

Reminders To Students
Once the curriculum has been introduced, it is still worthwhile each and every time to set the tone of the walking activities before students begin. Remind them of the purpose of walking. Encourage (or challenge) them to be as receptive as possible to the world they encounter. Their senses should be on high alert. Remind them to pay attention to details—to seek particularities—in their surroundings and, especially, in the topic, theme or question of the walk. They need to become hoarders of details (Horowitz, 2013). Remind them what it means to be “mindful” of their actual bodies—how they feel, how they move. Ensure students are aware of the basic guidelines and your expectations (perhaps develop those expectations together as you would “classroom rules”). Expectations/guidelines may include: a) focusing on the theme or task at hand; b) not disturbing others; c) staying within the determined parameters of the schoolyard; d) observing without harming. Depending on your pedagogical plan, you might also want to introduce ahead of time what students will be doing after the walk (e.g. what curricular connections will be made or what activity they will do as a follow-up).
Effective Nuclear Charge
How nuclear charge loses strength

The effective nuclear charge is the "pull" that a specific electron "feels" from the nucleus. For example, the hydrogen atom contains one proton and one electron, and the effective nuclear charge on the electron equals 1. Helium contains two protons and two electrons, but the effective nuclear charge on each electron in the helium atom is not 2; it is about 1.7 units. There are rules in modern physics for calculating the effective nuclear charge, but there are no analyses of it. According to CPH Theory, it is explainable how the nuclear charge loses strength on its path from the nucleus to the electrons.

Slater was the first to give a simple rule for calculating the effective nuclear charge on any electron in any atom. Specifically, Slater's rule determines the shielding (screening) constant, which is represented by S. To determine the effective nuclear charge, use the equation Z* = Z − S, where Z* is the effective nuclear charge and Z is the atomic number. According to Slater's rule, you must order the electron configuration differently from what you are used to, grouping the electrons shell by shell. Electrons to the right of (outside) the electron you have chosen do not contribute, because they do not shield; each electron in the same group shields 0.35. For an electron in an s or p orbital with n > 1:

S = 1.00·N2 + 0.85·N1 + 0.35·N0

where N2 is the number of electrons in shells n−2 and below, N1 is the number of electrons in shell n−1, and N0 is the number of other electrons in the same shell as the chosen electron.

Example: arsenic, As, viewed from a 3d electron's perspective (its nucleus has 33 protons), with the configuration grouped as (1s2)(2s2, 2p6)(3s2, 3p6)(3d10)(4s2, 4p3).

Clementi and Raimondi: Clementi and Raimondi did their work on effective nuclear charges in the early 1960s. By this time, a great deal of background work had been done on orbitals and molecules, and the computer had been invented. This gave them the ability to incorporate self-consistent field (SCF) wave functions for the hydrogen to krypton atoms into their calculations. They did not have to rely on Slater-type orbitals which, for simplicity of calculation, did not contain nodes. They were thus able to go to a greater depth with a refined mathematical model, and this allowed them to clearly distinguish the s orbital from the p orbital in determining their set of rules. Specifically, they had a better model for dealing with electron penetration of the inner core. The results of Clementi's method differ from Slater's rule: for example, Clementi calculated Z* = 17.378 for the As atom from a 3d perspective (Slater's rule gives about 12.7). There is no analysis of Slater's rule or Clementi's method based on experiments, and no analytic account of why and how the strength of the nuclear charge is lost. (A short calculation sketch based on Slater's rule appears at the end of this article.)

The effective nuclear charge leads us to a new way of looking at force and at the relationship between force and energy. Is force perishable? If force is not perishable, why does the effective nuclear charge change from one orbit to another? What happens to the strength of the nuclear charge during its travel toward the electrons? Is force convertible? If force is convertible, what does it convert to? When an electron accelerates toward a proton, the energy of the electron increases; the question is, what happens to the amount of force? According to CPH theory, force and energy are convertible: force converts to energy and energy changes to force. I will explain the effective nuclear charge by CPH theory.

Work is quantized: theoretical physics and evidence show that energy is quantized. Also, when a force is applied to a particle or object, the energy of that particle or object changes. These relations show that if energy is quantized, then work cannot be continuous.
When a photon falls in a gravitational field, its energy increases. But the energy of a photon is quantized, so the work done by the gravitational force must be quantized. Also, when an electron accelerates in an electrical field, the energy of the electron changes. But the energy of the electron is quantized, so the work done by the electrical force is quantized too. However, d (distance) is continuous, so F (the gravitational or electric force) must be quantized.

How can we define a quantum of force? Before we define a quantum of force, we must define a quantum of work, and for that we need to select a short length. I propose Lp, the Planck length, which is Lp = 1.6 × 10^−35 m. I also defined a quantum of gravitational force in CPH Theory, Fg. A quantum of work is then Wq = Fg·Lp, and in the usual case W = nWq, where n is an integer (n = ..., −2, −1, 0, 1, 2, ...).

Force and energy are convertible into each other. Take a shot of mass m and fire it upward from the Earth with velocity v; the shot carries kinetic energy. While the shot travels upward, the gravitational force does work on it; this work is negative, and the shot's energy decreases until the shot stops. Then the shot falls back toward the Earth and its kinetic energy increases. While the shot moves upward, it loses energy equal to 1/2·mv^2 = n·Fg·Lp, and the shot's energy converts into n quanta of gravitational force. Likewise, when the shot falls, n quanta of gravitational force convert back into kinetic energy. We are not able to show that the intensity of gravity increases and decreases as a shot moves in a gravitational field, but the effective nuclear charge does show force losing strength along its path.

Exchange Particles in Quantum Theory
All elementary particles are either bosons or fermions (depending on their spin). The spin-statistics theorem identifies the resulting quantum statistics that differentiate fermions and bosons. Interaction of virtual bosons with real fermions is called a fundamental interaction. Momentum conservation in these interactions mathematically results in all the forces we know. The bosons involved in these interactions are called gauge bosons - such as the W vector bosons of the weak force, the gluons of the strong force, the photons of the electromagnetic force, and (in theory) the graviton of the gravitational force.

In particle physics, gluons are vector gauge bosons that mediate the strong color charge interactions of quarks in quantum chromodynamics (QCD). Unlike the neutral photon of quantum electrodynamics (QED), gluons themselves participate in strong interactions: the gluon carries the colour charge and so interacts with itself, making QCD significantly harder to analyze than QED. Because gluons carry color charge, gluon-gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to 10^−15 meters, roughly the size of an atomic nucleus.

The photon is the exchange particle responsible for the electromagnetic force. The force between two electrons can be visualized in terms of a Feynman diagram. The infinite range of the electromagnetic force is owed to the zero rest mass of the photon. While the photon has zero rest mass, it has finite momentum, exhibits deflection by a gravity field, and can exert a force.
The photon has an intrinsic angular momentum, or "spin", of 1, so the electron transitions which emit a photon must result in a net change of 1 in the angular momentum of the system. This is one of the "selection rules" for electron transitions.

Exchange Particles in CPH Theory
As described before, charged particles use the color-charges that exist around them to produce virtual photons. An electron produces negative virtual photons and a proton produces positive virtual photons; in this way they set up electric fields around themselves. Now look at two charged particles with different signs, an electron and a proton. The proton emits positive virtual photons; a photon moves toward the electron and the electron absorbs it. When the photon enters the structure of the electron, the charge of the electron becomes unbalanced, so the electron decays the virtual photon into positive color-charges directed toward the proton. Because positive color-charges have a positive effective charge, they pull the electron along behind them. The same happens for the proton and the negative virtual photon, and so the electron and proton attract each other. Every electrical interaction works like this. Now suppose a charged particle accelerates in an electric field and its velocity changes: when the energy of the electron increases, electric force converts to energy, and when the energy of the electron decreases, energy converts back to electric force.

Effective nuclear charge in CPH Theory
The flow of electric force is like the flow of gravitational force; the difference between them is in their strength. Suppose an atom contains n protons and n electrons, with the electrons rotating in their orbits around the nucleus, and electron B lying between the nucleus and electron A. Let Fe be a quantum of electric force. Now suppose n1 electric force particles start their travel from the nucleus toward electron A, with n1 = kn, where n is the number of protons in the nucleus and k is a natural number. When these electric force particles reach electron B, they do work on it (B is between the nucleus and electron A). Then n2 electric force particles convert to energy, and the energy of electron B changes. So (n1 − n2) electric force particles reach electron A, and the effective nuclear charge on A is Z* = Fe(n1 − n2); electron A feels F = (n1 − n2)Fe from the nucleus. If there were electrons B, C, D, ... between the nucleus and electron A, then n2, n3, n4, ... would convert to energy and [n1 − (n2 + n3 + n4 + ...)] force particles would reach A. Then A would feel an effective nuclear charge of Z* = Fe[n1 − (n2 + n3 + n4 + ...)]. When n1 = n2 + n3 + n4 + ..., electron A feels no effect of the nuclear charge at all.

Let us come back to electron B and see what happens to it. When n2 electric force particles reach B, B's energy changes and it leaves its orbit. But B is not alone: the other electrons and the nucleus act on it and return B to its orbit, and its energy converts back to force; this interaction is continual. If the external forces applied to an electron were constant, its energy and orbit would be stable. But the strengths (and directions) of the electric forces applied to any electron change continually, so the energy (and direction of motion) of the electron is not constant and its velocity and orbit change rapidly. The magnetic field of the electron also changes continually, and this changing magnetic field affects the other electrons and the nucleus. The spin and volume of the nucleus change, which in turn affects the electrons and their orbits, so each electron oscillates around its orbit. Suppose two objects A and B attract each other.
According to CPH Theory, a force particle leaves A and pulls it toward B; when the force particle reaches B, another leaves B and pulls it toward A, and so on. In the following examples, note that the electrons are moving in their orbits, but Fz (the nuclear charge) moves faster than the electrons.

Hydrogen: the hydrogen atom contains one proton and one electron in 1s, so Fw = 0 and Fz* = Fz, because there is no other electron in the hydrogen atom. Clementi gives Fz* = 1.

Helium: helium contains two protons and two electrons in 1s. Fz = 2 from the two protons moves toward electron1. Electron2 has electric charge and a magnetic field, so Fz acts on electron2; but the direction of Fz is toward electron1, so electron2 only deflects the direction of Fz, and this depends on the distance between the electrons in this orbit. Suppose this effect is negligible. Still, Fz does work on electron2, the energy of electron2 increases, and Fz loses part of its strength. So the effective nuclear charge Fz* on electron1 is given by Fz* = Fz − Fw. The energy of electron2 increases by E = W and it leaves its orbit. But electric force leaves it toward the nucleus and pulls electron2 back; the electric force of electron1 also acts on it. Electron2 then returns to its orbit and loses the energy E, which converts back into electric force equal to Fw. Fw is then added to the Fz* coming back from electron1, and Fz = Fz* + Fw reaches the nucleus, so the nucleus feels that the effective force from electron1 equals Fz. The effective nuclear charge Fz* on electron2 is the same as for electron1. According to Clementi's calculation, Fz* = 1.688.

Lithium: lithium has 3 protons and 3 electrons; two electrons are in 1s and one electron is in 2s. Fz = 3 from the 3 protons moves toward electron1 in the 1s orbit. This case is the same as helium, but the radius of the 1s orbit is smaller than in helium and the distance between the electrons is smaller, so the deflection of the direction of Fz is smaller than in helium. According to Clementi's calculation, Fz* = 2.691; compare this with helium, where Fz* = 1.688. There is one electron in the 2s orbit of lithium, and it feels the Fz* that remains after passing the 1s orbit. Fz = 3 leaves the nucleus toward it. Fz does work on the two electrons in the 1s orbit, losing Fw1 by acting on electron1 and Fw2 by acting on electron2, so when Fz reaches the 1s orbit it has become F'z = Fz − (Fw1 + Fw2). While F'z is passing the 1s orbit, it does work on electron1 and electron2 together; suppose this work equals Fw3. So Fw = Fw1 + Fw2 + Fw3, and Fz* = Fz − Fw reaches the electron in the 2s orbit. According to Clementi's calculation, Fz* = 1.279. When Fz* reaches the electron, another electric force particle equal to Fz* leaves it toward the nucleus. When it reaches the 1s orbit, it does work on it; while it is passing the 1s orbit, energy E = W converts back to force Fw, and Fz = Fz* + Fw reaches the nucleus.

Compton Effect in the Atom: the effective nuclear charge is like the Compton effect. In the Compton effect, the photon loses energy and the electron gains energy. According to CPH Theory, in the Compton effect a number of color-charges and magnetic-colors leave the photon and enter the electron. The effective nuclear charge is similar, with one difference: a photon is formed of positive color-charge, negative color-charge and magnetic-color, which together make electromagnetic energy, so the electron keeps the energy; a virtual photon, however, is formed only of negative (or positive) color-charges, and the electron cannot keep color-charge and loses it quickly. The question remains: does gravity have an effective force like the effective nuclear charge?
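Returning to Slater's screening rule quoted earlier (S = 1.00·N2 + 0.85·N1 + 0.35·N0 for an s or p electron with n > 1, and Z* = Z − S), here is a minimal C sketch that applies it. The electron counts are supplied by hand, and this is the simplified Slater form, not Clementi and Raimondi's refined method.

```c
/* Slater's screening rule for an s or p electron with n > 1, as stated above:
 *   S  = 1.00 * (electrons in shells n-2 and below)
 *      + 0.85 * (electrons in shell n-1)
 *      + 0.35 * (other electrons in the same shell)
 *   Z* = Z - S
 */
#include <stdio.h>

static double z_effective(int z, int deep, int next_inner, int same_shell)
{
    double s = 1.00 * deep + 0.85 * next_inner + 0.35 * same_shell;
    return z - s;
}

int main(void)
{
    /* Lithium's single 2s electron (Z = 3): two 1s electrons in shell n-1,
     * nothing deeper, no other electron in the 2s shell. */
    printf("Li 2s: Z* = %.2f\n", z_effective(3, 0, 2, 0));

    /* Helium 1s (Z = 2): Slater's original scheme uses 0.30 for the other
     * 1s electron, so applying the 0.35 value here is only approximate. */
    printf("He 1s (approx.): Z* = %.2f\n", z_effective(2, 0, 0, 1));
    return 0;
}
```

For the lithium 2s electron this gives Z* = 1.30, close to the Clementi value of 1.279 quoted above.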
The youngest children (Hatchlings) are gently introduced to the Montessori materials, and to the other children, until they feel secure, confident and safe in the community. During this stage of development, your child will naturally be drawn to the Practical Life materials. They will recognise some of the materials from your home environment, such as spoons, bowls and jugs, and enjoy being able to use them independently. However, they are still able to move freely around the environment, experiencing all of the materials if they choose to do so. The Direct Aim of the Practical Life materials is to teach your child tasks that he can use in his daily life and right through to adulthood: feeding himself, pouring drinks, preparing food, getting dressed and caring for the environment. The Indirect Aim is for your child to develop independence, concentration, and fine and gross motor skills, and to improve co-ordination. The types of materials your child will use during this stage of development are as follows:
- Transferring water
- Dressing frames
- Opening and closing bottles, boxes and jars
- Flower arranging
- Wiping tables
- Scrubbing tables and chairs
- Holding objects on a tray
Question 1. The angles of a quadrilateral are in the ratio 3 : 5 : 9 : 13. Find all the angles of the quadrilateral.
Answer: The angle sum of a quadrilateral is 360°. Let the angles be 3x, 5x, 9x and 13x; then 3x + 5x + 9x + 13x = 30x = 360°, so x = 12°. Hence, the angles are 36°, 60°, 108° and 156°.

Question 2. If the diagonals of a parallelogram are equal, then show that it is a rectangle.
Answer: In parallelogram ABCD, let the diagonals AC and BD be equal. Triangles ABC and DCB are congruent (AB = DC, BC is common, AC = DB), so ∠ABC = ∠DCB; since AB || DC, these angles are supplementary and therefore each is 90°. As all the angles are right angles, the parallelogram is a rectangle.

Question 3. Show that if the diagonals of a quadrilateral bisect each other at right angles, then it is a rhombus.
Answer: In the given quadrilateral ABCD, the diagonals AC and BD bisect each other at right angles at O. We have to prove that AB = BC = CD = AD. In triangles AOB and COB, AO = CO, OB is common and ∠AOB = ∠COB = 90°, so the triangles are congruent and AB = BC. Similarly, BC = CD = AD can be proved, which means that ABCD is a rhombus.

Question 4. Show that the diagonals of a square are equal and bisect each other at right angles.
Answer: In the figure for the square, DO = AO (sides opposite equal angles are equal). Similarly, AO = OB = OC can be proved. This gives the proof that the diagonals of the square are equal and bisect each other.

Question 5. Show that if the diagonals of a quadrilateral are equal and bisect each other at right angles, then it is a square.
Answer: Using the same figure, the angles opposite the equal sides are equal, so all the angles of the quadrilateral are right angles, making it a square.

Question 6. Diagonal AC of a parallelogram ABCD bisects angle A. Show that (i) it bisects angle C also, (ii) ABCD is a rhombus.
Answer: ABCD is a parallelogram whose diagonal AC bisects ∠DAB. It can be shown that AC bisects ∠BCD as well, and as the diagonals intersect at right angles, ABCD is a rhombus.

Question 7. In parallelogram ABCD, two points P and Q are taken on diagonal BD such that DP = BQ. Show that APCQ is a parallelogram.
Answer: With equal opposite angles and equal opposite sides, it is proved that APCQ is a parallelogram.

Question 8. ABCD is a parallelogram and AP and CQ are perpendiculars from vertices A and C on diagonal BD. Show that AP = CQ.

Question 9. In ∆ABC and ∆DEF, AB = DE, AB || DE, BC = EF and BC || EF. Vertices A, B and C are joined to vertices D, E and F respectively. Show that (i) quadrilateral ABED is a parallelogram, (ii) quadrilateral BEFC is a parallelogram, (iii) AD || CF and AD = CF, (iv) quadrilateral ACFD is a parallelogram, (v) AC = DF.
Answer: In quadrilateral ABED, AB = DE and AB || DE, so ABED is a parallelogram (one pair of opposite sides is equal and parallel), and therefore BE || AD and BE = AD ------------ (1). Similarly, quadrilateral BEFC can be proven to be a parallelogram, so BE || CF and BE = CF ------------ (2). From equations (1) and (2), AD || CF and AD = CF, so ACFD is also a parallelogram, and hence AC = DF.

Question 10. ABCD is a trapezium in which AB || CD and AD = BC. Show that

Key Points About Quadrilaterals
1. The sum of the angles of a quadrilateral is 360°.
2. A diagonal of a parallelogram divides it into two congruent triangles.
3. In a parallelogram, (i) opposite sides are equal, (ii) opposite angles are equal, (iii) the diagonals bisect each other.
4. A quadrilateral is a parallelogram if (i) opposite sides are equal, or (ii) opposite angles are equal, or (iii) the diagonals bisect each other, or (iv) a pair of opposite sides is equal and parallel.
5. The diagonals of a rectangle bisect each other and are equal, and vice-versa.
6. The diagonals of a rhombus bisect each other at right angles, and vice-versa.
7. The diagonals of a square bisect each other at right angles and are equal, and vice-versa.
8. The line segment joining the mid-points of any two sides of a triangle is parallel to the third side and is half of it.
9. A line through the mid-point of a side of a triangle parallel to another side bisects the third side.
10.
The quadrilateral formed by joining the mid-points of the sides of a quadrilateral, in order, is a parallelogram.
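As a quick numerical check of Question 1 above, the following minimal C sketch shares the 360° angle sum of a quadrilateral among angles in the ratio 3 : 5 : 9 : 13.

```c
/* Angles of a quadrilateral whose angles are in the ratio 3 : 5 : 9 : 13.
 * The ratio parts must share the quadrilateral's angle sum of 360 degrees. */
#include <stdio.h>

int main(void)
{
    int ratio[] = { 3, 5, 9, 13 };
    int total_parts = 0;

    for (int i = 0; i < 4; i++)
        total_parts += ratio[i];            /* 3 + 5 + 9 + 13 = 30 */

    double one_part = 360.0 / total_parts;  /* 12 degrees per ratio part */

    for (int i = 0; i < 4; i++)
        printf("Angle %d: %.0f degrees\n", i + 1, ratio[i] * one_part);

    return 0;
}
```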
Meaning “around the tooth”, periodontal disease affects the gums that surround the teeth and the bones that support them. Plaque that is left to build up on teeth eventually changes from a sticky film into tartar (calculus). Together, plaque and tartar start to break down the gums and bone in the mouth. One common symptom of this disease is red, swollen and bleeding gums. It is estimated that four out of every five people have some stage of periodontal disease but are unaware of it. Typically, this is because the first stages of the disease are often painless. Periodontal disease is the number one cause of tooth loss, and it has also been associated with other serious conditions, such as bacterial pneumonia, diabetes, stroke, increased risk during pregnancy and cardiovascular disease. Researchers are currently trying to find out if the bacteria associated with periodontal disease affect the course of other systemic diseases, such as those listed. The risk of periodontal disease is higher for people who smoke. You can reduce your risk of periodontal disease through good oral hygiene, a balanced diet and routine trips to Dr. Giaquinto.

Signs, symptoms and diagnosis: Dr. Giaquinto or our hygienist can diagnose this disease during a periodontal examination, which is included in your regular dental check up. A small dental instrument called a periodontal probe is used to measure the space (sulcus) between the teeth and the gums. A healthy sulcus should measure three millimeters or less and it should not bleed. The probe will indicate if the spaces are deeper than three millimeters; deeper pockets typically indicate a more advanced stage of the disease. In addition to measuring the sulcus, Dr. Giaquinto will check for inflammation, tooth mobility and other signs that will help in making a diagnosis according to one of the categories below.

Gingivitis: The first stage of periodontal disease, gingivitis is characterized by tender, inflamed gums that are also likely to bleed during flossing or brushing.

Periodontitis: Plaque build up will eventually harden into calculus, commonly called tartar. This build up will cause the gums to recede away from the teeth, creating deep pockets where bacteria and pus can grow. At this stage, the gums are very irritated and bleed very easily. Beginning stages of bone loss may also be seen with periodontitis.

Advanced periodontitis: As the gums, bone and other supporting ligaments are destroyed by periodontal disease, the teeth lose their strong anchoring. As a result, the affected teeth will start to become loose and may even fall out. Bone loss at this stage can be anywhere from moderate to severe.

Treatment for periodontal disease is determined by the type and severity of the disease. Your dentist and hygienist will be able to make the best treatment recommendations for your situation. One or two regular cleanings are typically all that is necessary to clear up the early stages of gingivitis, when there has still been no bone damage. We will also provide you with tips on how to maintain healthy dental habits at home, so the disease does not return. More advanced stages of the disease require scaling and root planing (deep cleaning). Normally, this type of cleaning is done on one quadrant of the mouth at a time, and the area being treated is made numb. This procedure removes tartar, plaque and other toxins from above and below the gum line and on root surfaces. Cleaning out these toxins helps the gums to heal, shrinking the pockets back to a normal size.
Depending on the patient, we may also recommend medication, mouth rinses and an electric toothbrush to help clear up the infection. If scaling and root planing does not clear up the problem, periodontal surgery may be necessary to get the pockets back to a normal size. Reducing the pocket size makes keeping your teeth clean much easier, and Dr. Giaquinto may recommend that you see a specialist in this field. If plaque is not removed within 24 hours after it forms on your teeth, it turns into tartar. Regular home dental care helps prevent the formation of plaque and tartar, but hard-to-reach places need to be cleaned regularly by Dr. Giaquinto to ensure all plaque build up is removed. After receiving treatment for periodontal disease, it’s very important to receive regular maintenance cleanings from a dentist or hygienist. These cleanings will provide Dr. Giaquinto the perfect opportunity to check the sulcus and ensure that your teeth and gums are healthy. Plaque and tartar that haven’t been removed by your daily cleaning efforts will be taken care of during this cleaning. You should schedule these check-ups about four times a year. Your periodontal cleaning and examination will also include: Regular periodontal cleanings combined with good oral hygiene habits will help you to maintain good dental health, and they are effective preventive measures against the return of periodontal disease.
Doctors often cannot explain why one person develops cancer and another does not. But research shows that certain risk factors increase the chance that a person will develop cancer. These are the most common risk factors for cancer:
- Growing older
- Ionizing radiation
- Certain chemicals and other substances
- Some viruses and bacteria
- Certain hormones
- Family history of cancer
- Poor diet, lack of physical activity, or being overweight
Many of these risk factors can be avoided. Others, such as family history, cannot be avoided. People can help protect themselves by staying away from known risk factors whenever possible. If you think you may be at risk for cancer, you should discuss this concern with your doctor. You may want to ask about reducing your risk and about a schedule for checkups.

Over time, several factors may act together to cause normal cells to become cancerous. When thinking about your risk of getting cancer, these are some things to keep in mind:
- Not everything causes cancer.
- Cancer is not caused by an injury, such as a bump or bruise.
- Cancer is not contagious. Although being infected with certain viruses or bacteria may increase the risk of some types of cancer, no one can "catch" cancer from another person.
- Having one or more risk factors does not mean that you will get cancer. Most people who have risk factors never develop cancer.
- Some people are more sensitive than others to the known risk factors.
Prof Peter Chantler
Molecular Biology of the Cell

a. Choice of DNA
The first step involves choosing which DNA to clone: genomic DNA or cDNA? If the cloned DNA is to be representative of the entire genome of a particular species (i.e. genomic DNA - facilitating analysis of the genome, the control of gene expression, gene structure and intronic sequences), total DNA must be prepared from somatic cells of that species. Because genomic DNA is identical throughout all somatic cells of an individual, the exact cellular source of material is not important. However, if the DNA is to be representative only of the coding regions of those genes expressing proteins, complementary DNA (i.e. cDNA - representing the mRNA population currently active in the cell and indicative of proteins being actively synthesized) must first be created. cDNA is so called because it is complementary to cellular mRNA. Because many genes are switched off in different cells at different times, the identities and amounts of transcribed mRNA are in constant flux within a cell, varying at different stages of development and in response to the changing needs of the cell. Consequently, it is important that the tissue or cell type is carefully chosen with respect to age, function and stage of the cell cycle, prior to cDNA production.
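Since cDNA is, as noted above, complementary to the mRNA from which it is copied, the base-pairing step can be sketched in code as below. The mRNA sequence used is an arbitrary illustration, and in the laboratory the copying is of course carried out enzymatically, not in software.

```c
/* Base-pairing sketch: build the cDNA strand complementary to an mRNA
 * sequence (A<->T, U<->A, G<->C, C<->G).  The sequence is an arbitrary
 * illustration, not one taken from the text. */
#include <stdio.h>
#include <string.h>

static char complement(char base)
{
    switch (base) {
    case 'A': return 'T';   /* adenine pairs with thymine in the DNA copy */
    case 'U': return 'A';   /* uracil (RNA only) pairs with adenine */
    case 'G': return 'C';
    case 'C': return 'G';
    default:  return 'N';   /* unrecognised base */
    }
}

int main(void)
{
    const char *mrna = "AUGGCUUACGGA";
    char cdna[64];
    size_t len = strlen(mrna);

    for (size_t i = 0; i < len; i++)
        cdna[i] = complement(mrna[i]);
    cdna[len] = '\0';

    printf("mRNA: %s\ncDNA: %s\n", mrna, cdna);
    return 0;
}
```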
Light & Dark Learn about light & dark as well as light sources & reflections as you experiment with different objects in this fun activity. Does a mirror ball give out light or does it just reflect light from another source? What about a lamp, torch, animal or jacket? Play around with the objects and see what results you get. Learn the difference between light sources and reflections, which light sources give the brightest light, properties of sunlight and how wearing reflective strips can make cyclists stand out more so they are less likely to be hit by cars. Kids will enjoy experimenting with this interactive science game.
Basics of Induction Cooking
Induction cooking uses a powerful electromagnet to generate heat energy in a metal pot or pan. The cooktop has a smooth surface without raised burners. The flat cooking zones are driven by an electrical current through an induction coil: food cooks quickly, yet the cooking surface itself never feels hot. Very little heat is wasted during the cooking process, because the heat is generated in the vessel itself. This Old House adds that, "magnetic induction heats with an 85 to 90 percent efficiency rating, versus less than 70 percent for electric and about 50 percent for gas."

Why Ventilation is Necessary
Stove top cooking can result in wonderful things, such as a charred steak topped with sautéed onions and mushrooms. Unfortunately, the actual cooking process can cause damage to your home. Grease builds up on cooking surfaces and nearby countertops, while steam, left to concentrate and thicken, can rot walls. Ventilating hoods have fans to move the air and filters that trap grease and particles, keeping them from clogging ductwork and becoming a fire hazard.

Induction Cooktops and Ventilation
The argument for ventilation is greater for gas cooktops because gas cooking produces more steam and carbon monoxide than other cooking methods. Induction cooktops don't produce excess heat, but ventilation and air movement are still required for the steam and grease that cooking itself gives off. Movement of air is measured in CFM, or cubic feet per minute. The Home Ventilating Institute recommends that cooktops against a wall have hoods that can move between 40 and 100 CFM, while cooktops on an island are covered by hoods that move between 50 and 150 CFM. It is the size and location of your cooktop that dictate the size of the necessary ventilation.

Considering Induction Cooktops
Induction cooktops, while more energy efficient than both gas and electric, do have drawbacks. First, these cooktops require the use of ferrous metal cookware to complete the electromagnetic circuit. Ferrous metals contain iron and are magnetic. Pans made of steel and cast iron will work on an induction cooktop, while those made of aluminum, copper and glass will not. Second, cooks who are used to waiting for pans to heat and water to boil will have to adjust their timing when working with an induction cooktop.
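To put the quoted efficiency figures in perspective, here is a minimal C sketch comparing the input energy each cooktop type needs to deliver the same heat to a pan. The 335 kJ target (roughly the heat needed to bring a litre of water from 20 °C to the boil) and the 87 percent induction figure (the midpoint of the quoted 85 to 90 percent range) are assumed for illustration.

```c
/* Input energy needed to deliver the same heat to a pan, using the
 * approximate efficiencies quoted above.  The ~335 kJ target is an
 * assumed illustration (about enough to bring 1 litre of water from
 * 20 C to the boil). */
#include <stdio.h>

int main(void)
{
    const double heat_needed_kj = 335.0;
    const char  *type[]         = { "induction", "electric", "gas" };
    const double efficiency[]   = { 0.87, 0.70, 0.50 };

    for (int i = 0; i < 3; i++) {
        double input_kj = heat_needed_kj / efficiency[i];
        printf("%-9s: about %.0f kJ of input energy\n", type[i], input_kj);
    }
    return 0;
}
```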