Rotation refers to the turning of an object about a fixed axis and is commonly encountered in day-to-day life. The motion of a fan, the turning of a door knob and the opening of a bottle cap are a few examples of rotation. Rotation is also commonly observed as a component of more complex motions that combine rotation and translation. The wheel of a moving bicycle, the blade of a moving helicopter and the flight of a curveball are a few examples of combined rotation and translation. This module focuses on the kinematics of pure rotation. It begins by defining the angular variables and then describes the relation between these variables and the variables of linear motion.

Consider an object rotating about a fixed axis. Define a coordinate system such that the axis of rotation passes through the origin and is perpendicular to the x- and y-axes. Fig. 1 shows a view of the x-y plane. If a reference line is drawn through the origin and a fixed point in the object, the angle between this line and a fixed direction defines the angular position of the object. In Fig. 1, the angular position is measured from the positive x-direction.

Fig. 1: Rotating object - Angular position.

From geometry, the angular position can also be written as the ratio of the arc length s traveled by any point on the reference line, measured from the fixed direction, to the radius r of that point's path,

θ = s/r ... Eq. (1)

The angular displacement of an object is defined as the change in the angular position of the object. If the angular position changes from θ₁ to θ₂, the angular displacement is expressed as

Δθ = θ₂ − θ₁ ... Eq. (2)

Consider the same object described above.

Fig. 2: Rotating object - Angular displacement.

If the object rotates from an angular position θ₁ at time t₁ to an angular position θ₂ at time t₂, the average angular velocity is defined as

ω_avg = (θ₂ − θ₁)/(t₂ − t₁) = Δθ/Δt ... Eq. (3)

The instantaneous angular velocity ω, defined as the instantaneous rate of change of the angular position with respect to time, can then be written as the limit of the average angular velocity as Δt approaches 0,

ω = lim(Δt→0) Δθ/Δt = dθ/dt ... Eq. (4)

If the angular velocity of the object changes from ω₁ at time t₁ to ω₂ at time t₂, the average angular acceleration is defined as

α_avg = (ω₂ − ω₁)/(t₂ − t₁) = Δω/Δt ... Eq. (5)

The instantaneous angular acceleration α, defined as the instantaneous rate of change of the angular velocity with respect to time, can then be written as the limit of the average angular acceleration as Δt approaches 0,

α = lim(Δt→0) Δω/Δt = dω/dt ... Eq. (6)

Angular Acceleration and Angular Velocity as Vectors

Mathematically, both angular velocity and angular acceleration behave as vectors. The "right-hand rule" is used to find the direction of these quantities. For the direction of the angular velocity ω, curl the fingers of your right hand around the axis of rotation so that they point in the direction of rotation; your extended thumb then points in the direction of the vector. In more mathematical terms, the angular velocity unit vector can be written in terms of the cross product of the position vector r of any point on the object and its instantaneous velocity v,

ω̂ = (r × v)/|r × v| ... Eq. (7)

Similarly, the angular acceleration unit vector can be written in terms of the cross product of the position vector r and the instantaneous acceleration a,

α̂ = (r × a)/|r × a| ... Eq. (8)

(The centripetal component of a is parallel to r and therefore drops out of the cross product, so Eq. (8) picks out the direction associated with the tangential acceleration.)

The Relation Between Linear and Angular Variables

If a particle, or a point in an object, rotating at a constant distance r from the axis of rotation rotates through an angle θ, as shown in Fig. 1, the distance traveled is

s = θ·r ... Eq. (9)
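As a small illustration of these definitions (not part of the original worksheet), the following Python sketch evaluates the angular velocity and acceleration numerically by finite differences; the sample angular-position function θ(t) = 1.1·t² is made up for demonstration and is not the (unreproduced) function used in Example 1 below:

```python
import numpy as np

r = 0.02                       # radius of the point's path, m (arbitrary choice)
t = np.linspace(0.0, 3.0, 3001)
theta = 1.1 * t**2             # sample angular position theta(t), rad (made up)

omega = np.gradient(theta, t)  # Eq. (4): omega = d(theta)/dt
alpha = np.gradient(omega, t)  # Eq. (6): alpha = d(omega)/dt
s = theta * r                  # Eq. (9): arc length traveled by the point

print(f"theta(3 s) = {theta[-1]:.3f} rad")    # 9.900
print(f"omega(3 s) = {omega[-1]:.3f} rad/s")  # ~6.600
print(f"alpha(3 s) = {alpha[-1]:.3f} rad/s^2")# ~2.200
print(f"s(3 s)     = {s[-1]:.4f} m")          # 0.1980
```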
Differentiating both sides of Eq. (9) with respect to time gives

ds/dt = (dθ/dt)·r ... Eq. (10)

which can be rewritten as

v = ω·r ... Eq. (11)

where v is the linear speed and ω is the angular speed. Differentiating both sides of Eq. (11) with respect to time once more, we get

dv/dt = (dω/dt)·r ... Eq. (12)

which can be rewritten as

a_t = α·r ... Eq. (13)

It must be noted that Eq. (13) gives the tangential acceleration, since it represents the rate of change of the speed of the object, which is the rate of change of the tangential component of the velocity of the object. The radial component is given by the following expression for the centripetal acceleration, the derivation of which is shown in the Uniform Circular Motion module,

a_c = v²/r = ω²·r ... Eq. (14)

In Vector Form

As mentioned in the previous section, the angular quantities also behave as vectors. The relation between linear and angular velocity in vector form is

v = ω × r ... Eq. (15)

and the relation between linear and angular acceleration in vector form is

a = α × r + ω × v ... Eq. (16)

where the first term is the tangential acceleration and the second term is the centripetal acceleration.

Example 1: Rotating Disk

Problem Statement: A compact disk is spinning about its central axis. The angular position of a point on the disk as a function of time is given by a quadratic function θ(t), where t is measured in seconds and θ is measured in radians. The distance of this point from the axis of rotation is 0.02 m.

a) What are the angular displacement and the distance traveled at t = 3 seconds? How many complete rotations has the disk completed?
b) What are the angular velocity and angular acceleration at t = 3 seconds?

Data: r = 0.02 m

Part a) Determining the angular displacement at 3 seconds.

The angular displacement at 3 seconds is

Δθ = 3.4 rad

Using Eq. (9), the distance traveled is

s = Δθ·r = 3.4 × 0.02 = 0.068 m

Therefore, after 3 seconds, the point on the disk has an angular displacement of 3.4 rad and travels 0.068 m. Since this angle is in radians, dividing it by 2π gives the number of rotations,

3.4/(2π) ≈ 0.54113

Therefore, at 3 seconds, the point on the disk has not yet completed a full rotation.

Angular position plot

Part b) Determining the angular velocity and angular acceleration at 3 seconds.

The function for the angle can be differentiated successively to get an equation for the angular velocity and an equation for the angular acceleration. Evaluating the first derivative at t = 3 s gives the angular velocity (in rad/s), and the angular acceleration, which is constant, is 2.2 rad/s².

Equations of Motion for Constant Angular Acceleration

The equations of motion for rotation with a constant angular acceleration have the same form as the equations of motion for linear motion with constant acceleration. The following table contains the main equations.

Table 1: Constant-acceleration equations of motion for linear and rotational motion.

Linear Equation          | Angular Equation           | Equation Number
v = v₀ + a·t             | ω = ω₀ + α·t               | Eq. (17)
x = x₀ + v₀·t + ½·a·t²   | θ = θ₀ + ω₀·t + ½·α·t²     | Eq. (18)
v² = v₀² + 2·a·(x − x₀)  | ω² = ω₀² + 2·α·(θ − θ₀)    | Eq. (19)
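As a quick numerical sketch of the angular equations in Table 1 (not part of the original worksheet; the initial values below are arbitrary), a small Python helper can evaluate Eqs. (17) and (18) and use Eq. (19) as a consistency check:

```python
def angular_state(theta0, omega0, alpha, t):
    """Constant angular acceleration: returns (theta, omega) per Eqs. (17)-(18)."""
    omega = omega0 + alpha * t                        # Eq. (17)
    theta = theta0 + omega0 * t + 0.5 * alpha * t**2  # Eq. (18)
    return theta, omega

# Example: starting from rest with alpha = 2.2 rad/s^2 for 3 s
theta, omega = angular_state(0.0, 0.0, 2.2, 3.0)
print(theta, omega)  # 9.9 rad, 6.6 rad/s

# Eq. (19) as a check: omega^2 = omega0^2 + 2*alpha*(theta - theta0)
assert abs(omega**2 - 2 * 2.2 * theta) < 1e-9
```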
Example 2: Race Car (with MapleSim)

Problem Statement: A decelerating race car enters a high-speed turn of radius 100 m with a speed of 200 km/h. The speed of the car is decreasing at a rate of 5 m/s².

a) What is the angular displacement of the car after 2 seconds?
b) What are the magnitude of the angular velocity and the speed of the car after 2 seconds?
c) What is the magnitude of the total acceleration of the car at the start of the turn?

Converting the units to m/s: v₀ = 200 km/h = 55.556 m/s.

Part a) Determining the angular displacement of the car after 2 seconds.

Eq. (18) can be used to find the angular displacement. First, the initial angular velocity and the angular acceleration need to be calculated. Using Eq. (11), the initial angular velocity is

ω₀ = v₀/r = 55.556/100 = 0.55556 rad/s

and, using Eq. (13), the magnitude of the angular acceleration is

α = a_t/r = 5/100 = 0.05 rad/s²

Since the car is decelerating, Eq. (18) gives the angular displacement as

Δθ = ω₀·t − ½·α·t² = 0.55556 × 2 − 0.5 × 0.05 × 2² = 1.0111 rad

Therefore, after 2 seconds, the car has an angular displacement of 1.01 rad.

Part b) Determining the angular velocity and speed of the car after 2 seconds.

The angular velocity of the car after 2 seconds can be calculated using Eq. (17),

ω = ω₀ − α·t = 0.55556 − 0.05 × 2 = 0.45556 rad/s

The speed then follows from Eq. (11),

v = ω·r = 0.45556 × 100 = 45.556 m/s

Therefore, after 2 seconds, the angular velocity of the car is 0.46 rad/s and the speed of the car is 45.56 m/s. The following plot shows the speed of the car (in km/h) vs. time.

Speed (in km/h) vs. Time

Part c) Determining the magnitude of the total acceleration of the car at the start of the turn.

The total acceleration of the car is a combination of the tangential acceleration and the centripetal acceleration. The tangential acceleration is the rate of change of the speed of the car and is given in the problem, a_t = 5 m/s². The centripetal acceleration can be calculated using the speed of the car as it enters the turn, v₀ = 55.556 m/s. Using Eq. (14), the centripetal acceleration is

a_c = v₀²/r = 55.556²/100 = 30.864 m/s²

The magnitude of the total acceleration is then

|a| = √(a_t² + a_c²) = √(5² + 30.864²) = 31.267 m/s²

and, in terms of g, the acceleration is 31.267/9.81 = 3.1872.

Therefore, at the start of the turn, the magnitude of the acceleration of the car is 3.19 g.

Constructing the Model

Step 1: Insert Components

Drag the following components into the workspace:

Table 2: Components and locations

Component | Location
Constant | Signal Blocks > Common
Integrator (2 required) | Signal Blocks > Common
Motion driver | 1-D Mechanical > Rotational > Motion Drivers
Fixed Frame | Multibody > Bodies and Frames
Revolute | Multibody > Joints and Motions
Rigid Body Frame | Multibody > Bodies and Frames
Visualization component | Multibody > Visualization
Visualization component | Multibody > Visualization
Sensor | Multibody > Sensors
Operator block | Signal Blocks > Mathematical > Operators

Step 2: Connect the Components

Connect the components as shown in the following diagram (the dashed boxes are not part of the model; they have been drawn on top to help make clear what the different components are for).

Fig. 3: Component diagram

It is also possible to replace the spherical geometry with a CAD model in STL format for a more attractive visualization.

Step 3: Create Parameters

Add a parameter block using the Add a parameter block icon in the workspace toolbar, then double-click the icon once it is placed in the workspace. Create parameters for the tangential acceleration, the turn radius and the initial speed (as shown in Fig. 4).

Fig. 4: Parameter Block Settings

Step 4: Adjust the Parameters

Return to the main diagram and, with a single click on the Parameters icon, enter the following parameters (see Fig. 5) in the Inspector pane.

Fig. 5: Parameters

Note: Steps 3 and 4 are not essential and can be skipped. The parameter values can be entered directly for each component instead of using variables. However, creating a parameter block as described above makes it easy to change the parameters repeatedly and experiment with the model to see the effects on the simulation results.

Step 5: Change the Parameters and Initial Conditions for the Position Command

1. Return to the main diagram, click the Constant component and enter its parameter value.
2. Click the first Integrator component (the one that integrates the signal from the Constant component) and enter its initial value.

Step 6: Set up the Car Center of Mass Visualization

1. Click the Revolute component and change the axis of rotation to [0, 1, 0].
2. Then click the Rigid Body Frame component and enter [r, 0, 0] for the x, y, z offset.

Step 7: Run the Simulation

1. Place three Probes as shown in Fig. 3 and change the Simulation duration to 2 seconds in the Settings pane.
2. Click Run Simulation.
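Independently of MapleSim, the worked values in Parts a) to c) can be cross-checked with a few lines of Python (a sketch using only the closed-form equations above, not the simulation itself):

```python
import math

r   = 100.0      # turn radius, m
v0  = 200 / 3.6  # initial speed: 200 km/h -> 55.556 m/s
a_t = 5.0        # tangential deceleration, m/s^2
t   = 2.0        # elapsed time, s

omega0 = v0 / r   # Eq. (11): initial angular velocity
alpha  = a_t / r  # Eq. (13): angular deceleration

dtheta = omega0 * t - 0.5 * alpha * t**2  # Eq. (18), decelerating
omega  = omega0 - alpha * t               # Eq. (17)
v      = omega * r                        # Eq. (11)

a_c   = v0**2 / r             # Eq. (14): centripetal acceleration at turn entry
a_tot = math.hypot(a_t, a_c)  # magnitude of the total acceleration

print(f"angular displacement = {dtheta:.4f} rad")   # 1.0111
print(f"angular velocity     = {omega:.4f} rad/s")  # 0.4556
print(f"speed                = {v:.2f} m/s")        # 45.56
print(f"total acceleration   = {a_tot:.2f} m/s^2 = {a_tot/9.81:.4f} g")  # 31.27, 3.1872
```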
The following image shows the 3-D view of the simulation.

Fig. 6: A 3-D view of the race car simulation.

The following video shows the 3-D visualization of the simulation with a CAD model of a car.

Video 1: A 3-D view of the race car simulation with a CAD model attached.

Reference: Halliday, D., Resnick, R., and Walker, J. Fundamentals of Physics, 7th Edition. Hoboken, NJ: John Wiley & Sons, Inc., 2005.
Explain telemedicine, and describe how it is typically being used in either a rural or an urban setting at the present time. For the setting you chose, what are telemedicine’s overall strengths? – What are its overall weaknesses? Next, select an allied health profession and describe how telemedicine is now or could affect patient care in that field. In your responses to colleagues, select posts that discuss different settings and professions (if possible) and offer a fresh perspective or a novel approach as to how telemedicine could be more fully used. – Use at least 2 APA references – Answer questions straight to the point

Telemedicine is a branch of healthcare that utilizes telecommunication technology to provide medical services remotely. It encompasses a broad range of applications, including video consultations, remote monitoring, and electronic health records. As such, telemedicine has the potential to greatly enhance healthcare access and delivery, particularly in underserved areas such as rural settings.

In rural areas, telemedicine is being used to address the challenges associated with geographic distance and limited healthcare resources. For example, patients in rural communities often have to travel long distances to reach a healthcare facility, which can be time-consuming and costly. Telemedicine allows them to access healthcare services remotely, reducing the need for travel and making healthcare more convenient and accessible.

One way telemedicine is used in rural settings is through teleconsultations. This involves connecting patients with healthcare professionals through video conferencing technology. Patients can communicate their medical concerns, receive advice, and even have their vital signs monitored remotely. This helps to bridge the gap between patients in remote areas and healthcare providers, ensuring that they receive timely and appropriate care.

Another application of telemedicine in rural settings is telemonitoring. This involves remotely monitoring patients’ vital signs, such as heart rate and blood pressure, through wearable devices or sensors. The data is then transmitted to healthcare professionals who can assess the patient’s condition and provide appropriate interventions. This is especially useful for patients with chronic conditions who require regular monitoring but may find it difficult to visit a healthcare facility frequently.

Telemedicine offers several strengths in rural settings. Firstly, it improves access to healthcare, particularly for individuals who are geographically isolated or have limited mobility. Patients no longer have to travel long distances to receive medical care, saving time and money. This also reduces the burden on healthcare facilities in rural areas, which often have limited resources and healthcare professionals.

Secondly, telemedicine can enhance the efficiency of healthcare delivery. By connecting patients and providers remotely, healthcare services can be delivered in a more timely manner. This can reduce waiting times, allowing patients to receive care faster and potentially preventing the progression of their conditions.

Thirdly, telemedicine can improve patient outcomes through enhanced care coordination. Teleconsultations and telemonitoring enable healthcare providers to collaborate and share information more easily. This can lead to better-informed decisions and improved coordination of care, ultimately resulting in better patient outcomes.

However, telemedicine also has some weaknesses in rural settings.
One of the major challenges is the lack of reliable internet connectivity in some rural areas. Without a stable internet connection, the effectiveness of telemedicine is compromised. Additionally, there may be limited availability of the necessary technological infrastructure and equipment in these areas, making it difficult to implement telemedicine initiatives.

Moreover, telemedicine is not suitable for all types of medical conditions and situations. Some medical conditions require physical examination, diagnostic tests, or procedures that cannot be conducted remotely. In such cases, telemedicine may be limited in its ability to provide comprehensive care.

One allied health profession that telemedicine has the potential to greatly affect is physical therapy. Physical therapists play a critical role in the management and rehabilitation of patients with musculoskeletal conditions, neurological disorders, and other physical impairments. Traditionally, physical therapy has relied heavily on in-person visits for assessments, treatments, and follow-up visits. However, telemedicine offers new opportunities to expand patient access to physical therapy services.

Telemedicine can be used in physical therapy to provide remote assessments and consultations. Patients can communicate their symptoms and concerns to a physical therapist via video conferencing, allowing for a preliminary evaluation. Based on this evaluation, the physical therapist can provide recommendations for exercises, stretches, and modifications to daily activities. Telemonitoring can also be utilized to remotely assess and track patients’ progress. This allows for ongoing communication and adjustments to treatment plans without the need for frequent in-person visits.

In summary, telemedicine holds tremendous potential in both rural and urban settings for enhancing healthcare access and delivery. In rural areas, it can overcome distance and resource challenges, improving access to care for underserved populations. Telemedicine’s strengths include improved access to care, increased efficiency, and enhanced care coordination. However, weaknesses such as unreliable connectivity and limitations in the scope of care need to be addressed. In the field of physical therapy, telemedicine can revolutionize patient care by enabling remote assessments, consultations, and monitoring.
February 5, 2020 - Barbara Vonarburg, NCCR PlanetS

Uranus and Neptune are the outermost planets of our solar system. In terms of their size, composition and distance from the sun, they are similar and clearly differ from the inner terrestrial planets and the gas giants Jupiter and Saturn. “However, there are also striking differences between the two planets that require explanation,” says Christian Reinhardt, who studied Uranus and Neptune together with Alice Chau, Joachim Stadel and Ravit Helled, all PlanetS members working at the Institute for Computational Science of the University of Zurich. “For example, Uranus and its major satellites are tilted about 97 degrees with respect to the solar plane, and the planet effectively rotates retrograde with respect to the sun,” explains Joachim Stadel.

Furthermore, the satellite systems are different. Uranus’ major satellites are on regular orbits and tilted with the planet, which suggests that they formed from a disk, similar to Earth’s moon. Triton, Neptune’s largest satellite, is instead on a highly inclined orbit and is therefore most likely a captured object. Finally, the two planets could also be very different in terms of heat fluxes and internal structure.

Similar formation – different collisions

“It is often assumed that both planets formed in a similar way,” explains Alice Chau. This would readily explain their very similar masses, mean orbital separation from the sun and possibly composition. But where do the differences come from? Since impacts are common during the formation and early evolution of planetary systems, a giant impact was proposed as the origin of this dichotomy. However, prior work either investigated only impacts on Uranus or was limited by strong simplifications in the impact calculations.

The team of UZH scientists has now for the first time investigated a range of different collisions on both planets using high-resolution computer simulations on the CSCS supercomputer "Piz Daint". Starting with very similar pre-impact versions of Uranus and Neptune, they showed that an impact of a body with one to three Earth masses on each planet could explain the dichotomy. In the case of Uranus, a grazing collision can tilt the planet without affecting the planet’s interior. A head-on collision on Neptune, on the other hand, strongly affects the interior but does not result in the formation of a disk, and is therefore consistent with the absence of large moons on regular orbits. Such a collision, which mixes up the planet’s deep interior, is supported by the larger observed heat flux of Neptune.

“We’ve clearly shown that an initially similar formation pathway for Uranus and Neptune can result in the dichotomy observed in the properties of these fascinating outer planets,” concludes Ravit Helled. Future NASA and ESA missions to Uranus and Neptune can provide new key constraints on such a scenario, improve our understanding of the formation of the solar system and provide a better understanding of exoplanets in this mass regime.

The article was first published at UZH News.

(Image on top: Neptune photographed by Voyager 2. - NASA/JPL)

Reinhardt, C., Chau, A., Stadel, J., & Helled, R. Bifurcation in the history of Uranus and Neptune: the role of giant impacts. Monthly Notices of the Royal Astronomical Society, stz3271. https://doi.org/10.1093/mnras/stz3271
A Gravitational Wave Observatory on the Moon Might “Hear” 70% of the Observable Universe

Gravitational-wave astronomy is set to revolutionize our understanding of the cosmos. In only a few years it has significantly enhanced our understanding of black holes, but it is still a scientific field in its youth. That means there are still serious limitations to what can be observed.

Currently, all gravitational-wave observatories are based on Earth. This makes the detectors easier to build and maintain, but it also means the observatories are plagued by background noise. Observatories such as LIGO and Virgo work by measuring the shift in distance between mirrors as a gravitational wave passes through the observatory. This shift is extremely small: for mirrors placed 4 kilometers apart, it is a mere fraction of the width of a proton. The vibrations of a truck driving down a nearby road will shift the mirrors far more than that. So LIGO and Virgo use statistics and models of black hole mergers to distinguish a true signal from a false one.

(Figure: Theoretical observation range for GLOC. Credit: Jani, et al.)

Because of terrestrial background noise, current observatories focus on the high-frequency gravitational waves (10-1000 Hz) generated by black hole mergers. There has been discussion of building a space-based gravitational-wave observatory, such as LISA, which would observe low-frequency gravitational waves, such as those generated by early cosmic inflation. But many gravitational waves are in the intermediate range. To detect these, a recent study proposes building a gravitational-wave observatory on the Moon.

The Moon has long been a coveted location for astronomers. Optical telescopes on the Moon wouldn’t suffer from atmospheric blurring, and unlike space-based telescopes such as Hubble and Webb, they wouldn’t be limited by the size of your launch rocket. Most of the ideas proposed have been very hypothetical, but as we look toward a human return to the Moon in the next decade they are becoming less so. NASA is already studying the construction of a radio telescope on the far side of the Moon. Building a lunar gravitational-wave observatory would be significantly more challenging, but not impossible.

This recent study proposes a Gravitational-wave Lunar Observatory for Cosmology (GLOC). Rather than worrying about how such an observatory would be constructed, the study focuses on the sensitivity and observational limits of such an observatory. As you might expect, a lunar observatory wouldn’t suffer from the background vibrations that trouble Earth observatories. As a result, it could have a baseline four times longer than LIGO’s. This would give it sensitivity to gravitational-wave frequencies as low as a tenth of a hertz, allowing it to observe everything from stellar-mass binary mergers to mergers of intermediate-mass black holes. But it would also be able to observe the same type of mergers as LIGO and Virgo at much greater distances: distances so great that the gravitational waves have become strongly redshifted.

If constructed, GLOC would be able to use distant merger events to measure the rate of cosmic expansion across billions of years. This would be perhaps its greatest strength, because it would allow us to measure the Hubble parameter across much of cosmic history. We would finally learn whether cosmic expansion is part of the structure of spacetime, or whether it varies in time and space. Of course, the GLOC proposal is purely hypothetical at this point.
It will be decades, at least, before we could build such an observatory. But this study shows that building such a telescope would be worth the effort.

Reference: Jani, Karan, and Abraham Loeb. “Gravitational-wave lunar observatory for cosmology.” Journal of Cosmology and Astroparticle Physics 2021.06 (2021): 044.
Mesopotamian Religion, also known as Assyro-Babylonian religion, included a series of belief systems of the early civilizations of the Euphrates valley. The development of the religion of this region was not only important in the history of the people who practiced it, but also strongly influenced the Semitic peoples from whom the Hebrew religious tradition evolved. Moreover, many of the older Mesopotamian religious ideas worked their way west into Greek and Roman culture as well. Mesopotamian religion left a profound mark on human civilization: both the Judeo-Christian and the Graeco-Roman traditions have inherited much from the religion of the "Land between the Rivers."

The periods in the development of the Babylonian-Assyrian religion may be divided as follows:

- The oldest period extended from c. 3500 B.C.E. to the time of Hammurabi (c. 1700 B.C.E.). From this period, few historical records have been preserved. The deities later known as the Anunnaki may have been worshiped individually in various population centers. As major centers came to dominate the region, their deities came to be more universally recognized and to assimilate the characteristics of some of the lesser gods. Several major deities arose, such as Inanna/Ishtar, Anu, Enki, Enlil, and others. The great city of Uruk emerged as a major religious center. Other centers included Nippur, Ur, Sippar, Eridu, and Agade. The greatest religious-literary event of the era was the creation of the Epic of Gilgamesh, the world's oldest surviving epic poem.

- The post-Hammurabic period in Babylonia ranged between 1700 and 1365 B.C.E. Hammurabi united the Euphratean states, and the god Marduk began to emerge as the supreme deity, though by no means the only god. His heroic rise to power and recognition as the king of the gods is dramatically portrayed in the myth known as the Enuma Elish.

- The Assyrian period lasted from c. 1365 B.C.E. to the destruction of Nineveh in 612 B.C.E. The Mesopotamian pantheon remained little changed during this period, although at times the supreme deity was seen to be Ashur rather than Marduk. Ishtar remained the most important female deity. Astral theology emerged, with Marduk or Ashur as the central divinity who assigned the various other gods their respective places in the universe.

- The neo-Babylonian period began with Nabopolassar (625-605 B.C.E.) and ended with Cyrus's conquest of Babylon and Babylonia in 539 B.C.E. By the sixth century B.C.E., the gods Anu, Enlil, and Ea (Enki) formed a triad ruling the universe, and a well-developed astral theology had emerged, related to today's astrological systems. Marduk remained central, and it was to him that Cyrus dedicated his policy of increased religious freedom, supporting the return of plundered religious items to their respective sanctuaries and the rebuilding of local and national temples, including the Temple of Jerusalem.

Early Mesopotamian religion

As outsiders looking in on an ancient civilization whose diverse religious traditions died out long ago, scholars have struggled to construct a comprehensive picture of Mesopotamian religion without resorting to a great deal of speculation or oversimplification. This problem led one expert in the field, A. Leo Oppenheim, to conclude that a history of Mesopotamian religion "should not be written." For one thing, the sources are relatively scarce, and they are scattered over a wide area and an even wider span of time.
What may be a true statement about Mesopotamian religion in one period may thus be misleading when applied to a later time. A god that was a local deity prior to 2000 B.C.E. may have become a major regional god later on, and it is difficult to say with certainty how far a deity's influence was felt until a relatively late period. The study of Mesopotamian religion is also complicated, especially in its early phase, by the fact that similar deities are often given different names in the Sumerian and Akkadian languages. Non-experts have trouble realizing, for example, that Inanna and Ishtar, or Enki and Ea, are names of just two deities, not four. In addition, over a period of millennia, as the gods evolved from local deities to more universal ones, they sometimes took on the attributes of older gods or of each other. Thus, even the character of the gods often involves considerable speculation.

A divine genealogy

The early deities of Mesopotamia were later referred to as the Anunnaki gods—a group of Sumerian and Akkadian deities related to, and in some cases overlapping with, the Annuna (the "Fifty Great Gods"). The head of the Anunnaki council, in later mythology, was Anu. The Anunnaki were seen as the children of Anu (heaven) and Ki (earth), brother and sister gods, themselves the children of Anshar and Kishar (Skypivot and Earthpivot, the celestial poles). Anshar and Kishar in turn were the children of Lahmu and Lahamu ("the muddy ones"), whose parents were Apsû (fresh water) and Tiamat (salt water).

In the Enuma Elish, Tiamat is the sea goddess, personified as a female sea monster and an embodiment of primordial chaos. She gives birth to the first generation of gods, but she later makes war upon them and is split in two by the storm-god Marduk, who uses her body to form the heavens and the earth. However, the text of the Enuma Elish is relatively late, and it is difficult to know much about how the Anunnaki may have been conceived of or worshiped in earlier centuries. Moreover, although many early Mesopotamian religious temples and monuments have been discovered, texts and inscriptions are relatively rare. Among the religious texts that have been discovered, three types have been identified: prayers, rituals, and mythologies. Temples and monuments also reveal something of the religious culture and practice, while icons and other art elaborate on religious ritual and mythology.

There is evidence that religious temples and rituals played an important part in Mesopotamian life quite early, preceding even the advent of writing. Temples normally occupied the central and highest ground in a settlement and possessed the town's most sophisticated and high-quality artifacts.

Uruk was one of the oldest and most important cities of ancient Sumer. According to the Sumerian king list, Uruk was founded by Enmerkar, who brought the official kingship with him. In the epic Enmerkar and the Lord of Aratta, he is also said to have constructed the famous temple called E-anna, dedicated to the worship of Inanna (later called Ishtar). Uruk was also the capital city of the probably historical king Gilgamesh, hero of the famous Epic of Gilgamesh. According to the Bible (Genesis 10:10), Erech (Uruk) was the second city founded by Nimrod in Shinar. The White Temple of Uruk contained several separate shrines within the confines of its walls, which measured 400 by 200 meters. In addition to temples, the stepped-stone towers known as ziggurats were also common.
One of these is no doubt the basis for the biblical story of the Tower of Babel. The original seat of the worship of Anu, the Sumerian god of heaven (or sky), may also have been in Uruk. Various other deities were associated with other cities.

The impact of Hammurabi

A sharp distinction can be made between the pre-Hammurabic age and the post-Hammurabic age. Before 1700 B.C.E., there were a number of religious centers in addition to Uruk: Nippur, Kutha (Cuthah), Ur, Sippar, Shirgulla (Lagash), Eridu, and Agade. Each tended to honor a specific god, who was looked upon as the chief deity, around whom were gathered a number of minor deities and with whom there was invariably associated a female consort. The period around 1700 B.C.E., when Hammurabi effected the union of the Euphratean states, marks the beginning of a new epoch in the religion of the Euphrates valley.

In the post-Hammurabic period, the pantheon assumed distinct shape. The deity Marduk began to emerge as the central and supreme deity, though by no means the only god. Paralleling the centralization of political administration, the gods of the chief religious centers, together with those of the minor local shrines, formed a group around Marduk. Despite decided progress toward a monotheistic conception of the divine government of the universe, the recognition of a large number of gods and their consorts by the side of Marduk remained firmly embedded doctrine in the Babylonian religion, as it did in the Assyrian faith. An important variation, however, was that the role of head of the pantheon in Assyria was held by Ashur rather than Marduk.

Earlier, the goddess Inanna (or Ishtar) had come to be widely honored, as had male counterparts to the goddess, such as Enlil and Enki. Under Hammurabi's reign, however, Marduk—the patron deity of the future capital, Babylon—became the clear head of the Babylonian pantheon. Associated with Marduk was a female consort called Sarpanit, who may have been identified with Ishtar/Inanna in the popular imagination. Grouped around this pair, as princes around a throne, were the chief deities of the older religious centers: Ea and Damkina of Eridu; Nabu and Tashmit of Borsippa; Nergal and Allatu of Kutha; Shamash of Sippar; Sin and Ningal of Ur; as well as other deities whose locations are unknown.

In this process of accommodating ancient prerogatives to new conditions, the attributes belonging specifically to the older gods were transferred to Marduk, who thus became an eclectic and many-faceted power, taking on the traits of Enlil (wind, rain, fertility), Enki/Ea (intelligence, water), Shamash (the sun), Nergal (the underworld), Adad (storm), and Sin (the moon). The epic mythology contained in the text of the Enuma Elish describes the legendary version of Marduk's rise to power over the older gods. Scholars theorize that older incantations originally associated with Ea were re-edited so as to give Marduk supreme power over demons, witches, and sorcerers. Hymns and lamentations composed for the cult of Enlil, Shamash, and Adad were transformed into paeans and appeals to Marduk. Meanwhile, the ancient myths arising in the various religious and political centers underwent a similar process of adaptation to changed conditions. Besides the chief deities and their consorts, various minor ones, representing patron gods of less important localities, were added at one time or another to the court of Marduk.
Thus the Enuma Elish closes with a list of the myriad divine titles by which Marduk would be known after his great victory. However, some lesser deities still retained their independence. For example, Anu was still the god of the high heavens, and Ishtar still symbolized fertility and vitality in general.

Rivalry between Ashur and Marduk

Originally the patron god of the city which bore his name, Ashur came to hold the same position in the north that Marduk occupied in the south. The religious predominance of the great city of Babylon served to gain recognition for Marduk even on the part of the Assyrian rulers. Even when the Assyrians became predominant, they appointed their sons or brothers governors of Babylonia, and in the long array of titles that the kings gave themselves, a special phrase was set aside to indicate their mastery over Babylonia. To "take the hand of Bel-Marduk" was an essential ritual preliminary to exercising authority in the Euphrates valley.

Marduk and Ashur became rivals only when Babylonia began to give the Assyrians trouble. In 689 B.C.E., the Assyrian king Sennacherib, whose patience had been exhausted by the difficulties encountered in maintaining peace in the south, besieged and destroyed the city of Babylon. He brought the city's statue of Marduk to Nineveh to symbolize the god's subordination. His grandson, Assur-bani-pal, with a view to reestablishing amicable relations, restored the statue to its place in Babylon and performed the time-honored ceremony of "taking the hand of Bel" to demonstrate his homage to the ancient head of the Babylonian pantheon.

Other than the substitution of Ashur for Marduk, the Assyrian pantheon was basically the same as that in the south, though some of the gods were endowed with attributes which differed slightly from those of their southern counterparts. The war-like nature of the Assyrians was reflected in their conceptions of the gods, who stood by the side of the great protector Ashur. The cult and ritual in the north likewise followed the models set up in the south. Hymns composed for the temples of Babylonia were transferred to Assur, Calah, Harran, Arbela, and Nineveh in the north. Myths and legends also found their way to Assyria in modified form. For all practical purposes, however, the religion of Assyria was very similar to that practiced in the south.

Triads of gods

Much like El in Canaan, Anu remained more or less a distant deity during the various periods of the Babylonian-Assyrian religion. By the sixth century B.C.E., Anu's position as the chief god found expression in his portrayal as the first figure of a triad consisting of Anu, Enlil, and Ea (also called Enki), who reigned over the heavens, the earth, and the watery expanse, respectively. The mother goddess, Ishtar, remained a powerful presence in her own right, often associated with male deities as their consort or as a fierce warrior and protector. She was frequently associated with Marduk, and still more closely with the chief god of Assyria, Ashur, who occupied in northern Mesopotamia a position similar to that of Marduk in the south. By the side of the first triad, consisting of Anu, Enlil, and Ea, was sometimes found a second triad composed of Shamash, Sin, and Ishtar. As the first triad symbolized the three divisions of the universe—the heavens, the earth, and the watery element—so the second represented the three great forces of nature: the sun, the moon, and the life-giving power.
In addition, Ishtar at times appears in hymns and myths as the general personification of nature and fertility. A seventh great Sumerian deity, the mother goddess Ninhursag/Ninmah, seems to have declined in popularity as Ishtar's popularity increased.

Astral theology served as the theoretical substratum of the Babylonian religion and was equally pronounced in the religious system of Assyria. Its essential feature is the assumption of a close link between the movements going on in the heavens and occurrences on earth. This led to identifying the gods and goddesses with heavenly bodies and to assigning the seats of all the deities in the heavens. Marduk, the supreme deity, was portrayed as the one who set the celestial bodies in their places and ruled over them all. The personification of the two great luminaries—the sun and the moon (Shamash and Sin)—was the first step in the unfolding of this system. The process continued by identifying the planet Venus with Ishtar, Jupiter with Marduk, Mars with Nergal, Mercury with Nabu, and Saturn with Ninurta. To read the signs of the heavens was to understand the meaning of occurrences on earth. With this accomplished, it was also possible to foretell what events were portended by the positions and relationships of the sun, moon, planets, and certain stars. Myths that symbolized changes in season or occurrences in nature were projected onto the heavens, which were mapped out to correspond to the divisions of the earth. All the gods, great and small, had their places assigned to them in the heavens. Events, including political history, were interpreted in terms of astral theology. Worship, originally an expression of animistic beliefs, took on the character of an "astral" interpretation of occurrences and doctrines, which left its trace in incantations, omens, and hymns. It also gave birth to astronomy, which was assiduously cultivated because a knowledge of the heavens was the very foundation of the system of belief unfolded by the priests of Babylonia and Assyria.

The manner in which the doctrines of the religion conformed to the all-pervading astral theory can be illustrated by the development of the concept of the three gods Anu, Enlil, and Ea. Anu became the power presiding over the heavens. Enlil ruled the earth and the atmosphere immediately above it, while Ea ruled over the deep. With the transfer of all the gods to the heavens, and under the influence of the doctrine of the correspondence between the heavens and the earth, Anu, Enlil, and Ea became the three "ways" of the divine realm. The "ways" appear in this instance to have been the designation of the ecliptic circle, which was divided into three sections or zones—a northern, a middle, and a southern zone, with Anu assigned to the first, Enlil to the second, and Ea to the third zone.

Religious practice and rituals

The most noteworthy outcome of this system in the realm of religious practice was the growth of a sophisticated method of divining the future by the observation of phenomena in the heavens. In the royal collection of cuneiform literature—made by King Assur-bani-pal of Assyria (668-626 B.C.E.) and deposited in his palace at Nineveh—the omen collections connected with the astral theology of Babylonia and Assyria form the largest class.
There are also indications that the extensive texts dealing with divination through the liver of sacrificial animals, based as they are on the primitive view which regarded the liver as the seat of life and of the soul, were brought into connection with astral divination.

Less influenced by the astral-theological system are the older incantation texts. These included formulae and prayers produced in different religious centers and updated to conform to the tendency to centralize the worship of Marduk and his female counterpart in the south, and of Ashur and Ishtar in the north. Incantations originally addressed to Ea as the god of the watery element and to Nusku as the god of fire were likewise transferred to Marduk. This was done by having Ea confer the powers of the father on Marduk as his son, and by making Nusku a messenger between Ea and Marduk.

Ritual was a chief factor in the celebration of festival days and was relatively free from traces of astral theology. The more or less elaborate ceremonies prescribed for the occasions when the gods were approached are directly connected with the popular elements of the religion. Animal sacrifice, libations, ritual purification, sprinkling of water, and symbolic rites of all kinds, accompanied by short prayers, represent a religious practice which is older than any theology and survives the changes which the theoretical substratum of the religion undergoes. References in the Epic of Gilgamesh and elsewhere to the priestesses of Ishtar as sacred prostitutes indicate the tradition of hieros gamos, in which the king or other representatives of the male principle would engage in sexual acts with priestesses, as representatives of Ishtar, in a rite designed to promote the fertility of crops, livestock, and human beings.

On the ethical side, the religion of Babylonia particularly, and to a lesser extent that of Assyria, advanced to notable conceptions of the qualities associated with the gods and goddesses and of the duties imposed on man. Shamash, the sun-god, was invested with justice as his chief trait. Marduk is portrayed as full of mercy and kindness. Ea is the protector of mankind. The gods, to be sure, were easily aroused to anger, and no sharp distinction was made—as in Israelite prophetic religion—between moral offenses and ritualistic oversight or neglect. However, stress was laid on the need to be clean and pure in the sight of the higher powers, on the inculcation of a proper attitude of humility, and above all on the need to confess one's guilt and sins without reserve.

Regarding life after death, throughout Babylonian-Assyrian history the conception prevailed of a large dark cavern below the earth, not far from the Apsu—the fresh-water abyss encircling and flowing underneath the earth—in which all the dead were gathered and where they led a miserable existence of inactivity, amid gloom and dust. Occasionally a favored individual was permitted to escape from this general fate and was placed on a pleasant island.

The influence exerted by the Babylonian-Assyrian religion was particularly profound on the Semites, while its astral theology affected the ancient world in general, including the Greeks and Romans. Scholars can readily trace such later pagan deities as Venus back to Ishtar and Jupiter back to Marduk. The Israelite and Jewish religion itself was strongly influenced by the remarkable civilization that unfolded in the Euphrates valley.
In many of the traditions embodied in the Old Testament, traces of direct borrowing from Babylonia may be discerned: for example, the story of Noah's flood (Epic of Gilgamesh) and the creation account of the early verses of Genesis (Enuma Elish). Indirect influences have been noticed in the domain of the prophetical books and the Psalms, and the Babylonian influence on so-called "Wisdom Literature" has also been much discussed. During the Babylonian Exile of the Jews, it was to Marduk that Cyrus the Great attributed his policy of allowing the Jewish and other captive priests to return to their capitals and refurbish the sacred temples of their formerly deposed deities.

Even in the New Testament period, Babylonian-Assyrian influences may be present. In such movements as early Christian Gnosticism, Babylonian elements, modified, to be sure, and transformed, are present. The growth of apocalyptic literature, both Jewish and Christian, seems to have been influenced, to some degree at least, by the astral theology of Babylonia and Assyria.

Notes

1. A. Leo Oppenheim, Ancient Mesopotamia: Portrait of a Dead Civilization (University of Chicago Press, 1974), p. 171.

References

- Beaulieu, Paul-Alain. The Pantheon of Uruk During the Neo-Babylonian Period. Leiden: Brill, 2003. ISBN 9789004130241.
- Gordon, Cyrus, and Gary Rendsburg. The Bible and the Ancient Near East, 3rd Edition. New York: W.W. Norton and Company, Inc., 1998. ISBN 978-093316896.
- Holloway, Steven W. "Aššur Is King! Aššur Is King! Religion in the Exercise of Power in the Neo-Assyrian Empire." In Culture and History of the Ancient Near East. Leiden: Brill, 2002. ISBN 9781417590926.
- Jacobsen, Thorkild. The Treasures of Darkness: A History of Mesopotamian Religion. New Haven: Yale University Press, 1976. ISBN 9780300018448.
- Linssen, Marc J.H. The Cults of Uruk and Babylon: The Temple Ritual Texts As Evidence for Hellenistic Cult Practices. Leiden: Brill, Styx, 2004. ISBN 9789004124028.
- Oppenheim, A. Leo, and Erica Reiner. Ancient Mesopotamia: Portrait of a Dead Civilization. Chicago: University of Chicago Press, 1977. ISBN 9780226631875.
- Rochberg, Francesca. The Heavenly Writing: Divination, Horoscopy, and Astronomy in Mesopotamian Culture. Cambridge: Cambridge University Press, 2004. ISBN 9780521830102.
This article explains how nutrient percentages are calculated and why they may seem inaccurate.

How Nutrients Are Calculated

Nutrient percentages are the percentage of the day's calories coming from each macronutrient. They are calculated using the following formulas:

Carbohydrate calories = Carbs (g) × 4.0
Protein calories = Protein (g) × 4.0
Fat calories = Fat (g) × 9.0

Each macronutrient's percentage is then its calories divided by the total calories from all three macronutrients.

Why Nutrients May Seem Inaccurate

Note that fat provides more calories per gram (9.0) than protein or carbohydrates (4.0 each). For this reason, the fat percentage may be the highest even if your fat grams are not the highest. Also, if the foods or calories you log are not consistent with this formula, the calculated macronutrient percentages will appear inaccurate.
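As a sketch of this calculation (illustrative only; the gram values below are made up), the 4/4/9 rule can be expressed in a few lines of Python:

```python
def macro_percentages(carbs_g, protein_g, fat_g):
    """Percentage of calories from each macronutrient, using 4/4/9 cal per gram."""
    carb_cal    = carbs_g   * 4.0
    protein_cal = protein_g * 4.0
    fat_cal     = fat_g     * 9.0
    total = carb_cal + protein_cal + fat_cal
    return {name: round(100 * cal / total, 1)
            for name, cal in [("carbs", carb_cal),
                              ("protein", protein_cal),
                              ("fat", fat_cal)]}

# 50 g of each: equal grams, but fat dominates the calorie percentages
print(macro_percentages(50, 50, 50))
# {'carbs': 23.5, 'protein': 23.5, 'fat': 52.9}
```

This also illustrates why equal gram amounts do not produce equal percentages: the fat calories are weighted by 9 rather than 4.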
For First Time Ever, Carbon Nanotube Transistors Have Outperformed Silicon

It has begun. For the first time, scientists have built a transistor out of carbon nanotubes that can run almost twice as fast as its silicon counterparts. This is big, because for decades scientists have been trying to figure out how to build the next generation of computers using carbon nanotube components, whose unique properties could form the basis of faster devices that consume far less power.

“Making carbon nanotube transistors that are better than silicon transistors is a big milestone,” said one of the team, Michael Arnold, from the University of Wisconsin-Madison. “This achievement has been a dream of nanotechnology for the last 20 years.”

First discovered back in 1991, carbon nanotubes are basically minuscule carbon straws whose walls measure just 1 atom thick. Imagine a tiny, cylindrical tube that’s approximately 50,000 times smaller than the width of a human hair, made from carbon atoms arranged in hexagonal arrays. That’s what a carbon nanotube wire would look like if you could see it at an atomic level.

Because of their size, carbon nanotubes can be packed by the millions onto wafers that can act just like a silicon transistor: the electrical switches that together form a computer’s central processing unit (CPU).

Despite being incredibly tiny, carbon nanotubes have some unique properties that make them an engineer’s dream. They’re more than 100 times stronger than steel, but only one-sixth as heavy. They’re stretchy and flexible like a thread of fabric, and can maintain their 1-atom-thick walls while growing up to hundreds of microns long.

“To put this into perspective,” says Washington-based carbon nanotube producer NanoScience Instruments, “if your hair had the same aspect ratio, a single strand would be over 40 metres long.”

And here’s the best part: just like that other 1-atom-thick wonder material, graphene, carbon nanotubes are one of the most conductive materials ever discovered. With ultra-strong bonds holding the carbon atoms together in a hexagonal pattern, carbon nanotubes exhibit a phenomenon known as electron delocalisation, which allows an electrical charge to move freely through them. The arrangement of the carbon atoms also allows heat to move steadily through the tube, which gives them around 15 times the thermal conductivity and 1,000 times the current capacity of copper, while maintaining a density that’s just half that of aluminium.

Because of all these amazing properties, these semiconducting powerhouses could be our answer to the rapidly declining potential of silicon-based computers. Right now, all of our computers run on silicon processors and memory chips, but we’ve about hit the limit for how fast these can go. If scientists can figure out how to replace silicon-based parts with carbon nanotube parts, in theory, we could bump speeds up by five times instantly.

But there’s a major problem with mass-producing carbon nanotubes: they’re incredibly difficult to isolate from all the small metallic impurities that creep in during the manufacturing process, and these bits and pieces can interrupt their semiconducting properties. But Arnold and his team have finally figured out how to get rid of almost all of these impurities.

“We’ve identified specific conditions in which you can get rid of nearly all metallic nanotubes, where we have less than 0.01 percent metallic nanotubes,” he says.
As Daniel Oberhaus explains for Motherboard, the technique works by controlling the self-assembling properties of carbon nanotubes in a polymer solution, which not only allows the researchers to clean out impurities, but also to control the proper spacing of nanotubes on a wafer.

“The end result are nanotubes with less than 0.01 percent metallic impurities, integrated on a transistor that was able to achieve a current that was 1.9 times higher than the most state-of-the-art silicon transistors in use today,” he says.

Simulations have suggested that in their purest form, carbon nanotube transistors should be able to perform five times faster or use five times less energy than silicon transistors, because their ultra-small dimensions allow them to switch a current signal very quickly as it travels across them. This could mean longer-lasting phone batteries, or much faster wireless communications or processing speeds, but scientists have to actually build a working computer filled with carbon nanotube transistors before we can know for sure.

Arnold’s team has already managed to scale their wafers up to 2.5 cm by 2.5 cm (1 inch by 1 inch), so they’re now figuring out how to make the process efficient enough for commercial production.

The research has been published in Science Advances.
Why is uranium fissionable and not, say, aluminum?

The short answer is "aluminum's not big enough." Here's why: the nuclear force saturates due to its short range, meaning that heavier nuclei have protons that don't attract each other via the nuclear force but still repel each other electrostatically. This is why heavier nuclei tend to have more neutrons in relation to the number of protons: the neutrons only attract. The binding energy per nucleon, a measure of how tightly a nucleus is bound, peaks at about 60 nucleons. (There's also a sharp peak at 4; the alpha particle, which is a He-4 nucleus, is very tightly bound.) So light nuclei require energy to split apart and would release energy only if you could fuse them together. You might expect that anything heavier than about 120 nucleons would fission, but these nuclei are still bound together, so the two parts you would get in fission aren't likely to fly apart. It's not until you get into the elements heavier than lead (all of which are radioactive) that you find nuclei whose binding energy per nucleon is low enough that the fission fragments can tunnel apart.

Tom Swanson, Ph.D., Physicist, US Naval Observatory

Check out this graph: [plot of binding energy per nucleon vs. mass number]. What it shows is that iron is essentially the most stable element. Everything lighter than iron can be fused and release some energy. Everything heavier than iron can be fissioned and release some energy. So the answer is that you theoretically could split aluminum into lighter elements, but it would cost you a huge amount of energy to do so. Uranium can be split with a net energy gain.

Jason Fahrion, B.S., Lab Technician, Portland, OR

If you were to weigh the protons and neutrons that make up an atomic nucleus individually, you would see that the nucleus weighs less than its individual components. For example, carbon-12 has a mass of exactly 12 amu, but it is made up of six protons (1.0073 amu each) and six neutrons (1.0087 amu each). This missing mass, whose energy equivalent is called the binding energy of the carbon-12 nucleus, is released when the nucleus is formed. You can calculate how much energy is released using Einstein's E = mc² to convert the missing mass (0.096 amu, where 1 amu = 1.66 × 10⁻²⁷ kg) to energy (89 MeV, or 89 million electron volts). So, in order to break up a carbon-12 nucleus into individual protons and neutrons, it takes an infusion of 89 million electron volts of energy. For comparison, breaking apart a hydrogen atom into a proton and an electron takes only 13.6 electron volts, so the nucleus is bound over a million times more strongly! A plot of binding energy versus atomic weight of the nucleus can be found at http://encarta.msn.com/media_461531006_761558960_-1_1/Nuclear_Binding_Energy.html. In the case of carbon-12, each individual nucleon (proton or neutron) is bound by 89 MeV / 12, or around 7.45 MeV (these figures are double-checked in the short sketch following this answer). As the mass number increases, the binding energy per nucleon at first also increases, which means that light elements can release additional energy through fusion. The maximum binding energy per nucleon occurs for iron-56, and for heavier elements the binding energy per nucleon decreases as the mass number gets larger. Therefore, a very heavy nucleus such as a uranium-238 nucleus can produce additional energy not through fusion, but rather through fission that divides it into two lighter and more tightly bound nuclei. So to answer your question, uranium can naturally undergo fission because fission produces energy, but aluminum is too light to undergo fission.
At the center of some stars, however, conditions are right for elements such as carbon, oxygen, silicon, etc. to undergo fusion and to continue to produce energy until iron is produced, at which point no more energy can be gained through either fusion or fission. Finally, I should mention that the binding energy curve is much steeper when increasing for light elements than when decreasing for heavy elements. As a result, while fission releases an immense amount of energy (as evidenced by power plants and atomic weaponry), fusion releases far more energy per unit mass, which is why researchers are actively working on methods to harness that energy for power production. Charles Steinhardt, B.A., Astronomy Grad Student, Harvard University. 'Our loyalties are to the species and the planet. We speak for Earth. Our obligation to survive is owed not just to ourselves but also to that Cosmos, ancient and vast, from which we spring.'
When we think of recycling, we think of it as a modern movement. But it might surprise you to learn that ancient humans recycled all the time, according to new research by Tel Aviv University at the Qesem Cave. Discovered during road construction on Israel's coastal plain, the Qesem Cave has been a Paleolithic archaeological site since 2000. Humans occupied the cave beginning about 400,000 years ago and left about 200,000 years ago. So, for the last 200,000 years, the cave has been a treasure trove of human history waiting to be rediscovered. "The rich Acheulo-Yabrudian deposits at Qesem Cave offer a rare opportunity to study human adaptation and evolution in the Pleistocene," lead excavator and professor Ryan Barkai said in 2003, three years after the cave's discovery. "Because the dates indicate that human activity occurred mostly before 382 kyr, and because the site is located within the 'out-of-Africa' corridor, the information obtained by a study of Qesem Cave is likely to contribute substantially to our understanding of the origins and dispersal of modern humans." Indeed, the site is also shedding more light on how humans lived. And the latest revelation is that early humans regularly engaged in recycling. As it turns out, these ancient humans made tools out of flint, a flaky stone that can be shaped into arrowheads and other items humans needed to survive. And when these tools broke, our ancestors did not just throw them away and start over with a new piece. Instead, they used the stone again for smaller tools. "Recycling was a way of life for these people," Barkai said, according to Science Daily. "It has long been a part of human evolution and culture. Now, for the first time, we are discovering the specific uses of the recycled 'tool kit' at Qesem Cave." "We used microscopic and chemical analyses to discover that these small and sharp recycled tools were specifically produced to process animal resources like meat, hide, fat and bones," team member Dr. Flavia Venditti explained. "We also found evidence of plant and tuber processing, which demonstrated that they were also part of the hominids' diet and subsistence strategies." "The meticulous analysis we conducted allowed us to demonstrate that the small recycled flakes were used in tandem with other types of utensils. They therefore constituted a larger, more diversified tool kit in which each tool was designed for specific objectives," she continued. So, humans living there relied on recycling and didn't throw perfectly good resources away, unlike humans today who waste resources on a daily basis. Recycling by ancient humans has been studied for years, but the research at the Qesem Cave confirms the practice. That means we have a lot to live up to: our ancestors knew how to do what we are reluctant to do today, even though the future of our planet is on the line. In addition, ancient humans used every part of the animals they hunted and engaged in food sharing to make sure everyone got a bite to eat. "The research also demonstrates that the Qesem inhabitants practiced various activities in different parts of the cave," Venditti says.
"The fireplace and the area surrounding it were eventually a central area of activity devoted to the consumption of the hunted animal and collected vegetal resources, while the so-called 'shelf area' was used to process animal and vegetal materials to obtain different by-products." "These hominins hunted cooperatively, and consumption of the highest quality parts of large prey was delayed until the food could be moved to the cave and processed with the aid of blade cutting tools and fire," Mary Stiner wrote in an article published by the National Academy of Sciences. "Delayed consumption of high-quality body parts implies that the meat was shared with other members of the group. Although not the earliest record of fire as technology in the Levant, Qesem Cave preserves contextual information about cooking and marrow extraction during the late Lower Paleolithic." Professors Barkai and Venditti also point out that early humans did not recycle flint because it was scarce; they did it because they wanted to, and it resulted in additional types of tools. "This research highlights two debated topics in the field of Paleolithic archaeology: the meaning of recycling and the functional role of small tools," Barkai said. "The data from the unique, well-preserved and investigated Qesem Cave serve to enrich the discussion of these phenomena in the scientific community." "Our data shows that lithic recycling at Qesem Cave was not occasional and not provoked by the scarcity of flint," Venditti added. "On the contrary, it was a conscious behavior which allowed early humans to quickly obtain tiny sharp tools to be used in tasks where precision and accuracy were essential." Let that be a lesson to all of us that by recycling, we are preserving our resources and helping ourselves at the same time.
More often than not, when you see a headline proclaiming the invention of an invisibility cloak, it's all just smoke and mirrors. A recent experiment reported in the journal Science isn't necessarily the magical cloaking device humanity has been dreaming about for generations, but it's much closer to reality than previous attempts. While still a laboratory proof of concept, this new experiment uses common, off-the-shelf materials and easier-to-reproduce conditions to create the most reliable cloaking effect yet seen. What is unique here is that the materials and conditions at work are not anything terribly special; most scientific facilities around the world could build the cloak without any expensive exotic materials. Additionally, many experiments that purport to demonstrate cloaking effects are done in stringently controlled laboratory conditions on a microscopic scale. Researchers in this case needed only a superconducting tube 12.5 mm long, low temperatures, and a stable magnetic field. In the experiment, the tube in question, its inside surface coated in superconducting tape and its outside made of a magnetic alloy, was cooled to 77 degrees above absolute zero; that's 77 kelvin for the scientifically inclined, or -196 C. When a magnetic field was applied to the tube, the magnetically active outer layer attracted the field, but at the same time the superconductive inner layer repelled it. The result is that the applied magnetic field remains entirely uniform outside the tube. That is, the tube and anything inside it are magnetically cloaked from an outside observer. What was accomplished in this experiment is not what we think of as "Star Trek" cloaking, because it was only manipulating a magnetic field. Visible light is a small part of the electromagnetic spectrum, but the principle could be essentially the same. A cloaking device would simply have to make the electromagnetic profile in front of you look the same as behind you. Do that, and you're invisible. Of course, that also means you wouldn't be able to see out of the cloak, but science can cross that bridge when we come to it. When you look at how far we still have to go, it might seem like we'll never see that seemingly magical device that makes things invisible with the flip of a switch. This experiment, though, is a real step forward in that respect. Previous efforts required researchers to maintain conditions much closer to absolute zero, for example. It is believed that the same principles at work in this experiment could, someday, be used to manipulate electromagnetic fields like visible light in the same way at room temperature. This scenario is still not entirely realistic or ready for any kind of general deployment. The magnetic field used was kept uniform in order for researchers to discern the effects, but there are definitely applications in laboratory conditions for this magnetic cloak. You're not going to be the invisible man any time soon, but this is still an interesting new method that could be improved on in the future thanks to some killer science. Read more at Science (paywalled)
This week learners can brainstorm game ideas and test them out with family and friends. Games can be prototyped with paper, clay, cardboard, maker equipment, and/or craft supplies. When I do this with classes, we often play or analyze games that we love prior to designing our own. This allows learners to incorporate aspects that work and exclude things they don't like. Some rankings from students have been on ease of setup, how long it plays, how long it takes to learn, balance of strategy/chance, fun factor, and uniqueness. Once learners are ready for their own game design, you can encourage them with the following prompts:
- Does your game have a theme or story? (Sometimes a theme or story can engage different sets of users.)
- Is your game competitive or collaborative? Do you want players to work together or separately?
- Is your game going to be more strategy or chance? How can you add elements of strategy and/or chance?
- What does the set-up look like for the game? Does it take a long time, or is it easy?
- How does a player take a turn? What is the algorithm for turns?
- What is the goal of the game/how do you win?
- How many players can play without making the game take too long?
- Is there a way to change how the game plays each time? How can you add variety to game play?
Items that learners may want to include in their game: pieces, board, box, instructions, dice, cards, tokens, etc.
Electrical connectors are electro-mechanical devices used to join electrical circuits at an interface. For this, engineers use a mechanical assembly. These connectors comprise jacks, which have a female end, and plugs, which have a male end. What makes electrical connectors so important is that they can be used for both temporary and permanent connections, including using an adaptor to bring dissimilar connectors together. When it comes to electrical connectors, there are literally hundreds of different types. Within the world of computing, electrical connectors, also referred to as physical interfaces, are commonly used. While cable connectors attach to devices by means of wires, other electrical connectors mate directly. Because there are so many different end-products, electrical connectors have unique roles. Following are some examples of the electrical connectors used most frequently.
- 8P8C: Using the acronym for "eight positions/eight conductors," these electrical connectors are modular, complete with eight positions that all contain conductors. Although there are many uses, these connectors are most recognized in CAT5 and Ethernet cables, where they are commonly (if informally) called RJ45 connectors. Although they resemble the smaller RJ11 connectors used for landline telephones, the socket into which the end of the connector fits is different.
- D-Subminiature: These electrical connectors are found on IBM-compatible computers and certain modem ports. Although primarily used for testing, computers, and telecommunications, D-subminiature electrical connectors come in variants, some with solid machined contacts, crimp and PCB mounts, thermocouple contact options, and so on.
- USB: Using the acronym for Universal Serial Bus, this connector is a standard for interfacing devices. Although commonly used in the manufacturing of Mac, Apple, and PCs, these electrical connectors come in different types and serve different purposes.
- Power: More commonly referred to as AC power plugs/sockets and DC connectors, there are several types of this sort of connector. For example, these also encompass industrial and multiphase power plugs/sockets, as well as NEMA connectors. A primary purpose of these electrical connectors is to prevent people from being shocked accidentally if they come into contact with energized conductors. Included with power connectors are safety ground connections and power conductors.
- Radio Frequency: Another common type of electrical connector is the radio frequency (RF) connector. Used at radio frequencies, it is essential that these connectors do not change the transmission line's impedance.
Changes in Gulf Stream could chill Europe. New data on global warming; thinning polar ice cap. By Marsha Walton. The orange represents warm ocean surface temperatures and the blue cool temperatures in the Gulf Stream. THE GULF STREAM: The Gulf Stream is a pattern of warm water extending from the Gulf of Mexico to the British Isles. It is responsible for the mild climate of Western Europe, which lies at a much higher latitude than most of New England but experiences much milder weather. Wind patterns over the ocean pull the warm water from the Gulf into the Northeast Atlantic. (CNN) -- One outcome of global warming could be a dramatic cooling of Britain and northern Europe. Scientists now have evidence that changes are occurring in the Gulf Stream, the warm and powerful ocean current that tempers the western European climate. Without the influence of the Gulf Stream and its two northern branches, the North Atlantic Drift and the Canary Current, the weather in Britain could be more like that of Siberia, which shares the same latitude. Cambridge University ocean physics professor Peter Wadhams points to changes in the waters of the Greenland Sea. Historically, large columns of very cold, dense water in the Greenland Sea, known as "chimneys," sink from the surface of the ocean about 9,000 feet to the seabed. As that water sinks, it interacts with the warm Gulf Stream current flowing from the south. But Wadhams says the number of these "chimneys" has dropped from about a dozen to just two. That is causing a weakening of the Gulf Stream, which could mean less heat reaching northern Europe. The activity in the Greenland Sea is part of a global pattern of ocean movement, known as thermohaline circulation, or more commonly the "global conveyor belt." Wadhams presented his findings at a meeting of the European Geosciences Union in Vienna, Austria, last month. When Wadhams began his studies of Arctic sea ice more than 30 years ago, there was not a focus on a warming of the region or the ice becoming thinner. His research aboard British Royal Navy submarines began as a way to use new tools, such as sonar, to study this harsh region of the planet. "Initially the idea was just to map what the ice thickness distribution was," Wadhams said. "You cannot measure it with satellites, and to drill through it is difficult," he said. But year after year, a dramatic pattern emerged. "We discovered the ice was getting rapidly thinner. It has thinned by 40 percent in the past 20 years," said Wadhams. Wadhams and other scientists say the slowing of the Gulf Stream could contribute to other severe effects on the planet, such as the complete melting of the Arctic ice cap in the summer months. That could eliminate habitat and lead to the extinction of Arctic wildlife, including the polar bear. Current predictions indicate that could happen as early as 2020 or as late as 2080. Scientists are getting other information about the disappearing ice cap from Alaskan Inuits. They report changes in where and when certain species of fish have been found, and in populations of seals and polar bears. Other oceanographers stress that Wadhams' findings are one piece of a very complex earthly puzzle. Terrence Joyce, senior scientist in the department of physical oceanography at Woods Hole Oceanographic Institution in Massachusetts, says it's important not to get alarmist, but instead to keep up a wide array of research. For a dramatic climate change to take place, "A whole bunch of pieces have to fit together.
Certainly this is one of them. We need to keep paying attention, and people are doing that," he said. Woods Hole is conducting research that measures the path and temperature of some parts of the Gulf Stream. Such a dramatic climate change would take place not in five days but over several years, said Joyce.
If tree-planting programmes work as advertised, they could buy precious time for the world to reduce its reliance on fossil fuels and replace them with cleaner sources of energy. One widely cited 2017 study estimated that forests and other ecosystems could provide more than one-third of the total CO2 reductions required to keep global warming below 2 °C through to 2030. Although the analysis relies on big assumptions, such as the availability of funding mechanisms and political will, its authors say that forests can be an important stopgap while the world tackles the main source of carbon emissions: the burning of fossil fuels. "This is a rope that nature is throwing us," says Peter Ellis, a forest-carbon scientist at The Nature Conservancy in Arlington, Virginia, and one of the paper's authors. On the other hand, forests can reduce Earth's surface albedo, meaning that the planet reflects less incoming sunlight back into space, leading to warming. This effect is especially pronounced at higher latitudes and in mountainous or dry regions, where slower-growing coniferous trees with dark leaves cover light-coloured ground or snow that would otherwise reflect sunlight. Most scientists agree, however, that tropical forests are clear climate coolers: trees there grow relatively fast and transpire massive amounts of water that forms clouds, two effects that help to cool the climate. (source: Nature.com) What can you do to reforest the planet? Plant trees anywhere you can - in your garden or on your balcony - use natural eco-friendly products, and contribute to environmental organisations that plant trees where needed.
We all know that germs can spread a cold or flu and that it is important to wash your hands to reduce your exposure. However, did you know that cavities, gum disease, and other infections are also spread through saliva? Saliva does provide quite a bit of protection. It flushes germs away and contains antibodies, antimicrobial proteins, enzymes, and normal mouth flora (good bacteria) that help protect you and decrease the risk of sharing germs. However, it is not able to eliminate every risk of contagion. Many illnesses are not spread through saliva, but quite a few are. Viruses that cause the flu, colds, mononucleosis, and hand, foot, and mouth disease are easily spread through saliva. Any virus in the herpes family is as well, and chickenpox and cold sores are both caused by herpes viruses. Some bacteria are also spread through saliva. One example is streptococcus. Although some strains of strep are beneficial, others can cause strep throat or gingivitis. Oral bacteria: good or bad? Everyone has bacterial colonies throughout their bodies. We have 10 times more bacterial cells in our bodies than human cells, and a majority of these are beneficial to us. However, when the balance is upset and harmful bacterial colonies thrive, it can negatively impact our health. An important example of this is in the mouth. As many as 1,000 different types of bacteria can thrive in the human mouth. Someone with excellent oral hygiene will have between 1,000 and 100,000 bacteria living on each tooth, while a mouth that is poorly cared for can contain 1 million bacteria on each tooth! So, it's not a matter of keeping bacteria out of your mouth but of helping the beneficial strains to thrive. What can happen when harmful bacteria thrive in your mouth? Certain bacterial strains produce acids that attack teeth. The bacteria, acids, food particles, and saliva combine to form plaque on your tooth enamel, which eventually causes tooth decay. As the plaque accumulates it becomes tartar, which is more difficult to remove and helps cavities to form more easily. Unlike cold sores, which are contagious, you cannot get canker sores from someone else. However, research shows that people who get recurring canker sores have a much higher concentration of two particular strains of bacteria in their oral microbiota than people who do not get canker sores (1). Bacteria also cause gingivitis, the early stage of gum disease, when gums become inflamed and bleed easily. If left untreated, gum disease can cause serious health problems when the oral bacteria enter the bloodstream, travel, and multiply elsewhere in the body. Infective endocarditis, stroke, and lung infections are a few of the complications that can develop in people who are at risk. This means that, although these are not infectious diseases, tooth decay, gum disease, and other possible complications can be a result of acquiring harmful bacteria from someone else. A good reason to keep your saliva away from your baby: a group of scientists in the Netherlands studied how the oral microbiota (microorganisms of the mouth) are impacted by intimate kissing (2). After a 10-second kiss, the average couple had transferred 80 million bacteria! They also found another interesting piece of information. People who kissed and were not a couple had very dissimilar oral microbiota, while couples had a much closer composition of oral bacteria.
Obviously, the frequency of kissing impacted the similarity of oral bacteria, but couples who rarely kissed also had similar microbiota on their tongues. This indicates that the long-term effect of living together impacts your oral bacterial balance. Have you ever heard someone say that their family is prone to cavities? This may be partially due to oral hygiene practices but, even with excellent oral hygiene, cavities may still form if your bacterial balance contains large amounts of harmful bacteria. Since merely living together produced similarities in the oral bacteria of couples, it makes sense that it will also impact your children. This idea is supported by scientific research. Infants are born with a minimal number of different strains of bacteria, and their unique balance of oral bacteria is formed in the first few years of life. By the age of three, many children have formed an oral microbiome that determines their susceptibility to cavity formation (3). The main sources of bacteria that colonize a baby's mouth are the mother's microbiota, breast milk, the skin of caretakers and siblings, and other foods. The balance is also impacted by exposure to other people's saliva, which babies are particularly susceptible to because their oral microbiome is still forming. This means that if a parent has gingivitis or cavities, and their saliva makes its way into the mouth of the baby or toddler, the child is more likely to develop colonies of harmful bacteria, leading to a higher chance of forming cavities and gingivitis. You can help your family gain a healthy oral microbiome by providing plenty of prebiotics, probiotics, and healthy foods. A diet high in fresh fruits and vegetables, dairy, and fermented foods will help the beneficial bacteria in your mouth to thrive. Practicing excellent daily oral hygiene is another important step. Bacterial plaque starts to form within a few hours of being removed, so brushing twice and flossing once every day is essential for a healthy mouth and teeth. How can you protect your family from sharing harmful germs in saliva? Protecting your family from germs involves more than regularly washing your hands and coughing or sneezing into a tissue. Since saliva transmits infectious diseases as well as harmful bacteria, it is important to take steps to minimize exposure to another person's saliva. Try not to share food or drinks. It is easy to lick dripping ice cream and then give it back to your toddler, hand food to a child after taking a bite, or share a water bottle on a family outing. Although it takes a bit more effort to remember, it is best to avoid these things even if everyone is healthy. It is also best to avoid kissing children on the mouth. These precautions are especially important with babies and toddlers, since their oral microbiome is still forming and more susceptible to harmful bacteria that cause cavities and gingivitis. What if you go out of town and forget someone's toothbrush? Isn't it important to brush your teeth even if it means you need to share with a family member? The American Dental Association says that you shouldn't share a toothbrush. Not only will someone else's toothbrush bring foreign bacteria into your mouth, but brushing teeth can cause microtrauma, allowing the harmful bacteria to enter your bloodstream. If at all possible, buy a toothbrush; if you can't, you can use your finger. As you can see, there are many reasons to keep your germs to yourself.
Although you can't protect your family from every harmful virus or bacterium, you can take these steps to keep them, and their oral bacterial balance, healthy and thriving. At Jungle Roots Children's Dentistry & Orthodontics, we strive to provide the highest comprehensive pediatric and orthodontic dental care in a unique, fun-filled environment staffed by a team of caring, energetic professionals. We believe the establishment of a "dental home" at an early age is the key to a lifetime of positive visits to the dentist.
1. Bankvall, M., Sjöberg, F., Gale, G., Wold, A., Jontell, M., & Östman, S. (2014). The oral microbiota of patients with recurrent aphthous stomatitis. Journal of Oral Microbiology, 6, 25739. doi:10.3402/jom.v6.25739
2. Kort, R., Caspers, M., van de Graaf, A., van Egmond, W., Keijser, B., & Roeselers, G. (2014). Shaping the oral microbiota through intimate kissing. Microbiome, 2, 41. doi:10.1186/2049-2618-2-41
3. Lif Holgerson, P., Öhman, C., Rönnlund, A., & Johansson, I. (2015). Maturation of oral microbiota in children with or without dental caries. PLoS ONE, 10(5), e0128534. doi:10.1371/journal.pone.0128534
Vienna/Clausthal (TU). The smaller the transistors, the faster they can operate. As a result, faster and faster processors can also be designed. The function of a transistor requires the presence of a thin insulating layer, the gate oxide. In only a few years, the thickness of this layer will be only one fifty-thousandth of that of a human hair. With continuing use of silicon dioxide as the gate oxide, however, further miniaturisation of transistors - and thus the manufacture of even faster chips - will no longer be possible in a few years. Scientists all over the world have been racking their brains for years over the problem of further miniaturising transistors. Although the solution sounds simple, its realisation is quite formidable: a new material must be found. If silicon dioxide - generally known as window glass - has a thickness of only a few atomic layers, it loses its insulating property. A kind of short circuit thus occurs in the transistor. The required material must therefore allow the application of a layer which is thicker, and thus acts as an insulator, but which otherwise behaves as though it were an ultra-thin layer of silicon dioxide. After all, the objective is to design and manufacture transistors which are even smaller and more efficient. Strontium titanate has hitherto proved to be the most promising candidate for the purpose. However, only the "recipe" was previously known, not the combined effects of the ingredients. This knowledge deficit was a barrier to continuing development towards the set objective. The team of researchers from Vienna and Clausthal has now succeeded for the first time in determining precisely these combined effects. By means of computer simulations, they can explain the process of forming the oxide layer and thus indicate how its electrical properties can be controlled. The scientific results achieved by Clemens J. Först and Karlheinz Schwarz - both at TU Vienna - as well as Christopher R. Ashman and Peter E. Blöchl at TU Clausthal have been published in the current issue of "Nature" (Nature 427, 53 (2004)). The article is entitled "The interface between silicon and a high-k oxide". "Computer simulations shed some light into atomic dimensions, where one would otherwise be almost blind," explains Prof. Blöchl from TU Clausthal. By means of computer simulations, the team of researchers has succeeded in explaining, atom for atom, how a new gate oxide - that is, strontium titanate - can be applied to a silicon wafer. "One can imagine the composite of silicon and strontium titanate as two Lego building blocks positioned one over the other," says Clemens Först from TU Vienna in explaining the essential result. The surfaces of solids exhibit a characteristic atomic and electronic pattern which is governed by the arrangement of the atoms. The charge pattern of the oxide layer, which is comparable with the plug-in pattern of a Lego building block, matches the pattern of the silicon surface saturated with strontium. For the researchers in Vienna and Clausthal, the conclusions concerning the electrical properties are also promising for the future. The oxide acts as a barrier to electrons and can thus be compared with a dam which holds back water. The higher the barrier, the better the insulating properties. For the first time, the scientists have demonstrated that the height of the barrier can be decisively increased by chemical processes at the interface. The properties of the gate oxide can thus be adapted to satisfy technological requirements.
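The design goal described above, a physically thicker layer that is electrically equivalent to an ultra-thin film of silicon dioxide, is commonly quantified as "equivalent oxide thickness" (EOT). The press release quotes no numbers, so the short sketch below uses illustrative values only: 3.9 for silicon dioxide is standard, while the permittivity of strontium titanate varies widely between bulk crystals and thin films.

```python
# A hedged illustration of why a high-k gate oxide helps: a physically
# thicker layer can be electrically equivalent to an ultra-thin SiO2 film.
K_SIO2 = 3.9  # relative permittivity of silicon dioxide (standard value)

def equivalent_oxide_thickness(t_highk_nm: float, k_highk: float) -> float:
    """Equivalent SiO2 thickness of a high-k layer: EOT = t * (k_SiO2 / k_highk)."""
    return t_highk_nm * K_SIO2 / k_highk

# A 10 nm strontium titanate layer (bulk-like k ~ 300, an assumed value)
# gates like ~0.13 nm of SiO2 while staying thick enough to limit leakage.
print(f"EOT: {equivalent_oxide_thickness(10.0, 300.0):.2f} nm")
```

With a bulk-like permittivity, a layer tens of atoms thick can behave, electrically, like a film far thinner than silicon dioxide could ever be made without short-circuiting, which is the trade-off the article describes.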
The research work has been performed within the scope of the International Research Consortium "Integration of very high-k dielectrics with silicon CMOS technology" (INVEST). The project is supported by the Information Society Technologies (IST) programme of the European Commission's Fifth Framework Programme. Address enquiries to: Mag. Clemens Först, Institut für Materialchemie, Technische Universität Wien, private: +43-650-9175878. Prof. Dr. Peter E. Blöchl, Institut für Theoretische Physik, Technische Universität Clausthal, Tel. +49-5323-72-2021, +49-5323-72-2555 (Sekretariat), private: +49-5321-398937.
Stonehenge, a Neolithic Beacon: Design Imitates Star. When lit from within with one large fire or multiple fires, and observed from above, the standing stones that comprise Stonehenge split the light rays, creating a radiating effect similar in appearance to a star. Neolithic people did not know what stars were and may have related to them as fires in the sky. Stonehenge may have been created by Neolithic people to call attention to their existence and position, with their own star-shaped fire, big enough to be seen by perceived celestial neighbors. Contemporary civilizations do the same with satellites. Additionally, the ringing of the stones may have also served to catch attention through sound. This explains the size of Stonehenge, gives purpose to the lintels, and accounts for the foreign remains at the site, as a beacon that large would indeed attract curious people from nearby lands. The fires would also provide opportunity to cremate remains and incorporate ritual. The lunar-based motifs present in their art support a preoccupation with the sky.
Indian cuisine: The cuisine of India is one of the world's most diverse cuisines, characterized by its sophisticated and subtle use of the many spices, vegetables, grains and fruits grown across India. India's religious beliefs and culture have played an influential role in the evolution of its cuisine. Vegetarianism is widely practiced in many Hindu, Buddhist and Jain communities. Each geographical region has contributed a wide assortment of dishes and cooking techniques. Religion and climate are two factors that have significantly impacted the development of cooking styles and food habits in India. History, religious and foreign influences, and the impact of climate: Extensive immigration and intermingling of cultures through many millennia have introduced many dietary and cultural influences. India's diverse climate, ranging from deep tropical to alpine, has made a broad range of ingredients readily available to its many schools of cookery. Over 80% of Indians follow the Hindu religion and its offshoots such as Jainism. Hinduism prescribes respect for life forms and has contributed to the prevalence of vegetarianism in India, particularly in the North. One impact of this on cuisine is that lentils and beans are the main sources of protein as opposed to fish and meat. Although cows are sacred to Hindus, milk is considered auspicious, and milk products such as curd, fresh cottage cheese (paneer) and sweets made of milk solids are part of the cuisine. Spices are generously used to provide variety in the vegetarian diet. Certain sects of Hinduism forbid the use of onions and garlic in food, and so substitute flavorings such as cumin seeds, ginger, and cashew paste have been incorporated into the cuisine. In many cases, food has become a marker of religious and social identity, with various taboos and preferences (for instance, a segment of the Jain population consumes no roots or subterranean vegetables) that have driven certain groups to innovate extensively with the food sources that are deemed acceptable. The longstanding vegetarianism within sections of India's Hindu, Buddhist and Jain communities has exerted a strong influence over Indian cuisine. Many recipes first emerged during the initial Vedic period, when India was still heavily forested and agriculture was complemented with game hunting and products from the forest. Later invasions from Central Asia, Arabia, Persia, and the Mughal Empire had a fundamental effect on Indian cooking. The main difference from traditional Hindu cuisine was the use of meat and fish. West and Central Asian cooking techniques and ingredients (such as the use of dates and nuts in rice dishes, and the grilling of meat into 'kebabs') were adopted. Muslim rulers were great gourmets, famous for their lavish courts and elaborate meal rituals, and many of the dishes they patronized are today part of the Indian gourmet heritage. The Islamic conquest of medieval India also introduced such fruits as apricots, melons, peaches and plums, and rich gravies, pilafs and non-vegetarian fare such as kebabs, giving rise to Mughlai cuisine (Mughal in origin). The Christian tradition in India is as old as Christianity itself, beginning with St. Thomas's arrival in India. Later, the Portuguese and British accelerated the growth of Christianity. Christians also ate meat and fish, but developed their own cooking techniques. In Kerala, where Christianity took root over time and in tandem with local culture, food incorporates many local ingredients and cooking techniques and has few European influences.
In Goa and Calcutta, where Christianity came with the Portuguese and British and conversion happened more rapidly, food reflects European customs and traditions (for example, rum-flavored cake is a traditional favorite at Christmas in Calcutta). One key difference in cuisine linked to climate is the type of staple cereal consumed. Wheat dominates in the North Indian diet, whilst rice is the key cereal in South India. North India is famous for its many varieties of wheat breads. 'Rotis', 'naans', 'paranthas', and 'pooris' are but a few of the many varieties available, distinguished by the type of wheat flour (whole or refined), method of cooking (fried, cooked on a griddle, or baked in a clay oven), shape and size (single-layered, multiple-layered, large or small) and whether plain or stuffed with vegetables. South India has innovated in rice preparations. The staples of Indian cuisine are rice, whole wheat flour and a variety of pulses, the most important of which are red lentil, bengal gram, pigeon pea or yellow gram, black gram and green gram. Pulses may be used whole, dehusked (for example dhuli moong or dhuli urad), or split. Pulses are used extensively in the form of dal (split). Some pulses, like chana and mung, are also processed into flour. Most Indian curries are fried in vegetable oil. In North and West India, groundnut oil has traditionally been most popular for frying, while in Eastern India, mustard oil is more commonly used. In South India, coconut oil and sesame oil are common. In recent decades, sunflower oil and soybean oil have gained popularity all over India. Hydrogenated vegetable oil, known as vanaspati ghee, is also a popular cooking medium that replaces Indian ghee (clarified butter). The most important and most frequently used spices in Indian cuisine are chilli pepper, black mustard seed, cumin, turmeric, fenugreek, asafetida, ginger and garlic. A popular spice mix is garam masala, usually a powder of five or more dried spices, commonly comprising cardamom, cinnamon and clove. Every region has its own blend of garam masala. Some leaves, like cassia leaf, coriander leaf, fenugreek leaf and mint leaf, are commonly used. The use of curry leaves is typical of all South Indian cuisine. In sweet dishes, cardamom, nutmeg, saffron, and rose petal essence are used. Watch for these recipes soon!
- Kasundi Chicken
- Vegetable Jhalfraizee
- Andhra Brinjal Pachadi
- Dates and Rice Kheer
- Dahi Kebab
- Jain Noodle Cutlet
- Mangodi Cabbage
- Peethiwali Puri with Aloo Chana
- Pumpkin and Pineapple Crumble
- Tamatar Khatta
Researchers have discovered a new kind of planet, 560 light years from Earth, nicknamed "Godzilla" due to its massive size. It's called Kepler-10c: the planet is 17 times more massive than Earth and has been named a 'mega-Earth'. This is a notable discovery because astronomers previously didn't believe such a world could exist; they believed anything so large would grab hydrogen gas as it grew and become a gaseous 'mini-Neptune'. According to The Daily Mail, "But to their surprise, Kepler-10c has been able to remain solid despite being more than twice as old as the Earth. The discovery suggests that potentially life-bearing rocky planets may be far more common than first thought, and some could be extremely ancient. The Kepler-10 star system is thought to be 11 billion years old, meaning it formed less than three billion years after the birth of the universe. Earth, by comparison, is only around 4.5 billion years old."
We love this 5th grade question on decimals: If 8 apples cost $4.16 and 13 lemons cost as much as 5 apples, what is the cost of 8 lemons? First, here's our suggested solution. Why do we like this question? It requires students to break the problem into 3 or 4 separate steps, but more importantly, it provides a chance to use two very important skills in problem solving. In the first step of the problem, we need to find out how much one apple costs so that we can find out how much 5 would cost. By splitting up 4.16 into 4.00 + 0.16, dividing each part by 8 separately, and then adding the results back to get the final answer, students rely on the distributive property of division over addition. This greatly simplifies the calculation involved and builds up their mental math. In the second part of the problem, we need to find out how much 1 lemon costs in order to figure out how much 8 of them would cost. Doing this decimal division by the standard algorithm is rather tedious and prone to errors, but if we recognize that two 13's go into 26, i.e. 26 divided by 13 is 2, then, relying on our number sense, we can get the answer 0.2 without working out the division explicitly. Hence, a question does not have to be wordy or complex. A short and simple question like this can fully exercise the three aspects of rigor: conceptual understanding, procedural fluency and application.
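For readers who want to check the arithmetic, here is a small Python sketch that mirrors the mental-math steps described above; the variable names are ours, chosen for illustration.

```python
# A sketch of the worked solution, mirroring the mental-math steps.
from decimal import Decimal

# Step 1: one apple, splitting $4.16 as $4.00 + $0.16 (distributive property).
apple = Decimal("4.00") / 8 + Decimal("0.16") / 8   # 0.50 + 0.02 = 0.52

# Step 2: five apples cost the same as thirteen lemons.
five_apples = 5 * apple                              # 2.60

# Step 3: one lemon (since 26 / 13 = 2, 2.60 / 13 = 0.20).
lemon = five_apples / 13                             # 0.20

# Step 4: eight lemons, shown to the cent.
print((8 * lemon).quantize(Decimal("0.01")))         # 1.60
```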
Princeton University researchers discovered that the bacterium Vibrio cholerae keeps food generated by the community's productive members away from those of their kind that attempt to live on others' leftovers. The bacteria use two mechanisms that are likely widespread among microbes. In some instances, the natural flow of fluids over the surface of bacterial communities can wash away excess food before the freeloaders can indulge. In microscope images, shiftless V. cholerae (red) were abundant under conditions of no fluid flow (left image). When the bacteria were grown in an environment with fluid flow - similar to that found in nature - cooperative V. cholerae (yellow) won out (right image).
What happens in a lesson? In an Alexander Technique lesson, your teacher instructs you, with verbal and manual guidance, to approach movement differently. You will learn to recognize habit patterns that may be interfering with ease and flexibility, and you'll learn how to discontinue them. No special clothing is needed; normal street attire is appropriate. There are two parts to a lesson:
To more easily experience the body's muscles in a neutral state, part of the lesson takes place lying down (fully clothed) on a lightly padded table, on your back with your knees bent. Your teacher will teach you how to recognize and release any unnecessary tension you may be holding, promoting an enlivened sensory awareness and quieting the nervous system. You are an active participant: your eyes are open and conversation takes place.
Guidance during activity: Using simple activities such as sitting, standing, walking, speaking and reaching, your teacher gives you verbal, visual and physical cues to help you perform those activities with greater ease and efficiency. Guiding you in movement, your teacher will elicit your body's capacity for dynamic expansion, and you will learn how to maintain that ease and freedom on your own. What you learn applies to all activities in your life, but you are welcome to work with your teacher on particular activities of interest such as lifting and carrying, computer work, public speaking, your favorite sport or even sleeping position. Actors may choose to work on a monologue, singers an aria, violinists a challenging passage, dancers a movement. In any activity you bring to a lesson - swinging a tennis racket, lifting a child or sitting in front of a computer - you learn to apply the principles of the Alexander Technique to reduce compression and increase overall ease and proficiency.
Number 3,817.61 written in: 'lowercase', 'UPPERCASE', 'Title Case', 'Sentence case', 'Start Case', 'camelCase', 'hyphen-case' and 'snake_case'. Notes on Letter Cases used to write out in words the number above: 1. Lowercase: only lowercase letters are used. Example: 'seventy-six and two tenths'. 2. Uppercase: only uppercase letters are used. Example: 'SEVENTY-SIX AND TWO TENTHS'. 3. Title Case: the first letter of each word is capitalized, except for certain short words, such as articles, conjunctions and short prepositions: 'a', 'an', 'the', 'and', 'but', 'for', 'at', 'by', 'to', 'or', 'in', etc. Example: 'Seventy-Six and Two Tenths'. 4. Sentence case: only the first letter of the first word is capitalized. Example: 'Seventy-six and two tenths'. 5. Start Case: the first letter of each word is capitalized without exception. Example: 'Seventy-Six And Two Tenths'. 6. Camel Case: the text has no spaces or punctuation and the first letter of each word is capitalized, except for the very first letter in the series. Example: 'seventySixAndTwoTenths'. Pascal Case: see Camel Case above, but the first letter is also capitalized. Example: 'SeventySixAndTwoTenths'. 7. Hyphen Case: the text has no spaces or punctuation and the words are delimited by hyphens. Example: 'seventy-six-and-two-tenths'. Hyphen Case can be lowercase or uppercase. 8. Snake Case: the text has no spaces or punctuation and the words are delimited by underscores. Example: 'seventy_six_and_two_tenths'. Snake Case can be lowercase or uppercase. Notes on Writing Out Numbers: 1. It's correct to hyphenate all compound numbers from twenty-one (21) through ninety-nine (99). The hyphen is the minus sign, as in 'thirty-four' (34). 2. In American English, unlike British English, when writing out natural numbers of three or more digits, the word 'and' is not used after 'hundred' or 'thousand': so it is 'one thousand two hundred thirty-four' and not 'one thousand two hundred and thirty-four'. 3. Do not use commas when writing out in words numbers above 999: so it is 'one thousand two hundred thirty-four' and not 'one thousand, two hundred thirty-four'. 4. Use commas when writing in digits numbers above 999: 1,234; 43,290; etc. How to write out numbers in words in (US) American English. 1. How to convert natural numbers (positive integers) to (US) American English words, how to write them out (spell them out)? 1.1. To know how to write a number in words we must know the place value of each digit. For example, the number 12,345 has a 1 in the ten thousands place, a 2 in the thousands place, a 3 in the hundreds place, a 4 in the tens place and a 5 in the ones place. 12,345 in words = one ten thousand (10,000) + two thousands (2,000) + three hundreds (300) + four tens (40) + five ones (5) = ten thousand + two thousand + three hundred + forty + five = (ten + two) thousand + three hundred + forty-five = twelve thousand + three hundred + forty-five = twelve thousand three hundred forty-five. 1. Note the hyphen (or the minus sign) in "forty-five" above. Technically, it's correct to hyphenate all compound numbers from twenty-one (21) through ninety-nine (99). 2. In American English, when writing out natural numbers of three or more digits, the word "and" is not used after "hundred" or "thousand". So it is "one hundred twenty-three" and not "one hundred and twenty-three", though you may hear a lot of people using the latter, informally.
In British English, the word "and" is used after "hundred" or "thousand" in numbers of three or more digits. 3. Do not use commas when writing out numbers above 999 in words: so it is "one thousand two hundred thirty-four" and not "one thousand, two hundred thirty-four". 4. For clarity, use commas when writing figures of four or more digits: 1,234; 43,290,120; etc. In other countries a point is used to group digits by 3 and a comma to separate the decimals, e.g. 1.234,55 or 43.290.120,84. In some other countries a space is used to group digits by 3, e.g. 1 234 or 43 290 120. Spell out all numbers beginning a sentence: "Forty years ago today,...", not "40 years ago today,...". The Chicago Manual of Style calls for the numbers zero through one hundred to be written out - this would include forms like "one hundred million". Using words to write short numbers makes your writing look clean and classy. In handwriting, words are easy to read and hard to mistake for each other. Writing longer numbers as words isn't as useful, but it's good practice while you're learning. Otherwise, clarity should matter; for example, when two numbers are used in a row, always spell one out: "They needed five 2-foot copper pipes to finish the job. There were 15 six-foot-tall men on the basketball team roster." Be consistent within a sentence or phrase: do not write "... one million people ..." and "... 1,000,000 cars ..."; stick to one or the other, not both.
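For the programmer-oriented cases in the list above (camelCase, PascalCase, hyphen-case, snake_case and friends), the conversions are mechanical. The following Python sketch derives them from the spelled-out words; the helper function is our own illustration, not a library routine, and Title Case is omitted because it needs a list of short stop words.

```python
# Deriving the letter cases listed above from one spelled-out phrase.
words = "seventy-six and two tenths"
tokens = words.replace("-", " ").split()  # ['seventy', 'six', 'and', 'two', 'tenths']

def cap_hyphenated(word: str) -> str:
    # Capitalize each hyphen-separated part: 'seventy-six' -> 'Seventy-Six'
    return "-".join(p.capitalize() for p in word.split("-"))

lowercase = words.lower()                                            # seventy-six and two tenths
uppercase = words.upper()                                            # SEVENTY-SIX AND TWO TENTHS
sentence  = words.capitalize()                                       # Seventy-six and two tenths
start     = " ".join(cap_hyphenated(w) for w in words.split())       # Seventy-Six And Two Tenths
camel     = tokens[0] + "".join(t.capitalize() for t in tokens[1:])  # seventySixAndTwoTenths
pascal    = "".join(t.capitalize() for t in tokens)                  # SeventySixAndTwoTenths
hyphen    = "-".join(tokens)                                         # seventy-six-and-two-tenths
snake     = "_".join(tokens)                                         # seventy_six_and_two_tenths

print(camel, pascal, snake)
```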
A meridian is a measurement of longitude that extends from the north pole to the south pole and locates geographic points via their east/west position relative to Greenwich, England (the prime meridian). A parallel, by contrast, is a measure of latitude that locates geographic points based upon the degree to which they are north or south of the Equator, or middle line. Work Step by Step: Longitude and latitude are imaginary lines on the Earth's angular surface that are used to locate points. Longitude lines are zeroed at the Royal Observatory in Greenwich, England and extend east and west 180 degrees. Lines of latitude are zeroed at the Equator, which is the imaginary middle, or waist, of the earth. Longitude lines are called meridians, whereas lines of latitude are called parallels, as they do not intersect one another.
Understanding Tsunami Sources from Surveyed Tsunami Heights and Sediment Deposits. Tsunami deposits contain a record of past tsunamis. As tsunami deposits are formed by tsunami flows carrying both offshore and near-shore sediments, the characteristics of tsunami deposits are indicative of the characteristics of tsunami flows. Two research tasks are conducted here by EOS: understanding tsunami sources and near-shore tsunami flows from sediment deposits in coastal caves, and understanding tsunami sources using the spatial distribution of tsunami wave energy derived from post-tsunami surveys. Numerical models are first verified against carefully designed laboratory tests and then applied to full-scale problems, including the newly discovered tsunami deposits inside a coastal cave in Aceh province. Co-Investigators: Huang Zhenhua (University of Hawaii at Manoa, USA), Edmund Lo (School of Civil and Environmental Engineering, NTU), Tso-Ren Wu (National Central University, Taiwan).
- The 2010 Mw 7.8 Mentawai earthquake: Very shallow source of a rare tsunami earthquake determined from tsunami field survey and near-field GPS data. Journal of Geophysical Research. (2012).
- Modeling the change of beach profile under tsunami waves: a comparison of selected sediment transport models. Journal of Earthquake and Tsunami. (2012).
- Tsunami-induced coastal change: scenario studies for Painan, West Sumatra, Indonesia. Earth, Planets and Space. 64, 799-816. (2012).
Accidents are sometimes caused by our own careless behavior. Whenever a person drives, basic precautions should come first. Among drivers and front-seat passengers, seat belts reduce the risk of death by 45% and cut the risk of serious injury by 50%. There are several ways in which a car's safety belts help prevent injury; a rough illustrative calculation follows this list.
- A seat belt helps prevent injury in a car crash by decreasing the velocity of the body gradually as it undergoes a sudden reduction in speed.
- A seat belt spreads the stopping force required to decelerate the rider across the body. This stops the body from hitting the steering column or windshield of a fast-moving car, which could easily result in injury or even death.
- The belt is designed to apply most of the stopping force to the rib cage and pelvis, both of which are relatively robust.
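The first point is really the impulse-momentum theorem at work: the belt (together with the car's crumple zone) lengthens the time over which the body's momentum changes, so the average force drops. Here is a rough Python sketch; all of the numbers are assumptions chosen for the example, not crash-test data.

```python
# Why a longer stopping time means a smaller force on the body.
def average_stopping_force(mass_kg: float, speed_ms: float, stop_time_s: float) -> float:
    """Average force from the impulse-momentum theorem: F = m * dv / dt."""
    return mass_kg * speed_ms / stop_time_s

mass = 75.0   # assumed occupant mass, kg
speed = 13.9  # ~50 km/h, in m/s

# Unbelted: the body stops almost instantly against the interior (~5 ms).
# Belted: the belt and crumple zone spread the stop over ~100 ms.
print(f"Unbelted: {average_stopping_force(mass, speed, 0.005):,.0f} N")  # ~208,500 N
print(f"Belted:   {average_stopping_force(mass, speed, 0.100):,.0f} N")  # ~10,425 N
```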
Solving Two Linear Equations. Solving by Substitution. To solve a system of linear equations without graphing, you can use the substitution method. This method works by solving one of the linear equations for one of the variables, then substituting this value for the same variable in the other linear equation and solving for the other variable. It does not matter which equation you choose first, or which variable you solve for first. Let's use substitution to solve this system of two linear equations: First, choose one of the equations and solve it for either x or y. We can start with the second equation and solve for y, since y doesn't have a coefficient: Now we have a value, 24 – 4x, for y. We substitute this value of y into the first equation to obtain: We now have one equation and one variable. We can solve for x. If x happened to be all we needed, we'd be done. But if we also need y, we'll plug the value of x back into either original equation. Say we choose the second equation: The solution to this system of linear equations is x = 5, y = 4. This can also be written as (x, y) = (5, 4). Solving by Combination. The basic principle of solving by combination is to manipulate two equations so that, when the equations are added together, one of the variables cancels out. Since one of the variables cancels, this method is sometimes called the elimination method. Let's use combination to solve this system of two equations: This system of equations is suited for combination, because there is already a 2x in both equations. Therefore, if we subtract equation (1) from equation (2) – or, equivalently, multiply equation (1) by -1 and add the two equations – we have a single equation in y: Dividing both sides, we find that y = -4/3. We can then plug y back into either original equation to get the value of x, as we did when solving by substitution. We can still solve by combination even if the variables aren't lined up so nicely. For example, we can start over and solve the system of equations by making the y's cancel, rather than the x's. To do that, we can multiply the first equation (1) by the number 2 on both sides: Now subtracting (2) from that result gives us: Solving, we find x = 7. To finish the job, we substitute x = 7 into either of the original equations. If we plug x = 7 into (1), we get: Subtracting 14 from both sides, we get And dividing by 3, we find that So the solution to the two equations (1) and (2) is: Most people prefer the substitution method to the combination method. However, the combination method will prove much faster on certain questions, so if you don't consider using it, you are likely to lose time or a correct answer on the Quant section. Furthermore, you want to be comfortable with the concept that equations can be added: a given equation is, after all, equal on both sides, and that fact can be useful even when you are not solving a system of linear equations by the combination method.
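As a complement to the two methods above, here is a self-contained Python sketch of the elimination idea on a hypothetical 2x2 system. The second equation is chosen to be consistent with the y = 24 – 4x relation used in the substitution example; the first equation is invented for illustration.

```python
# Solving a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Eliminate one variable by cross-multiplying the equations, then back-substitute."""
    det = a1 * b2 - a2 * b1          # zero when the two lines are parallel
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det    # elimination done in one step (Cramer's rule)
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Hypothetical system: 2x + 3y = 22 and 4x + y = 24, which gives (5, 4).
print(solve_2x2(2, 3, 22, 4, 1, 24))  # (5.0, 4.0)
```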
Jane Greaves from Cardiff University in the U.K. and several co-authors, including some from MIT, just announced in an exciting new paper the discovery of phosphine gas in the clouds of Venus. This gas, which on Earth is almost exclusively associated with life, has been detected in the clouds surrounding Venus. On Earth, bacteria produce the flammable, foul-smelling, and toxic gas phosphine after taking up phosphate from minerals or biological material and adding hydrogen. Any organisms on Venus would look very different from those on our planet, but could nonetheless be the source of the phosphine in the atmosphere. Phosphine is a chemical compound made up of one atom of phosphorus and three atoms of hydrogen. It is quite prevalent in the atmospheres of the gas giants Jupiter and Saturn, both of which are rich in hydrogen. On Earth, where the atmosphere leans more toward oxygen compounds, it's much shorter-lived, and the same ought to be true on Venus. "This was an experiment made out of pure curiosity, really – taking advantage of JCMT's powerful technology, and thinking about future instruments," said Professor Jane Greaves of Cardiff University. "I thought we'd just be able to rule out extreme scenarios, like the clouds being stuffed full of organisms. When we got the first hints of phosphine in Venus' spectrum, it was a shock!" The researchers used the James Clerk Maxwell Telescope in Hawaii in 2017 and the Atacama Large Millimeter/submillimeter Array in 2019 to study Venus. Their data revealed a spectral signature unique to phosphine, indicating traces of the gas in the planet's atmosphere. The scientists estimated that 20 parts per billion of the gas is present in Venus' clouds. While the researchers are confident that it is phosphine they have detected in Venus's clouds, they acknowledge that confirming the presence of life needs a lot more work. The high clouds of Venus are mostly made up of sulfuric acid, and it is unclear how microbes could survive this extreme level of acidity. The research team considered surface sources like volcanoes, lightning, delivery via micrometeorites, and chemical processes occurring in the clouds as potential causes, but they haven't been able to determine how the phosphine was produced. This isn't the first hint of phosphorus chemistry at Venus: the Soviet Union's Vega probes detected a phosphorus-containing chemical in the local clouds in the 1980s, although their instruments weren't sophisticated enough to make a precise identification. Besides phosphine, other gases potentially associated with life have been detected previously; in the analysis of mission data from the Venera, Pioneer Venus, and Magellan missions, it was discovered that carbonyl sulfide, hydrogen sulfide, and sulfur dioxide were present together in the upper atmosphere. Venera also detected large amounts of toxic chlorine just below the Venusian cloud cover. Carbonyl sulfide is difficult to produce inorganically, but it can be produced by volcanism. Venus is an unusual planet that scientists are still trying to understand. It's our closest planetary neighbor, but it spins backward compared to most other planets. The planet's thick atmosphere helps to trap heat, and its surface is hot enough to melt lead. Above the hot surface, which approaches 500 degrees Celsius, the upper cloud deck 53 to 60 kilometers above the planet's surface is much more temperate. But Venus' clouds are very acidic, which should quickly destroy phosphine.
It's plausible that if liquid water once existed on the surface of Venus before the runaway greenhouse effect heated the planet, microbial life could have formed there. Assuming the process that delivered water to Earth was common to all the planets near the habitable zone, it has been estimated that liquid water could have existed on Venus' surface for up to 600 million years during and shortly after the Late Heavy Bombardment, which could be enough time for simple life to form. Recent studies from September 2019 concluded that Venus may have had surface water and habitable conditions for around 3 billion years, and may have remained in this condition until 700 to 750 million years ago. If correct, this would have given life ample time to evolve.

The research team will continue its search for the origin of the phosphine gas detected on Venus, as well as look for other unexpected gases in its atmosphere. Future observations, together with modeling of why the gas is present in the atmosphere, could reveal the source of the phosphine.

Life on Venus? The discovery of phosphine, a byproduct of anaerobic biology, is the most significant development yet in building the case for life off Earth. About 10 years ago NASA discovered microbial life at 120,000ft in Earth's upper atmosphere. It's time to prioritize Venus. https://t.co/hm8TOEQ9es — Jim Bridenstine (@JimBridenstine) September 14, 2020

A potential future mission to the planet could sample the clouds and surface, which may also shed light on the source, and Jim Bridenstine, the Administrator of the National Aeronautics and Space Administration (NASA), has now called for the immediate prioritization of studying Venus.

Greaves, J.S., Richards, A.M.S., Bains, W. et al. Phosphine gas in the cloud decks of Venus. Nat Astron (2020). https://doi.org/10.1038/s41550-020-1174-4
Military footwear can be traced back thousands of years, even as far back as the Roman Empire, and just like humans, the combat boot has evolved through generations of change and adaptation. Arguably one of the most important pieces of equipment or gear anyone in a combat situation may possess, the combat boot has come a long way from its humble beginnings.

Several important military traditions were born during the historic break from England in the 1770s. The U.S. was still young, and its military was tiny compared to England's oppressive command. Smaller militias lent aid to the cause from all across the original colonies, most of which had their own distinct colours and apparel, foreshadowing the different military divisions we know today. The typical dress was a hunting shirt, breeches, leggings, wool jacket, hat, and whatever footwear was available. Since raw materials were expensive, and taxes high, many soldiers, and even civilians, were forced to improvise with their footwear. In the colder colonies, where shoes were necessary to fight against frostbite and hypothermia, ground troops used whatever materials they had on hand. Scraps of cloth or raw animal hide were popular choices, but on occasion blankets tied to the feet would prove better than going barefoot into battle.

Cavalry, ranking officers, and those that could afford them typically wore Hessian boots. Hessian boots originated in Germany, and were knee high with a short heel, tailored for riding on horseback. The boots typically had tassels on the front, and were later cut lower in the back to help with manoeuvrability while still offering protection for the knee. The boots were styled for a close fit and worn with knee-high breeches. Due to the tightness of this boot, a boot hook was often necessary to properly put the boots on, which proved a lengthy process.

Standardised boots were hard to come by during the 19th century, and much of the military still wore whatever shoes they were able to afford. Infantry units wore calf-high riding boots in a style similar to the Hessian boot. Trooper boots that went up past the thigh offered the most protection, but were expensive and impractical for ground units on long marches. The beginning of government-issued boots came about in the War of 1812. The War Department ordered as many pairs of ankle-high boots as were available at the time, and outfitted the soldiers that would need them the most. The boots were typically sewn on straight lasts, a type of shoe mold that made each shoe completely symmetrical. Until they were properly broken in, the boots proved uncomfortable, often leaving blisters. Sometimes called Brogan boots, they were usually made of calfskin or patent leather.

One of the first revolutions in military footwear came in 1837, when a 'pegging' machine was invented; it made for the faster production of cheap boots and booties. The pegs, usually small pieces of wood or metal, were used to hold the shape of the boot, but deteriorated much faster than the hand-sewn method. By the time the American Civil War came, the government reverted to the original design of hand-sewn boots. The price of pegged boots decreased to just over $1.25, while hand-sewn cavalry boots were often purchased at three times that price. The idea of soles became more popular during this time, and most were hand sewn. The Hessian boot was replaced by a Wellington-style M1851 Artillery Driver's boot, which was issued to cavalry and artillery drivers.
The heel was slightly shorter than the Hessian boot's, and the toe was more squared. In an effort to improve durability, brass tacks were inserted in the sole. Union soldiers had access to better quality materials, while their Confederate counterparts suffered with boots of sub-par quality. The soldiers fighting for the North were first issued hand-sewn boots, and pegged boots only as a last resort. Most boots worn by the Confederate Army were pegged, nailed, or riveted, and fashioned in a style similar to that of the British military at the time. Some of the greedier manufacturers used poor materials in an effort to take advantage of the civil turmoil. Rumours circulated of cardboard being used, and some manufacturers even sharpened the pegs or brass tacks in the soles to make the boots wear out more quickly.

With the evolution of explosives and artillery like grenades and machine guns, trench-style warfare became more common during the early and mid-1900s. Given the wet, cold, and unsanitary nature of the trenches, military gear and equipment, boots in particular, had to hold up against extreme conditions. The modern combat boot we know today began to take shape in WWI. Most boots made in the early 1900s had a distinct left and right, as opposed to previous versions with each shoe being virtually interchangeable. In the early years of WWI, the Russet Marching Shoe was the most widely accepted boot worn in the military. It was highly polishable and made of machine-sewn calfskin. The inner lining was made from feathers. While this boot proved far more advanced than previous issue boots, it did not hold up well on French terrain. A later version, modelled with specifications from France and Belgium, was made from vegetable-retanned cowhide, and featured both a full and half sole. Rows of hobnails and iron plates were affixed to the heel of every boot. The heel and sole were attached with screws, nails, and stitching, and despite their superior construction, still did not hold up against the rough conditions.

In 1917 the Trench Boot was born, offering vast improvements over the Russet Marching Shoe. While it offered better protection against the wet conditions, it was not waterproof, which led to conditions like trench foot. The look and styling were similar to the marching shoe, but the insole was composed of new materials like canvas, cork, and cement. Due to the rigid nature of the soles, the boots were highly uncomfortable until broken in, and the natural movement of the foot caused excessive damage. The Trench Boot offered little in the way of insulation, and many soldiers complained of cold feet. It became common practice to wear multiple pairs of socks, and to order boots a few sizes above what one would normally wear. Several different variations were produced in an attempt to fix the early issues of waterproofing. A year later, the 1918 Trench Boot, or "Pershing Boot", was released, offering improvements over earlier versions. Better quality materials, such as heavier leather and stronger canvas, were used in an attempt to improve longevity. The boot's soles were attached in a similar fashion with screws and nails, but held three soles in total, as opposed to the previous issue's one and a half. The metals used in hobnailing conducted the cold, and the thicker sole helped counter that problem. Iron toe cleats were added to the toe of each boot, offering extra protection but making the boots bulkier.
During the initial stages of WWII, the standard issue US military boot was the M-42 'Service Shoe', an all-leather toe cap boot with a two-piece stitched sole. This style was eventually replaced by the rough-out boot, probably the most recognisable boot of the war. After the Normandy invasion the American military started updating its equipment, and one of the items replaced was the combination of canvas gaiters and rough-out ankle boot. This was done by essentially making the rough-out boot higher, adding a double-buckle leather gaiter onto the top of the boot. The M-43 buckle boots were in general issue by the winter of 1944/45 and were worn by all branches of service, including the Paratroopers, Armoured and Infantry in the Battle of the Bulge. They were titled 'Boots, Combat Service', and nicknamed "Double Buckle Boots." While previous military boots like the Trench Boots only had laces, these boots went back to the older buckle style. These boots were made from synthetic rubber and other recycled materials, and had a leather fold-over cuff with two buckles. With only a single sole, they proved uncomfortable, but much easier to move around in than the Trench Boot. In times of shortage, some units, particularly Rangers, were issued Paratrooper jump boots, which were quite distinct from all other boots at the time. The Paratrooper boots were highly sought after by regular troops, who often purloined them or "acquired" them via alternative means.

Previous issue boots with minimal variation were used during the Korean War, but were not fit for purpose in Vietnam. Vastly different climates and temperatures rapidly deteriorated the soles and integrity of the Combat Service Boot, which was eventually replaced by the Jungle Boot. The general idea behind Jungle Boots first came about in Panama during the latter part of WWII, for soldiers serving in the Pacific. While these early boots consisted mainly of rubber and nylon, they did not hold up well. The government-issued boot was typically the traditional all-leather combat boot, or the Jungle Boot. The U.S. Department of War tasked the company Wellco with solving the troops' various issues with moisture, insects, and sand. Wellco created and sold a prototype which held up better than its predecessors. The boot was composed of a black leather sole and a canvas upper with an attached tongue, which helped to keep out insects and debris. It built upon earlier generations by using rubber and a cotton-blend canvas, but added in the durability of leather. Water drains were added to help keep the feet dry and prevent bacteria from growing. After in-combat testing and feedback, the Jungle Boot was adapted to better suit the soldiers' needs. The canvas blend was replaced with a nylon canvas that dried faster. Steel plates were affixed to the soles of the boot to protect the feet against punji stakes. Additional nylon webbing reinforced the boots' uppers, increasing durability. While these boots did not last as long as all-leather combat boots, they did offer a vast improvement over the earlier versions. Soldiers were known to carry multiple sets of boots, and often wore their jungle boots only when absolutely necessary. These high-tech jungle boots signalled the dawn of a new era; over the next 20 years combat boots would evolve into the lightweight protective boots worn today. While it is impossible to predict the future, it's a safe bet that combat boots will continue to grow and evolve alongside those that wear them.
From the Roman Empire to the sands of present-day Iraq, it's easy to forget that something we see regularly can have such a rich history. With huge leaps in all aspects of technology, who's to say which direction the design and features of future boots will take?
Michelson and Morley

Case Western Reserve University, Cleveland, OH

On November 14, 2005, Case Western Reserve University in Cleveland held a celebration both of the World Year of Physics and of the centennial of its physics building. As part of the festivities, the American Physical Society presented a plaque commemorating CWRU as a historic physics site, in honor of the Michelson-Morley experiment that took place there in 1887.

In the 19th century, physicists generally believed that just as water waves must have a medium to move across (water), and audible sound waves require a medium to move through (air), so also light waves require a medium, which was called the "luminiferous" (i.e., light-bearing) "ether". The Michelson-Morley experiment became what might be regarded as the most famous failed experiment to date and is generally considered to be the first strong evidence against the existence of the luminiferous ether. Michelson was awarded the Nobel Prize in 1907, becoming the first American to win the Nobel Prize in Physics.

Physicists had calculated that, as the Earth moved in its orbit around the sun, the flow of the ether across the Earth's surface could produce a detectable "ether wind". Unless for some reason the ether were always stationary with respect to the Earth, the speed of a beam of light emitted from a source on Earth would depend on the magnitude of the ether wind and on the direction of the beam with respect to it. The idea of the experiment was to measure the speed of light in different directions in order to measure the speed of the ether relative to Earth, thus establishing its existence.

To measure the velocity of the Earth through the ether by measuring how the light changed, Albert Michelson (1852-1931) designed a device now known as an interferometer. It sent a beam from a single source of light through a half-silvered mirror that split it into two beams traveling at right angles to one another. After leaving the splitter, the beams traveled out to the ends of long arms, where they were reflected back to the middle by small mirrors. They then recombined on the far side of the splitter in an eyepiece, producing a pattern of constructive and destructive interference based on the length of the arms. Any slight change in the amount of time the beams spent in transit would then be observed as a shift in the positions of the interference fringes.

Michelson had done a preliminary version of the experiment in 1881. After accepting a position at Case School of Applied Science in Cleveland, he began a collaboration with Edward Morley, a professor of chemistry at neighboring Western Reserve College. The apparatus they built floated in a trough of mercury, which allowed it to be rotated slowly. As it rotated, according to the ether theory, the speed of light in each of the two perpendicular arms would change, causing a shift in the interference pattern. The results of the experiment indicated a shift consistent with zero, and certainly less than a twentieth of the shift expected if the Earth's velocity in orbit around the sun were the same as its velocity through the ether. Other versions of the experiment were carried out with increasing sophistication, but the Michelson-Morley measurements were the first with sufficient accuracy to challenge the existence of the ether. The explanation of their null result awaited the insights provided by Einstein's theory of special relativity in 1905.
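To get a feel for the sensitivity involved, the classical prediction for the fringe shift upon a 90-degree rotation is N ≈ 2Lv²/(λc²). A minimal sketch, assuming the textbook figures usually quoted for the 1887 apparatus (effective arm length about 11 m after multiple reflections, light near 500 nm, and an ether wind equal to Earth's 30 km/s orbital speed); these numbers are standard assumptions, not taken from the account above:

```python
# Expected ether-wind fringe shift for a Michelson interferometer:
# N ~ 2 * L * v**2 / (lam * c**2), from the classical difference in
# light travel time between the parallel and perpendicular arms.
L = 11.0      # effective arm length in metres (after multiple reflections)
lam = 500e-9  # wavelength of the light in metres
v = 30e3      # Earth's orbital speed in m/s, the assumed ether wind
c = 3.0e8     # speed of light in m/s

N = 2 * L * v**2 / (lam * c**2)
print(f"expected shift: {N:.2f} fringes")  # roughly 0.4 of a fringe
```

A shift of about 0.4 fringes was comfortably within the instrument's sensitivity, which is why observing less than a twentieth of it counted as a decisive null result.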
Short and Long Run Phillips Curve

The Phillips curve is a single-equation econometric model, named after William Phillips, describing an inverse relationship between inflation and the unemployment rate. The long-run Phillips curve is now seen as a vertical line at the natural rate of unemployment, because the inverse relationship generally holds only in the short run.

That relationship is real, but it is evident only in the short run. It does not hold true in the long run. This is so because it is only in the short run that expected (ex-ante) inflation varies from actual (ex-post) inflation. This is not true over the long run. If expected inflation values turn out to be equal to the actual values, then the Phillips curve relationship would not exist even in the short run. But in reality, in the short run and only in the short run, expected and actual inflation do not match. A standard example of this mismatch, and hence of the existence of the short-run Phillips curve (SRPC), is the process of future wage contract negotiations, as for example the United Auto Workers (UAW) contracts. These future wage contracts are indexed to inflation, because both parties – employers and employees – are interested in real wages, not nominal ones. Our starting point is a new UAW wage contract negotiation in which expected inflation happens to match actual inflation. But in reality this is a rare occurrence: in real life, most of the time, expected (ex-ante) and actual (ex-post) values do not match. Let us see what would happen in that case.

As we have seen, it is very important for a government to achieve its objectives. But these economic objectives are closely related, and a movement in one can cause an opposite movement in another. Such movements need not be beneficial to the economy. For example, too large a balance of payments deficit might cause a fall in the exchange rate and impact the rate of inflation. So it is very much a 'balancing act', and sometimes it can get blown off course by events beyond the control of a particular government.

Unemployment and inflation trade-off

One of the key trade-offs that a government always faces is that between unemployment and inflation. Low unemployment may mean high inflation. This is because the high level of demand in the economy that helps give everyone a job may also be too much for the capacity of firms to cope with, and they may respond to rising demand by increasing prices. This, as we have already seen, is called demand-pull inflation. This trade-off was formalised in research done by Professor A. W. Phillips, and the curve he derived from his empirical study of unemployment and inflation has since become known as the Phillips curve. The relationship was based on observations he made of unemployment and changes in wage levels in the United Kingdom from 1861 to 1957.
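One compact way to capture the ex-ante/ex-post story above is the expectations-augmented Phillips curve, pi = pi_e - beta*(u - u_n). The sketch below uses invented parameter values purely for illustration; it shows that holding unemployment below the natural rate raises inflation only while expectations lag, and that inflation ratchets upward as expectations catch up, which is exactly why the long-run curve is vertical:

```python
# Expectations-augmented Phillips curve: pi = pi_e - beta * (u - u_n).
# Short run: pi_e is fixed, so lower unemployment means higher inflation.
# Long run: pi_e catches up to actual inflation, so unemployment can only
# stay below u_n at the cost of ever-accelerating inflation.
beta = 0.5   # illustrative slope of the SRPC
u_n = 5.0    # assumed natural rate of unemployment, percent

def inflation(u, pi_expected):
    return pi_expected - beta * (u - u_n)

pi_e = 2.0                   # initial expected inflation, percent
for _ in range(5):
    u = 4.0                  # hold unemployment one point below u_n
    pi = inflation(u, pi_e)
    print(f"expected {pi_e:.2f}% -> actual {pi:.2f}%")
    pi_e = pi                # expectations adjust to last period's inflation
```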
When you are asked to analyse the structure of a text as part of your English Language course, you are being asked how the writer has chosen to put that text together. Remember that when writers write stories or descriptions, they choose what order they will tell the reader certain things and at what pace. Usually, they have made these choices to create a particular effect on the reader.

For example, if I am writing a story about a robbery, how should I begin? Should I start by describing the house itself so my readers can picture the scene clearly? Or would I prefer to begin with the robber himself, zooming in on his old clothes or explaining his motive so that perhaps the reader will feel more sympathetic towards him? Or should I introduce the owner of the house as he hears a crash downstairs, so that the reader is thrown straight into the action (a technique called beginning 'in medias res')?

At the end of the same story, I might want to finally reveal the identity of the robber. How should I build tension before the shock? Should I use lots of one-sentence paragraphs to slow the reader down? Or should I use lots of dialogue so they can hear the characters' opinions? Or should I use lots of punctuation like question marks and ellipses to force them to pause and consider clues?

These are the types of decisions that writers make as they put together a story. It's your job to work out why they have made each decision. You can analyse structure at several different levels:

When you analyse the structure of the whole text, you can discuss the following elements:
- How the writer has chosen to open and close their text.
- How the focus shifts from paragraph to paragraph as the text progresses.
- What overall structure the narrative has (linear, non-linear, or cyclical).
- What narrative perspective the writer has chosen (first or third person).

When you analyse the structure of individual paragraphs, you can discuss the following elements:
- How the paragraph opens (the content of its topic sentence).
- How the paragraph closes (the content of its concluding sentence).
- The length of the paragraph (whether it contains one sentence or many sentences; lots of complex sentences or lots of simple sentences).
- Its cohesion with surrounding paragraphs (how it flows in the text).

When you analyse the structure of sentences, you can discuss the following elements:
- The sentence length (if it is particularly short or particularly long).
- The first or last word of the sentence (if they are noticeable for a particular reason).
- Repetition of words, word classes, or structures within the sentence.
- The sentence type (declarative, interrogative, exclamatory, or imperative).
- The sentence form (simple, compound, or complex).

When you analyse the use of punctuation, you can discuss the following elements:
- The types of punctuation used.
- Repeated punctuation and the possible reason for this.
- The way that the punctuation breaks up sentences or paragraphs.
- The tension created through placement of punctuation.
Note: Some of the information about analysing structure above may not be consistent across all exam boards. These are elements that teachers at Get My Grades consider to be relevant to structural analysis, but advice from exam boards may vary (especially around the issue of sentence types and forms).
What are property rights? This concept is often difficult to pin down and, therefore, definitions can vary. The most complete property rights systems have four underlying qualities: the right to possess a resource or asset, the right to determine its use, the right to sell it and receive profits from it or its outputs, and the ability to exclude others from using it. In short, property rights are legal rights entitling an owner to the use of, benefit from, sale of, and exclusion of others from a given resource. An example would be a farmers' market, where it is pretty clear who owns what. The farmer owns the tomatoes until a shopper pays for them. The shopper understands they are not allowed to simply take the tomatoes, just as the vendor in the next stall recognizes that they are not his tomatoes to sell. When the shopper purchases the tomatoes and walks away, the tomatoes then belong to the shopper; the farmer knows that he cannot just take them back. This is an illustration of property rights—common understandings backed with formal laws that allow the farmer to exert exclusive ownership of the tomatoes.

Property right arrangements can stretch beyond conventional notions of a single owner possessing a physical object for an indefinite amount of time. It is not only possible, but fairly common, for groups and governments to own resources. Property rights can be held temporarily, and can be held over something intangible, like an idea. The most important consideration is not who has property rights to a specific resource, but that someone, or some organization, does in fact have those rights. These rights can also be held in common, as with a tribe or village. A clear definition, consistent enforcement, and easy divestment of property rights are paramount to their effectiveness. Clearly defining a resource and delineating ownership is required for any legitimate transaction to be made. Consistently enforcing or defending one's right to the resource is essential to give ownership meaning. Finally, being able to divest (sell off) one's property is necessary if resources are to be used in the most efficient manner.

At their most basic level, property rights serve as the market's organizing force, creating rights to resources and compelling resource owners to fully consider the opportunity cost of any action. Property rights serve a purpose in a market similar to that of the rules of a football game. If both teams play by standard, well-known rules, there will be fewer stoppages for penalties and explanations. However, the rules are less likely to be obeyed unless a referee—think of the legal system—enforces them. A strictly and consistently enforced game will raise fewer disagreements and be significantly more enjoyable to participate in than a lax one. With the rules of the game clearly established, the players are free to focus on playing football. In the market, when property rights are present, entrepreneurs and business people are able to concentrate on being productive, i.e. to develop new products, create new services, and plan for the future.

The issue of property rights becomes particularly significant when dealing with environmental issues, because many natural resources do not typically lend themselves to clearly defined and easily enforceable property rights. Air and water are good examples in which property rights are not usually well-defined.
In these cases, pollution costs may be dispersed across all who use the air and water; a polluter does not pay any more than anyone else for additional pollution emissions. Since the cost the polluter pays is only a fraction of the total cost, the polluter has little reason to emit less. In any situation where participants do not bear the full cost of their actions, that unborne cost is known as an externality. The establishment of property rights can help to internalize externalities. Lawsuits are also possible where damages occur; for example, the Exxon Valdez oil spill resulted in the polluter paying both compensation for damages and punitive fees.

Similarly, when property rights are not defined, a tragedy of the commons can result. This occurs when resources are degraded because individuals have an incentive to utilize the maximum amount before someone else does. An example of this can be illustrated by saltwater fish stocks, the Atlantic bluefin tuna in particular. Since no one owns the fish, fishermen take until the population is nearly depleted; putting a fish back—even a small one—simply means someone else will catch it. In many tragedies of the commons, environmental stewardship is not practiced because it is difficult to exclude others from benefiting from the responsible practices. Yet New Zealand is an example where a system of fishing property rights has been developed in order to reduce externalities.

Solutions to externality or tragedy-of-the-commons problems usually involve the creation of uniform regulation or the establishment of property rights. The argument for uniform regulation, which is often implemented through inefficient and expensive government intervention, is based on the idea that the marketplace cannot handle externalities or prevent a tragedy of the commons. However, by recognizing the role property rights play in determining the effectiveness of markets, we can begin to think that—instead of setting a uniform standard or regulation—we should be establishing property rights in order to help create an effective market. When faced with true costs, polluters and others who create negative externalities through the use of their resources will have a stronger incentive to change their behavior. For example, if a fisherman can devise a way to establish property rights in a section of the ocean, perhaps by establishing a fish farm, he then has an incentive to manage his fish population for the greatest personal gain. Fundamentally, when one stands to profit, the choice will be to use the property to its highest overall value. When the ownership of a resource is clearly defined and there is a defined responsibility for both capturing the benefits and paying the full costs, there is little inefficiency in the market. Behavior that is seen as wasteful or inefficient will cease as owners begin to bear the full responsibility and cost of their actions. The incentives provided by the establishment of property rights help to ensure that owners engage in actions that produce positive net benefits for themselves and for society, because those who discover a higher-valued or more efficient use of a resource stand to gain further from the innovation.

In summary, property rights allow holders to use, profit from, sell, and exclude others from a resource. These rights are often thought of as the foundation for efficient markets because they establish clearly defined and enforceable rules.
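A toy harvest model makes the incentive gap between open access and ownership concrete. Every number below is invented purely for illustration; it is a sketch of the logic, not a calibration to any real fishery:

```python
# Toy tragedy-of-the-commons: a shared fish stock versus an owned one.
# All parameters are illustrative placeholders.
GROWTH = 0.2           # the stock regrows 20% per season
SEASONS = 20

def run(harvest_rule):
    stock = 100.0
    total_catch = 0.0
    for _ in range(SEASONS):
        catch = harvest_rule(stock)
        stock = max(stock - catch, 0.0) * (1 + GROWTH)
        total_catch += catch
    return total_catch, stock

# Open access: ten boats each race to grab 5% of the stock before rivals do.
open_access = lambda s: min(s, 10 * 0.05 * s)
# Sole owner: harvests only the sustainable surplus, leaving the stock intact.
sole_owner = lambda s: s * GROWTH / (1 + GROWTH)

print("open access (total catch, final stock):", run(open_access))
print("sole owner  (total catch, final stock):", run(sole_owner))
```

Run it and the open-access fleet takes big early catches while the stock collapses, whereas the sole owner harvests less per season but ends up with both a larger cumulative catch and an undiminished stock, which is the intuition behind the New Zealand fishing-rights example above.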
Property rights are particularly important for issues related to the environment because, once established, they can be a more practical alternative to government regulation.

Property Rights
This article by Armen Alchian, located within the Concise Encyclopedia of Economics, illustrates the importance, structure, and characteristics of property rights.

Property Rights and Environmental Policy: A New Zealand Perspective
Though specific to environmental issues in New Zealand, this paper provides a good summary of the evolution of property rights and the principles behind a successful property rights structure.

On the Commons
A project of the Tomales Bay Institute, this site is dedicated to exploring ideas and action about how the commons—or public goods—can be fairly and sustainably managed.

Property Rights v. Environmental Ruin
This 1994 essay by David Theroux of The Independent Institute, a public policy group, argues for the need for property rights when trying to address environmental problems.

For the Classroom

Lesson Six: Externalities, Property Rights and Pollution
A resource from the Foundation for Teaching Economics, the lesson ties together various economic concepts, including externalities, the 'commons,' costs/benefits, and property rights. [Grades 9-Undergraduate]

The Mystery of Is It Mine or Ours?
This lesson, from the National Council on Economic Education, confronts the issue of public versus private goods and services, and the role of government. [Grades 6-8]

Alchian, Armen A., "Property Rights," The Concise Encyclopedia of Economics, from the Library of Economics and Liberty.
Gwartney, James D., Macpherson, David A., Sobel, Russell S., and Richard L. Stroup. Macroeconomics, Private and Public Choice, 10th edition. Mason, Ohio: South-Western College Publishing, 2003, pp. 32–38.
Meiners, Roger E. and Bruce Yandle. "The Common Law: How it Protects the Environment," PERC Policy Series, Issue Number PS-13, May 1998.
Nickler, Patrick A., "A Tragedy of the Commons in Coastal Fisheries," Boston College Environmental Affairs Law Review, Spring 1999.
“Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.” – Douglas Adams, The Hitchhiker’s Guide to the Galaxy

We all know the universe is large, very large, but is it possible to really comprehend just how large it is? Sit down, take a deep breath, and we can give it a go.

In my previous scale article, we considered the sizes of stars, and finished by imagining the sun being the size of an orange. On this scale, the nearest star to the sun, also the size of an orange, would be 2,300 kilometres away. Even though stars can be immense on human scales, they are dwarfed by the distances between them. Let’s continue our journey outwards and consider larger distances in the universe.

The first stop is our cosmic home, the Milky Way galaxy. From our vantage point, buried deep within, the Milky Way appears as a broad band of stars encircling the sky. On a clear night, away from the lights of civilisation, we may be able to pick out a few thousand individual stars as mere points of light. The smooth swathe of light that accompanies them, however, is the combined light of many more distant stars. How many? It turns out the Milky Way is home to more than 200 billion stars: lots of stars like the sun, a few spectacular giants, and many, many faint dwarfs.

To get a handle on the size of the Milky Way, let’s pretend the distance across it is 3,000km, roughly the distance between Sydney and Perth. On this scale, the separation between the sun and its nearest neighbour would be about 100 metres, whereas the diameter of the sun itself would be about a tenth the thickness of a human hair. Other than a bit of tenuous gas, there’s a lot of empty space in the Milky Way.

For much of human history, we have prided ourselves on being at the centre of the universe, but as Douglas Adams pointed out, we live in the “unfashionable end of the Western Spiral arm of the Galaxy”. If the small town of Ceduna in South Australia, sitting roughly midway between Sydney and Perth, were the centre of the Milky Way, our sun would be orbiting 850km away, somewhere beyond Mildura in north-western Victoria (and, no, I’m not suggesting Mildura is unfashionable!)

So the Milky Way is huge: light, travelling at 300,000 kilometres a second, takes 100,000 years to cross from side to side. But we know that we share the universe with many other galaxies, one of the nearest being a sister galaxy to our own, the large spiral galaxy in Andromeda.

I am writing this in the dome of the 4-metre Mayall Telescope at Kitt Peak in Arizona, during a night where we are observing the Andromeda galaxy. As the light falls on our electronic detectors, it’s always startling to think it has taken more than two million years to travel from there to here, and we are seeing Andromeda as it was before our ancestors, Homo ergaster, walked the Earth.

Andromeda and the Milky Way inhabit a small patch of the universe known as the Local Group. While these two galaxies are by far the largest members, there are another 70 galaxies that are considerably smaller. To think about the scale of the Local Group, imagine that the Milky Way is a large dinner plate, with a diameter of roughly 25cm. On this scale, the Local Group would occupy the volume of a five-storey building, one that is as wide and deep as it is tall, and if the Milky Way sits on a table on the second floor, Andromeda would be a plate on a table on the fourth floor.
Spread throughout the rest of the building would be the 70 other Local Group galaxies. While some will be scattered almost randomly, many will be closer to the larger galaxies, but as dwarfs, most would be only a centimetre or less in size. While dwarfs represent the smallest of galaxies, we know we share the universe with some absolute galactic monsters. The largest yet discovered goes by the unassuming name of IC 1101, located a billion light years away (a single light year being equivalent to slightly less than 10 trillion kilometres) from the Milky Way. It truly dwarfs the Milky Way, containing more than a trillion stars, and would easily fill our five-storey building.

So we approach the ultimate distance scale for astronomers, the size of the Observable Universe. This is the volume from which we can have received light in the 13.7 billion year history since the Big Bang. Due to the expansion of the universe, the most distant objects are a mind-boggling 46 billion light years away from us. Can we hope to put this on some sort of understandable scale? The answer is yes!

Let’s think of the entire Milky Way as a 10c coin, roughly one centimetre across. Andromeda would be another 10c coin just a quarter of a metre away, and the Local Group could easily be held in your arms. The edge of the Observable Universe would be 5km away, and the universe would be awash with 300 billion large galaxies, such as our own Milky Way, living in groups and clusters, accompanied by an estimated ten trillion dwarf galaxies. This is a total of 30 billion trillion individual stars. And yet most of the universe is almost completely empty.

At the edge of the Observable Universe, we have almost reached the end of our journey. We are left with the question of what is beyond the Observable Universe. Just how much more is out there? If we combine all of our observations of the universe with our theoretical understanding of just how it works, we are left with a somewhat uncomfortable fact: the universe appears to be infinite in all directions, containing an infinite number of galaxies and stars. And that really is a lot to think about.

Read part one of the Cosmic Scale Series by Geraint Lewis – They might be giants: a mind-blowing sense of stellar scale.
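The scale models in this piece are all one ratio applied consistently. A short sketch that reproduces the 10c-coin numbers, assuming a Milky Way diameter of 100,000 light years and an Andromeda distance of about 2.5 million light years (both standard round figures, not stated in the article itself):

```python
# Rescale cosmic distances so the Milky Way (100,000 ly across) is a 1 cm coin.
MILKY_WAY_LY = 100_000             # assumed diameter in light years
COIN_CM = 1.0                      # the 10c coin of the article

scale = COIN_CM / MILKY_WAY_LY     # centimetres per light year

for name, ly in [("Andromeda distance", 2.5e6),
                 ("edge of the Observable Universe", 46e9)]:
    cm = ly * scale
    # Small distances read better in metres, big ones in kilometres.
    print(f"{name}: {cm / 100:.2f} m" if cm < 1e4
          else f"{name}: {cm / 100_000:.1f} km")
```

The output puts Andromeda at 0.25 m (the quarter-metre between coins) and the edge of the Observable Universe at 4.6 km, matching the roughly 5 km quoted above.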
Getting the "Wright" Pitch

To gather this data, researchers will place special instruments on various parts of the airplane. Each instrument is designed to gather a specific type of information. For example, some instruments will collect information about how the air flows around the airplane. Other instruments will measure how fast the air flows or how great the air pressure is at certain points along the aircraft. To gather the information needed by the AIAA pilots, the balance will measure the forces and where those forces are located relative to the center of gravity. Remember, the center of gravity (CG) is the point where the entire weight of the airplane is considered to be concentrated. On a paper airplane you could find the general location of that point by balancing the paper airplane on your finger. The point at which the airplane balances on your finger would be the center of gravity (CG). All three motions (roll, pitch, yaw) pass through this point. All four forces interact on the airplane as it moves about this point.

Designing airplanes with static stability is important. (For the purposes of our discussion we are not considering military jet fighters, as some instability is preferred there, making those aircraft more quickly maneuverable when managed by computer and pilot.) Stability is the tendency of an airplane to fly with equilibrium on its flight path. To fly with equilibrium means that the sum of all forces and moments acting on the airplane equals zero. For example, let's look at an airplane that is flying straight and level. For this airplane to fly with equilibrium in its straight and level flight path, the four forces must be in balance. That means the lift will be equal to the weight and the thrust will be equal to the drag. It also means that there are no net moments acting on it. These moments try to make the aircraft rotate about the center of gravity in pitch, roll, or yaw.

Now let's have the airplane encounter some minor turbulence. This turbulence causes the airplane to nose up, increasing its angle of attack. If the airplane reacts to this disturbance by returning itself to its straight and level flight path (without the pilot having to make adjustments), then the airplane has static stability. Now let's have the airplane encounter some minor turbulence again. This turbulence causes the airplane to nose up and increase its angle of attack. If the airplane holds its new angle of attack after the turbulence has passed, then it is considered to have neutral static stability. We'll return the airplane to its state of equilibrium and let it encounter some more minor turbulence. This turbulence also causes the airplane to nose up. If, even after the turbulence has passed, the airplane continues to nose up and does not automatically return to its previous flight path without the pilot making adjustments to the controls, then the airplane is considered to be "statically unstable".

When graphing data related to the longitudinal static stability of an airplane, the graphs of airplanes with static stability have a similar slope. Let's take a look at some actual data and graph it. The chart on the next page contains some hypothetical wind tunnel test data on a small airplane. As researchers we will be considering only the columns marked "Alpha (deg)" and "CM". The "Alpha (deg)" column tells us the angle of the nose up or down relative to the airflow. This is the angle of attack.
Remember, increasing the angle of attack will generally increase the amount of lift. The "CM" column gives us information about the amount of pitching moment being generated by the airplane. A value (number) for CM that is positive means the airplane is pitching its nose up, and a negative value means the aircraft is pitching its nose down. The magnitude of the CM is an indication of how fast the aircraft is pitching: the greater the absolute magnitude of the number, the faster the rotation (or pitching moment). The pitching moment was converted into a coefficient in a manner similar to the lift coefficient and drag coefficient. This allows us to apply the pitch test at other velocities, and to other wind tunnel models or aircraft that are otherwise identical except for being a different scale.
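In practice, the stability judgment comes down to the sign of the slope of CM against alpha: a negative slope means a nose-up disturbance generates a nose-down (restoring) moment, which is the statically stable case. A minimal sketch of that check, with placeholder numbers standing in for the chart, which does not appear in this text:

```python
# Longitudinal static stability check: fit the slope of CM versus alpha.
# A negative slope means a nose-up gust produces a nose-down restoring
# moment, i.e. the airplane is statically stable.
# The values below are invented placeholders for the missing chart.
alpha = [-2.0, 0.0, 2.0, 4.0, 6.0]          # angle of attack, degrees
cm    = [0.06, 0.03, 0.00, -0.03, -0.06]    # pitching-moment coefficient

n = len(alpha)
mean_a = sum(alpha) / n
mean_c = sum(cm) / n
# Least-squares slope of CM with respect to alpha.
slope = sum((a - mean_a) * (c - mean_c) for a, c in zip(alpha, cm)) \
        / sum((a - mean_a) ** 2 for a in alpha)

print(f"dCM/dalpha = {slope:.4f} per degree")
print("statically stable" if slope < 0 else
      "neutral" if slope == 0 else "statically unstable")
```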
Herman Melville (1819–91), American writer par excellence, wrote this poem to commemorate those killed at a Civil War battle that took place along Chickamauga Creek in Georgia on September 19–20, 1863. The Confederate Army, under General Braxton Bragg (1817–76), defeated the Union Army, under General William Starke Rosecrans (1819–98). The casualties and losses of those two days of fighting—over 34,000—were higher than in any battle of the war, save Gettysburg.

Identify and compare the subjects of the two stanzas. What distinguishes them, and what do they have in common? Who are the “brothers who victorious died,” and why will musing on them be an ever-pleasing memory? What does Melville mean by saying, “mischance is honorable too”? Why are those who died in “seeming defeat,” not knowing the conflict’s outcome, also worthy of memory?
Answers

By definition, Senecan tragedy was very bloodthirsty and was, in some ways, well designed for Elizabethan audiences. However, most Elizabethan dramatists did not follow Seneca's form of writing; rather, they used a revenge theme, an ending filled with blood and gore, and otherworldly beings, all included in the story. Obviously, Shakespeare used all of these elements in this play. Revenge tragedies contain certain elements: first, a hero bent on avenging an evil deed (Hamlet), often encouraged by the apparition or ghost of a close friend or relative (Hamlet's father). You will also have scenes of death and mutilation, insanity (Hamlet once again), plays within the play, and the eventual death of the would-be hero (by a poisoned blade).
The National Convention in the era after Robespierre's downfall was significantly more conservative than it had been before and deeply entrenched in the values of the moderate middle class. The change was so drastic that once-powerful groups like the sans-culottes and Jacobins were forced underground, and sans-culottes even became a derisive term in France. Meanwhile, the French economy struggled during the winter of 1794–1795, and hunger became widespread.

Although the members of the convention worked diligently to try to establish a new constitution, they faced opposition at every turn. Because many sanctions against the churches had been revoked, the clergy—many of whom were still loyal to the royalty—started to return from exile. Likewise, the Comte de Provence, the younger brother of Louis XVI, declared himself next in line for the throne and, taking the name Louis XVIII, declared to France that royalty would return. (Hopeful French nobles in exile briefly referred to Louis XVI's young son as "Louis XVII," but the boy died in prison in June 1795.)

On August 22, 1795, the convention was finally able to ratify a new constitution, the Constitution of 1795, which ushered in a period of governmental restructuring. The new legislature would consist of two houses: an upper house, called the Council of Ancients, consisting of 250 members, and a lower house, called the Council of Five Hundred, consisting of 500 members. Fearing influence from the left, the convention decreed that two-thirds of the members of the first new legislature had to have already served on the National Convention between 1792 and 1795. The new constitution also stipulated that the executive body of the new government would be a group of five officers called the Directory. Although the Directory would have no legislative power, it would have the authority to appoint people to fill the other positions within the government, which was a source of considerable power in itself. Annual elections would be held to keep the new government in check.

The dilemma facing the new Directory was a daunting one: essentially, it had to rid the scene of Jacobin influence while at the same time preventing royalists from taking advantage of the disarray and reclaiming the throne. The two-thirds rule was implemented for this reason, as an attempt to keep the legislature's composition similar to that of the original, moderate-run National Convention. In theory, the new government closely resembled that of the United States, with its checks-and-balances system. As it turned out, however, the new government's priorities became its downfall: rather than address the deteriorating economic situation in the country, the legislature instead focused on keeping progressive members out. Ultimately, paranoia and attempts at overprotection weakened the group.

Meanwhile, fortified by the Committee of Public Safety's conscription drive of 1793, the French army had grown significantly. While the foundation of the Directory was being laid, the army, having successfully defended France against invasion from Prussia and Austria, kept right on going, blazing its way into foreign countries and annexing land. During the period from 1795 to 1799 in particular, the French army was nearly unstoppable. Napoleon Bonaparte, a young Corsican in charge of French forces in Italy and then Egypt, won considerable fame for himself with a series of brilliant victories, and also amassed massive reservoirs of wealth and support as he tore through Europe.
The Directory encouraged this French war effort across Europe, though less as a democratic crusade against tyranny than as a means of resolving the unemployment crisis in France. A large, victorious French army lowered unemployment within France and guaranteed soldiers a steady paycheck to buy the goods they needed to survive. The Directory hoped that this increase in income would encourage an increase in demand, reinvigorating the French economy.

Unfortunately, it was not long before the Directory began to abuse its power. The results of the elections of 1795 were worrisome to the Directory because a number of moderate royalists won. Although these royalists didn't exactly qualify as counterrevolutionaries, their loyalty to the Directory was nevertheless suspect. Then, in May 1796, a group of Jacobins, led by prominent publisher Gracchus Babeuf, met secretly to plan a coup in the hopes of reinstating the government of the Constitution of 1793. Already troubled by the 1795 election results, the Directory squashed the coup plot, had the conspirators arrested, and had Babeuf guillotined.

As the elections of 1797 drew near, the Directory noticed that significant royalist and neo-Jacobin influences were leaking into the republic, which could have terrible implications for the direction of the legislature. On the other hand, the Directory had to obey the Constitution of 1795 and its mandate for annual elections. It therefore allowed the elections to proceed as scheduled. However, on September 4, 1797, after the elections did indeed produce decidedly pro-royal and pro-Jacobin results, three members of the Directory orchestrated an overthrow of the legislature, annulling the election results and removing a majority of the new deputies from their seats. The coup plotters also unseated two members of the Directory itself—former military strategist Lazare Carnot being one of them—and installed two new directors, further ensuring that the government would remain staunch in its moderate stance.

This new Directory was powerfully conservative, initiating strong new financial policies and cracking down on radicalism through executions and other means. However, the coup and the Directory's subsequent abuses of power destroyed all of the government's credibility and further disillusioned the French populace. In the elections of 1798, the left made gains, feeding on public anger about the coup and the reinstatement of the military draft. The Directory, justifiably fearing the opposition's gains, once again nullified almost one-third of the election results, ensuring that its own policies would remain strongly in place. Public dissatisfaction was an obvious result, and the next elections would have the lowest turnout of any during the Revolution. Meanwhile, inflation was continuing unchecked, leading the public to wonder whether a royal return to power wouldn't be more beneficial. Trust and faith in the government neared an all-time low.

As the government's credibility took a turn for the worse, so too did French military fortunes. In 1799, Napoleon's seemingly unstoppable forward progress ran into a roadblock in Egypt, and France's army in general faced simultaneous threats from Britain, Austria, Russia, and the Ottoman Empire. Hearing of the bedlam taking place in mainland Europe, as well as within his own country, Napoleon deserted his men and headed back to France.
The failing war efforts amplified the French people's distrust of the Directory, and large majorities of the French public began calling for peace at home and abroad. In May 1799, the lower house of the legislature, the Council of Five Hundred, elected Emmanuel-Joseph Sieyès—of "What Is the Third Estate?" fame—to the Directory. This election was the result of extensive maneuvering on Sieyès's part. Sieyès, however, did not want to keep his newfound power for himself but instead intended to use it to protect the French government from future instability and disturbances. Therefore, he enlisted the aid of Napoleon, with whom he began to plan a military coup to topple the very same Directory on which Sieyès himself served. This coup materialized on November 9, 1799, when Napoleon, who had returned to France, overthrew the Directory. The next day, Napoleon dissolved the legislature and instituted himself as first consul, the leader of a military dictatorship. By imposing this state of military rule that would grip France for fifteen years, Napoleon effectively ended the French Revolution.

Although it was the Directory that had encouraged the French army's actions, ultimately, the army's unprecedented success in its outward expansion ended up working against the Directory rather than for it. Being away from home for so long, the respective companies of soldiers—particularly those under the control of Napoleon—formed their own identities and group philosophies. By splitting the spoils of each successful campaign with his own troops, Napoleon earned the steadfast devotion of what amounted to a private army. This loyalty would prove essential to the success of his eventual coup and the years of military rule and expansionism that would follow.

Sieyès's political maneuvering may seem inexplicable at first, as he essentially finagled his way into power in the Directory just so he could use that power to remove himself from it. Though that explanation is an oversimplification, it illuminates Sieyès's priorities and demonstrates the depth of the revolutionary spirit that prompted him to make such a sacrifice. To Sieyès, it was clear that, at the time, military rule under the watch of someone such as Napoleon would be far more beneficial to France than the argumentative, corrupt, and generally ineffective system that was in place. Indeed, though Napoleon would lead as a dictator of sorts, he would do so with much more respect for the spirit of liberty and equality than the originators of the French Revolution had pursued.
Swine influenza is also referred to as H1N1 type A influenza. It is a respiratory illness of pigs caused by infection with the swine influenza A virus. Symptoms include fever, cough, sore throat, runny nose, body aches, headache, chills, and fatigue. Many people with swine flu have also experienced diarrhea and vomiting. Treatment mainly involves antiviral drugs such as zanamivir, peramivir, and oseltamivir, with antibiotics used to treat secondary bacterial complications. Antiviral treatment can reduce the severity of one's signs, and possibly the risk of complications. But flu viruses can acquire resistance to these drugs.
radar: Acronym for Radio Detection And Ranging. Radio waves are bounced off an object, and the time at which the echo is received indicates its distance.

radial motion: Motion along a particular line of sight, which induces apparent changes in the wavelength (or frequency) of radiation received.

radiation: A way in which energy is transferred from place to place in the form of a wave. Light is a form of electromagnetic radiation.

radiation belts: Zones or belts of charged particles that are trapped in magnetic fields around the Earth.

radiation-dominated universe: Early epoch in the universe, when the density of radiation in the cosmos exceeded the density of matter.

radiation pressure: The transfer of momentum carried by electromagnetic radiation to a body that the radiation impinges upon.

radio galaxy: Type of active galaxy that emits most of its energy in the form of long-wavelength radiation.

radio lobe: Roundish region of radio-emitting gas, lying well beyond the center of a radio galaxy.

radio telescope: Large instrument designed to detect radiation from space at radio wavelengths.

radioactivity: The release of energy by rare, heavy elements when their nuclei decay into lighter nuclei.

radius-luminosity-temperature relation: A mathematical proportionality, arising from Stefan's Law, which allows astronomers to indirectly determine the radius of a star once its luminosity and temperature are known.

reaction wheel: Wheels on a spacecraft which change the spacecraft's attitude.

red dwarfs: Small, cool, faint stars at the lower-right end of the main sequence on the H-R diagram, whose color and size give them their name.

red giant star: An evolved star that has exhausted the hydrogen fuel in its core and is powered by nuclear reactions in a hot shell around the stellar core. The diameter of a red giant is much larger than that of the Sun, and its surface temperature is relatively low, so that it glows with a red color.

red-giant branch: The section of the evolutionary track of a star that corresponds to continued heating from rapid hydrogen shell burning, which drives a steady expansion and cooling of the outer envelope of the star. As the star gets larger in radius and its surface temperature cools, it becomes a red giant.

red shift: Change in the wavelength of light emitted from a source moving away from us. The relative recessional motion causes the wave to have an observed wavelength longer (and hence redder) than it would if the source were not moving. The cosmological red shift is caused by the stretching of space as the universe expands.

red supergiant: An extremely luminous and large red star.

reddening: Dimming of starlight by interstellar matter, which tends to scatter high-frequency (blue) components of the radiation more efficiently than the lower-frequency (red) components.

reflecting telescope: A telescope which uses a carefully designed mirror to gather and focus light from a distant object.

refracting telescope: A telescope which uses a lens to gather and focus light from a distant object.

refraction: The tendency of a wave to bend as it passes from one transparent medium to another.

relativistic particle: A particle moving at nearly the speed of light.

relativity, general theory: A theory formulated by Einstein that describes how a gravitational field can be replaced by a curvature of space-time.
- relativity, special theory: A theory formulated by Einstein that describes the relations between measurements of physical phenomena by two different observers who are in relative motion at constant velocity.
- resolution: In astronomy, "resolution" or "resolving power" refers to the ability of a telescope to distinguish details. "Angular resolution" refers to the ability to distinguish details in an image. For example, Chandra can distinguish details that are only half an arc second apart. If your eyes had similar resolving power, you could read the letters on a stop sign at a distance of 12 miles! "Energy resolution" refers to the ability to distinguish the energies or wavelengths of photons. In visible light, this amounts to the ability to distinguish different colors. When Chandra makes an observation with the transmission gratings in place, it can distinguish thousands of different X-ray energies or colors.
- revolution: Orbital motion of one body about another, such as the Earth about the Sun.
- right ascension: Celestial coordinate used to measure longitude on the celestial sphere. The zero point is the position of the Sun on the vernal equinox.
- Roche limit: Often called the tidal stability limit, the Roche limit gives the distance from a planet at which the tidal force, due to the planet, between adjacent objects exceeds their mutual attraction. Objects within this limit are unlikely to accumulate into larger objects. The rings of Saturn occupy the region within Saturn's Roche limit.
- Roche lobe: An imaginary surface around a star. Each star in a binary system can be pictured as being surrounded by a tear-shaped zone of gravitational influence, the Roche lobe. Any material within the Roche lobe of a star can be considered to be part of that star. During evolution, one member of the binary system can expand so that it overflows its own Roche lobe and begins to transfer matter onto the other star.
- rotation: Spinning motion of a body about an axis.
- rotation curve: Plot of the orbital speed of disk material in a galaxy against its distance from the galactic center. Analysis of the rotation curves of spiral galaxies indicates the existence of dark matter.
- RR Lyrae star: Variable star whose luminosity changes in a characteristic way. All RR Lyrae stars have more or less the same period.
Dental X-rays—also called radiographs—are a widely used preventive and diagnostic tool that your dentist uses to locate damage and disease that isn't visible to the naked eye. X-ray procedures are typically performed yearly during your annual cleaning appointments. Receiving regular X-rays helps your dentist monitor and track the progress of your oral health. There are several types of dental X-rays, each capturing a slightly different view or angle of the affected area. The two most common forms of dental X-rays are intraoral—meaning the X-ray is filmed inside the mouth—and extraoral—meaning the X-ray is filmed outside the mouth. The most common form of digital radiography in dentistry is the intraoral X-ray. Some examples of this technology include:
- Bitewing: This type of X-ray offers a visual of both the lower and upper teeth. Bitewings are used to help your dentist locate decay between teeth.
- Occlusal: These X-rays create a clear view of the floor of the mouth, which shows your dentist the bite of the upper and lower jaw.
- Periapical: This detailed X-ray provides a view of the entire tooth, from the crown to the bone that supports it.
- Panoramic: Showing an image of the teeth, jaws, nasal area, sinuses and joints, this type of X-ray is one of the most advanced imaging options available.
These X-rays are typically performed in the office of a dentist or dental specialist. Special precautions are taken—such as wearing a lead vest to protect against low levels of radiation—and your entire procedure is monitored by a dental professional.
As you pick up your child from school each day, you naturally ask them what they did at school. Quite often their response is that they "just played". That one little phrase can be quite disconcerting to parents who are looking at preschool as the beginning of their child's academic career. They are anxious to find out what their child learned at school today. However, playing is how children develop the social skills so critical for interpersonal connections. They learn to listen, take turns, and share. They may engage in pretend play in the dramatic play area, use their words to resolve conflict, or brainstorm ideas to enhance or change the way something works. Playing is also how children strengthen their fine and gross motor skills. Activities such as putting puzzles together, stringing beads, and cutting paper with scissors improve the fine motor skills necessary for pre-writing dexterity. Gross motor skills are further developed through activities such as running, climbing, riding tricycles, and throwing and catching games. Children develop the cognitive skills of critical thinking during play when they engage interactively in stories being read to them, participate in songs and fingerplays, and learn to follow multi-step directions. Successful play experiences in preschool lay a solid foundation for all learning to come. Children are playing to learn and learning to play. So when your child says "I just played", rest assured that they also developed increased physical coordination, learned self-reliance skills, explored creative expression and expanded language opportunities. Author Anita Wadley once wrote from a child's perspective: "I'm preparing for tomorrow. Today, I'm a child and my work is play." by Nancy Nathanson, Regional Education Director, Prime Time Early Learning Centers
What Part of the Respiratory System is Affected by Asthma? Asthma is a chronic condition that affects millions of people across the world. Although it isn't curable, it is controllable. 'Asthma' is the Greek word for panting or breathing hard; the Greeks named the condition for the wheezing sound that is diagnostic of it. Asthma is a chronic respiratory condition that arises from allergies or allergic responses in the lungs and is characterized by sudden attacks of laboured breathing, chest constriction and coughing. So what part of the respiratory system does asthma affect? The respiratory system supports the oxygen needs of the body by taking in air, removing the oxygen at the level of the alveoli (air sacs) and delivering the oxygen to the blood, which then transports the life-supporting oxygen around the body. This is a continual process: air is exchanged constantly, not just when you breathe in. Millions of tiny air sacs store the air and oxygen for use, and the air is exchanged with each pass of the blood through the pulmonary system. During an asthmatic event the muscles surrounding the air tubules (bronchioles) constrict. This constriction doesn't allow the air in the alveoli (air sacs) to be released, and the lungs become over-inflated. This over-inflation forces the sufferer to cough in an attempt to get rid of the trapped air. If coughing doesn't relieve the situation, or if the swelling and constriction become more severe, the sufferer begins to use their accessory breathing muscles. This causes the shoulders to hunch over and rise with each breath. What part of the respiratory system does asthma affect when the asthma sufferer is wheezing? The wheezing sound that is common to asthmatics is caused by the contraction of the bronchioles as air passes through tubes that are almost completely blocked. During an asthmatic attack, mediators released from mast cells cause the airway muscles to contract, which increases mucus production and narrows the airways. White blood cells then flood the area, which keeps the attack going. This constriction of the airway, and the air forced through those tiny passages, is what causes the wheezing sound that can be heard either audibly or through a stethoscope. What part of the respiratory system does asthma affect before an asthmatic attack? Triggers for the physical changes are varied and many. They range from environmental allergens, occupational chemicals and weather changes to cold air and scented items such as deodorants and perfumes. Some attacks are induced by exercise. The respiratory system is a complicated organ system that supports the body by delivering oxygen at the cellular level to help it heal and sustain life. Taking care of the respiratory system helps to improve your health and take care of you.
Concrete surrounds us in our cities and stretches across the land in a vast network of highways. It’s so ubiquitous that most of us take it for granted, but many aren’t aware that concrete’s key ingredient, ordinary portland cement, is a major producer of greenhouse gases. Each year, manufacturers produce around 5 billion tons of portland cement — the gray powder that mixes with water to form the “glue” that holds concrete together. That’s nearly three-quarters of a ton for every person on Earth. For every ton of cement produced, the process creates approximately a ton of carbon dioxide, all of which accounts for roughly 7 percent of the world’s carbon dioxide emissions. And with demand increasing every year — especially in the developing world, which uses much more portland cement than the U.S. does — scientists are determined to lessen the growing environmental impact of portland cement production. One of those scientists is Gaurav Sant of the California NanoSystems Institute at UCLA, who recently completed research that could eventually lead to methods of cement production that give off no carbon dioxide, the gas that composes 82 percent of greenhouse gases. Sant, an associate professor of civil and environmental engineering and UCLA’s Edward K. and Linda L. Rice professor of materials science, found that carbon dioxide released during cement manufacture could be captured and reused. The study is published in the journal Industrial and Engineering Chemistry Research. “The reason we have been able to sustain global development has been our ability to produce portland cement at the volumes we have, and we will need to continue to do so,” Sant said. “But the carbon dioxide released into the atmosphere creates significant environmental stress. So it raises the question of whether we can reuse that carbon dioxide to produce a building material.” During cement manufacturing, there are two steps responsible for carbon emissions. One is calcination, when limestone, the raw material most used to produce cement, is heated to about 750 degrees Celsius. That process separates limestone into a corrosive, unstable solid — calcium oxide, or lime — and carbon dioxide gas. When lime is combined with water, a process called slaking, it forms a more stable compound called calcium hydroxide. And the major compound in portland cement is tricalcium silicate, which hardens like stone when it is combined with water. Tricalcium silicate is produced by combining lime with siliceous sand and heating the mixture to 1,500 degrees Celsius. Of the total carbon dioxide emitted in cement manufacturing, 65 percent is released when the limestone is calcined and 35 percent is given off by the fuel burned to heat the tricalcium silicate compound. But Sant and his team showed that the carbon dioxide given off during calcination can be captured and recombined with calcium hydroxide to recreate limestone — creating a cycle in which no carbon dioxide is released into the air. In addition, about 50 percent less heat is needed throughout the production cycle, since no additional heat is required to ensure the formation of tricalcium silicate.
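In outline, the chemistry behind such a closed cycle corresponds to three standard textbook reactions (this is general chemistry, not a detail taken from the UCLA study itself):

    CaCO3 → CaO + CO2              (calcination, at about 750 degrees Celsius)
    CaO + H2O → Ca(OH)2            (slaking)
    Ca(OH)2 + CO2 → CaCO3 + H2O    (carbonation, reusing the captured carbon dioxide)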
Sant said the method is analogous to how limestone cementation occurs in nature, where limestone forms the tough exoskeletons of coral, mollusks and seashells, and when microbes form limestone that cements grains of sand together. Although scientists had examined this idea previously, Sant said it had never been demonstrated before with a view to carbon dioxide-neutral cement production — and that it actually worked faster than he and his colleagues expected. The cycle took just three hours to complete, compared with the more than 28 days needed for portland cement to react with water to near completion and reach its final hardest consistency. The successful sample was very small, as required by laboratory conditions. But Sant said now that the process has been proven, it could, in time, be scaled up to production levels. If cement manufacturers continue to operate as they currently do, and if proposed carbon taxes in the U.S. and other nations are eventually enacted, cement production would become much more expensive than it is now. Were that to happen, a new method for producing cement with little or no environmental impact would be of even greater interest, Sant said. The study contributes to the goals of UCLA’s Sustainable LA Grand Challenge, a university-wide initiative to transition the Los Angeles region to 100 percent renewable energy, local water, and enhanced ecosystem health by 2050. Sant is also helping to develop the work plan for Sustainable LA. The study’s co-authors were Mathieu Bauchy, an assistant professor of civil and environmental engineering, Magdalena Balonis, a research scientist, postdoctoral scholars Kirk Vance and Isabella Pignatelli, and doctoral scholar Gabriel Falzone, all of UCLA. The research was supported by the National Science Foundation and was conducted in the Laboratory for the Chemistry of Construction Materials in the UCLA Henry Samueli School of Engineering and Applied Science, the Electron Imaging Center for Nanomachines at the California NanoSystems Institute, and the Molecular Instrumentation Center in UCLA’s department of chemistry and biochemistry.
The world can be unforgiving towards people who are deaf. The threat of social exclusion is always present. You cannot control how others think or behave, but you can control how you think. This article contains proactive thinking strategies. In the last edition, you were asked about a particularly unpleasant incident related to your child's deafness. How did you feel? Humiliated? Angry? That question was designed to make you think about Reframing. Reframing is a cognitive skill that significantly influences a person's psychic makeup. The key idea is "It's not what happens to you, but how you view it." How a person interprets events determines how they perceive the world and, more importantly, how they behave. Studies have shown that people who reframe negative experiences are better able to create positive outcomes for others and themselves. How you are feeling in any given event significantly decides your behavior. Below is a thought map of how people commonly react to circumstances. It shows how Reframing works. Known as the ABC schema, it was described by the psychologist Martin Seligman. Adversity and Consequences A → C "A" means Adversity and "C" means the emotional Consequence(s). Adversity is experiencing a negative incident. Examples are hurtful criticism, the discovery of a child's deafness, or a relationship breakdown. Yet, at a basic level, actual events have nothing to do with emotional Consequence(s). That is not to blissfully ignore traumatic experiences, but to say that we are not passive pawns of happenstance. Reframing unpleasant or threatening circumstances requires critical thinking. The cognitive skill of Reframing, essentially, occurs in the Belief system. In between "A" and "C" is "B" - the Belief system. As such, the interpretation of the Adversity (A) by the Belief system (B) determines the Consequence(s) (C). Adversity, Belief System, and Consequences A → B → C Following the A-B-C sequence, the Belief system (B) determines emotions, which cause the Consequences (C). A does not cause C: B causes C. Irrational thoughts create negative emotions, which in turn cause negative behavioural outcomes. Rational thoughts create positive emotions, which in turn cause proactive behavioural outcomes. Reframing irrational beliefs therefore increases the likelihood of favourable consequences. In conversation, Reframing requires a presence of mind and the ability to watch, observe, and then respond. Reactive thinking views Adversity (A) as directly causing the emotional Consequence (C). By contrast, the proactive mindset continually reframes Adversity in a positive manner. Threats are recognised, but the instinct is to reframe, and then to find and pursue opportunities. The Belief system (B) reframes the Adversity (A) to create positive Consequences (C). The following thought processes were adapted from The Relaxation and Stress Reduction Workbook by Martha Davis, Matthew McKay and Elizabeth Eshelman (1982). Placed in a deafness context are an irrational Belief and the corresponding rational Belief for the same issue. Reframing is purposefully shifting from irrational to rational Beliefs. People without a disability are also prone to irrational thinking; the Belief system, therefore, has nothing to do with deafness. Your thoughts, not deafness, determine your reality. It is not what happens to us, but how we view the circumstances we are dealing with. Irrational Beliefs can place unnecessary pressure on others and yourself. They can also lead to misinterpretations of reality and self-defeating thoughts.
Misinterpretations of reality are a major cause of anger and irritation. For example, the thought "Society is to blame for my unhappiness" will likely put people off or, worse, make them pity your deafness and unhappiness. Likewise, thinking "Strangers should make an effort to accommodate my deafness" will lead to trouble when nobody helps you. Rational Beliefs, or proactive self-talk, acknowledge the issue and take on personal accountability. Rational Beliefs reframe thinking in a realistic, flexible or proactive manner, and they are crucial for conflict resolution - for example, asking someone to face you to assist speech-reading. Reframing is a skill that takes practice to master. It requires a presence of mind and the ability to watch, observe and then respond. Reframing is the fifth of eight themes that create Potential Maximisation. The exercise below is your practical application of Reframing. List five recent negative life experiences. Draw two columns. Write irrational/reactive beliefs in one column, reframe, then write rational/proactive beliefs in the other. Use this article as a guide. The following question prepares you for the next column's theme of Persistence. How many single words can you list about Darwin, Australia? Write as many as possible (e.g., crocodiles, Northern Territory, etc.), however reasonable or far-fetched. "We do not see things as they are. We see them as we are." - The Talmud. The contents of these columns are copyright of Dr. Paul Jacobs (PhD). All rights reserved. Reproduction of all or any substantial part of the contents in any form is prohibited. No part of Dr Paul Jacobs' material on Potential Maximisation may be distributed or copied for any commercial purpose without expressed approval by the author.
Brains are the most powerful computers known. Now microchips built to mimic insects' nervous systems have been shown to successfully tackle technical computing problems like object recognition and data mining, researchers say. Attempts to recreate how the brain works are nothing new. Computing principles underlying how the organ operates have inspired computer programs known as neural networks, which have been used for decades to analyze data. The artificial neurons that make up these programs imitate the brain's neurons, with each one capable of sending, receiving and processing information. However, real biological neural networks rely on electrical impulses known as spikes. Simulating networks of spiking neurons with software is computationally intensive, setting limits on how long these simulations can run and how large they can get. To overcome these constraints, several groups around the world have started developing so-called "neuromorphic hardware" that uses models of spiking neurons on microchips. For instance, Qualcomm released its Zeroth chip in October 2013. The company advertises the chip as part of its next generation of mobile devices for image and speech processing. A major advantage that brains have over conventional computers is that they can solve many problems in parallel. However, conventional algorithms are often difficult to implement on neuromorphic hardware—novel algorithms that embrace the nature of brain-like computing architecture have to be used instead. "Biological neuronal networks that have been described by neuroscientists in the last few decades are a very rich source of inspiration for this task," neuroscientist and computer scientist Michael Schmuker at the Free University of Berlin tells Txchnologist. Now Schmuker and his colleagues have programmed neuromorphic hardware with a "neural network" inspired by elements of the nervous systems of insects. A system like the one the researchers designed "could be used as a building block for future neuromorphic supercomputers," Schmuker says. "These computers will operate much like the brain, performing all computations in a massively parallel fashion." The scientists relied on the Spikey neuromorphic microchip developed at the University of Heidelberg in Germany. The device can perform 10,000 times faster than its biological counterparts. A living model The researchers concentrated on the insect olfactory system, which deals with smell. "The olfactory system deals with a very complex input space—chemical space," Schmuker says. "This is reflected in its architecture, which supports parallel processing of a high number of different input channels." "The insect olfactory system is particularly suited as inspiration because it is less complex than its vertebrate counterpart, while its basic blueprint is very similar," he adds. To train artificial neural networks, researchers start by feeding them data. The neurons let investigators know when they have solved a given problem, such as correctly identifying a letter or digit. The network then alters the way data is relayed between these neurons, and the problems are tested again. Over time, the network figures out which arrangements between neurons are best at computing desired answers, mimicking how real brains learn. They had their system tackle the problem of classifying multivariate data—that is, data containing several variables. This is a common need in signal and data analysis.
"We implemented the solution to a practical computing problem on a neuromorphic chip," Schmuker says. "There are many theoretical proofs of concept for neuromorphic computing out there, but only very few examples that indeed are implemented on actual neuromorphic hardware." The three-step approach the microchip used to classify data mimics the anatomy and function of the insect olfactory system. First, the scientists converted real-world multivariate data into a series of spikes they fed into their chip. One set of data described features of the blossom leaves for three species of the iris flower; the other contained handwritten images of the digits 5 and 7, digitized to 28 by 28 pixels. Next, the researchers filtered and preprocessed this raw data using a technique known as lateral inhibition. "Lateral inhibition describes a certain connection pattern in a neuronal network," Schmuker says. In this case, groups of artificial neurons each receive different inputs and mutually inhibit their activity. "As a result, the activity of those neurons which was high in response to a particular stimulus get stronger, and the lower-activity neurons will become weaker. Lateral inhibition thus is similar to a filter that enhances contrast." This filtered data is then fed to a final level of artificial neurons. Classifying data involves assigning each piece of information one label out of many. These artificial neurons are arranged in as many groups as there are labels—if the labels are for all the fingers on both hands, for example, there are 10 labels. The neuron group that received the strongest output from the previous step completely suppresses the activity of the other neuron groups. "This is how we achieve that only one label is assigned to each stimulus," he says. Progress and problems As they implemented the system, the researchers uncovered technical challenges that are not obvious when doing simulations on conventional computers. For instance, the electronics on their chip could be quite variable in nature, causing a difficult problem "that we eventually solved using a particular way of connecting groups of neurons," Schmuker says. The researchers found the neural network on the neuromorphic microchip could achieve the same level of accuracy as the neural network when run on conventional computers. At the same time, their system was about 13 times faster than comparable biological systems. The scientists noted that neuromorphic computers would not replace conventional computers. "Rather, we are developing a new brain-inspired technique for computing that will be able to solve problems for which conventional computers are not well-suited," Schmuker says. Future research will focus "on identifying problems that are particularly well-suited for the neuromorphic approach," said he says. "The more neuromorphic hardware becomes available, the more interesting neuromorphic computing challenges will be identified." Schmuker and his colleagues detailed their findings online Jan. 27 in the journal Proceedings of the National Academy of Sciences. Lead image: Mulberry borer via Shutterstock.
"All duh people wut come from africa aw oberseas wuz call Golla and dey talk wut call Golla talk." - Georgia,1936 Until quite recently, it was commonly believed that those who spoke Gullah were speaking what many termed broken English. Few realized that this language is living evidence of a remarkable transformation that took place from Africa to African American culture. People speaking Gullah is a testimony to one of the great acts of human endurance in the history of the world, the survival of African people away from home. In the early times, slave holders and their visitors on the rice plantations often commented on the presence of the distinct language among the slave population. They had no idea that they were witnesses to a cultural phenomenon. Right before their eyes were the transformation, adaptation and persistence of a culture. During the times, our people came from different language and culture groups, and geographical regions. They were brought here to be the main labor force in the rice and cotton industries, responsible for the planting, hoeing, ditching, pounding, plowing, basket making, winnowing, picking, and threshing. It goes without saying that communication was necessary for survival and execution. The language that we developed was born on African soil as a pidgin, an auxiliary language. As in case with pidgins, it was developed for communication purposes, spoken among various African groups in business transactions and intertribal affairs. By the height of the slave trade, pidgins were firmly placed among African groups. When different Africans were captured and housed together in West Coast holding cells, the pidgins spoken in freedom, became their method of communication in captivity. As time went on, the main auxiliary language combined the most prominent pidgins, other linguistics features and speech patterns common among them with the English words and vocabulary spoken to and about them by the master class. This creolization set the stage, on African soil, for what is now still spoken and called Gullah. It was sustained because of the large numbers of Africans on rice and Sea Island cotton plantations, the isolation that characterized the regions along the coast and the continued influx of pure Africans smuggled into these isolated areas after the slave trade was prohibited. The lanuage as it exists today still contains African words and language features that can be traced to African groups today. The absence of the verb to be, final t's , and the use of only two pronouns 'e ( he, she it) and onna (you, us, them) bears witness to the fact that what ever its history, the Gullah language has its own flavor, rules and regulations. Excerpt from The Ultimate Gullah Cookbook by Jesse Edward Gantt, Jr. and Veronica D.Gerald
Two theories that are similar in many aspects, yet different in others, are Confucianism and Mill's Utilitarianism. Both of these theories make valid points and are supported by strong arguments. In this essay I will compare and contrast Confucianism with Mill's theory: I will briefly explain the two theories and then point out their similarities and their differences. Confucianism has served primarily as a social and moral philosophy. Confucius was concerned with humans in their social setting. The teachings of Confucius served to unite a developing society, binding together various aspects of civilization and culture into one coherent body that functions under common values and attitudes. Confucianism aims at making not simply the man of virtue, but the man of learning and of good manners. The perfect man must combine the qualities of saint, scholar, and gentleman. In Confucianism there are many beliefs and practices that stress different things. Confucian ethical teachings include the following values: Li, which includes ritual, propriety, and etiquette, and which provides ordering principles for conduct; Jen, benevolence and humaneness towards others, the highest Confucian virtue; and learning and intelligence, being well educated so that one knows the right and moral thing to do and follows proper conduct and ritual. There are also other beliefs in Confucian society that its followers live by: all humanity is good and always striving to be better, and one should be loyal and live upright. Confucians put an emphasis on sympathizing with others when they are suffering, always searching for a higher sense of sympathy for people. This belief system also entails the belief that the ultimate personal harmony in life lies in the relationships one has: ruler to subject, parent to child, husband to wife, older to younger, and friend to friend. Confucianism teaches the importance of harmony in the family, order in the state and peace in the empire, which it sees as inherently interdependent.
Systems of Linear Inequalities One linear inequality in two variables divides the plane into two half-planes. To graph the inequality, graph the equation of the boundary. Use a solid line if the symbol ≤ or ≥ is used, because the boundary is included in the solution. Use a dashed line if < or > is used, to indicate that the boundary is not part of the solution. Shade the appropriate region. Unless you are graphing a vertical line, the sign of the inequality will let you know which half-plane to shade. If the symbol > or ≥ is used, shade above the line. If the symbol < or ≤ is used, shade below the line. For a vertical line, larger solutions are to the right and smaller solutions are to the left. A system of two or more linear inequalities can divide the plane into more complex shapes. Example: Graph the system of linear inequalities. Graphing the three lines and shading the region enclosed, we get the figure below.
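The inequalities of the original example were not preserved above, so the sketch below uses an illustrative system of three inequalities (y ≤ -x + 4, y ≥ 1, x ≥ 0) to show how such a region can be graphed; the use of matplotlib and this particular system are assumptions, not part of the original lesson.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative system: y <= -x + 4, y >= 1, x >= 0.
x = np.linspace(-1, 6, 400)
y = np.linspace(-1, 6, 400)
X, Y = np.meshgrid(x, y)

# A point is in the solution region only if all three inequalities hold.
region = (Y <= -X + 4) & (Y >= 1) & (X >= 0)
plt.contourf(X, Y, region.astype(int), levels=[0.5, 1.5], colors=["#9ecae1"])

# All three boundaries are drawn solid because each symbol includes equality.
plt.plot(x, -x + 4)   # boundary of y <= -x + 4
plt.axhline(1)        # boundary of y >= 1
plt.axvline(0)        # boundary of x >= 0
plt.xlim(-1, 6)
plt.ylim(-1, 6)
plt.show()
```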
Seven reasons we still need to fight for women’s human rights Human rights are the basic minimum protections which every human being should be able to enjoy. But historically not all people have been able to enjoy and exercise their rights in the same way. The result is unequal treatment. One such group is women and girls. Throughout history, women have been afforded fewer rights than their male counterparts or have had to work harder to realise their rights in practice. Viewing women’s rights as human rights has been fundamental in the struggle to ensure that women are treated fairly. As part of our series for International Women’s Day, we are taking a look at how women have fought to be put on an equal footing. 1. Women weren’t even people, legally speaking A British court once actually had to declare that women counted as ‘persons’ in order for them to receive the same treatment as men. In 1929, a woman named Emily Murphy applied for a position in the Canadian Senate (a house of the Canadian Parliament). She was refused because women were not at the time considered ‘persons’ under section 24 of the British North America Act 1867. This understanding was based on a British ruling from 1876 which stated that women were 'eligible for pains and penalties, but not rights and privileges'. Emily Murphy took her case to the Privy Council, the court of last resort in the British Empire. The judges declared that women were ‘persons’ who could sit in the Canadian Senate. One of the judges, Lord Sankey, said: 'to those who ask why the word “person” should include females, the obvious answer is why should it not?' 2. Married women were the same legal person… as their husband In 1765, a famous legal commentator, Sir William Blackstone, wrote that after marriage, the 'very being or legal existence of the woman is… incorporated and consolidated into that of her husband'. In other words, a married woman did not, legally speaking, exist separately from her husband. When a woman married, all of her property was automatically placed under the control of her husband. In 1870, an Act of Parliament allowed married women to keep money they earned and to inherit certain property. In 1882, this was extended to allow wives a right to own, buy and sell property in their own right. In 1893 married women were granted control of any property they acquired during marriage. Our enjoyment of property is recognised as a human right, subject to certain limitations, under Article 1, Protocol 1 of the European Convention on Human Rights. 3. Women had to fight really hard for the right to vote Before 1918, women were not allowed to vote in parliamentary elections. This meant that they had no say in choosing the people who made law, and those law-makers had no political incentive to care about women, since they did not need to win their votes. In the early 20th century, activist groups campaigned for women’s right to vote (‘suffrage’). One such group was the suffragettes. The term ‘suffragette’ was first used by the Daily Mail in 1906. It was intended as a derogatory name for an activist group run by Emmeline Pankhurst and her daughters. In 1913, suffragette Emily Wilding Davison was fatally injured after she ran up to the King’s horse, racing at the Epsom Derby. In 1918, the Representation of the People Act first gave women over age 30 the right to vote if they or their husband met a property qualification. The Parliament (Qualification of Women) Act also allowed women to stand for election as Members of Parliament. 
It took until 1928 (the Equal Franchise Act) for all women in Britain to gain equal voting rights with men. The right to vote and stand for election are recognised as human rights under Article 3, Protocol 1 of the European Convention on Human Rights. 4. Women still don’t have access to education In 1878, the University of London became the first university in the UK to open its degrees to women. In 1880, four women became the first to obtain degrees when they were awarded Bachelors of Arts by the University. Nowadays, millions of women and girls around the world are still systematically excluded from even basic education. The right to access educational institutions without discrimination is a human right under Article 2, Protocol 1 of the European Convention on Human Rights. 5. Women had to fight to access their children and plan their families Before 1839, mothers had no rights at all in relation to their children if their marriage broke down. In 1836, Caroline Norton left her husband, George, who had been abusive towards her. After the separation, George refused Caroline access to their sons. After much campaigning, an Act of Parliament was passed in 1839 giving mothers the right to ask for custody of their children. In the late 20th century, women gained greater control over whether or not to have children. Initially, it was a criminal offence in the UK to perform an abortion or to try to self-abort. This led to a high number of unsafe back-street abortions – a major cause of pregnancy-related deaths. To address this, Parliament passed the Abortion Act 1967 to permit abortions under medical supervision and subject to certain criteria. In 1974, contraception also became freely available to all women irrespective of marital status through the NHS. The right to respect for family life and bodily integrity are protected under Article 8 of the European Convention on Human Rights. The European Court recently ruled that women in Ireland do not have sufficient access to abortion facilities. 6. Women need legal protection from violence, including in the home In 1878, the law first said that a woman could obtain an order allowing her to separate from her husband if her husband subjected her to violence. In 1976, an Act of Parliament allowed women in danger of domestic violence to obtain the court’s protection from their violent partner. It used to be thought that a husband could not rape his wife. But in 1991, rape in marriage was, for the first time, declared a crime. In a court case called R v R, a husband was convicted of attempted rape. He appealed, arguing that, when a woman gets married, she impliedly consents to having sex. The court rejected this, saying implied consent was a 'fiction' which 'has no useful purpose today in the law'. The case has been used by the European Court of Human Rights to justify gradual, progressive changes in the common law. Nowadays, female genital mutilation (‘FGM’) is a major issue worldwide, including in the UK. FGM is the dangerous practice of causing injury to the female genital organs for non-medical reasons. It is typically done for cultural reasons and is prevalent in Africa, the Middle East and Asia. It is estimated that FGM affects 137,000 women in the UK. The practice is illegal in the UK under the Female Genital Mutilation Act 2003. It is also now illegal to arrange for a child to be taken abroad for FGM. 7.
Women (still) struggle to achieve equality in the workplace In 1968, women at the Ford car factory in Dagenham took part in a strike for equal pay, almost stopping production at all Ford UK plants. Their protest led to the passing of the Equal Pay Act 1970, though they had to wait until 1983 (the Equal Pay (Amendment) Regulations) for a legal entitlement to equal pay for work of equal value. The Equality Act 2010 consolidated the law protecting women from discrimination on the grounds of sex or maternity in the workplace. In 2016, the government published a consultation on proposals for a law to require certain companies in England, Scotland and Wales to publish gender pay gap statistics. Protection from discrimination is a human right under Article 14 of the European Convention on Human Rights. So, how far have we still got to go for equality? In 1977, the United Nations General Assembly declared International Women’s Day an annual event. In 2015, the World Economic Forum predicted that global gender parity (that is, equality) would, at the current rate of progress, not be achieved until 2133: 117 years from now. Many women’s rights have been hard won over a centuries-long struggle for equality. Do we have to wait another century before we can finally say women are equal? Our blogs are written by Amnesty International staff, volunteers and other interested individuals, to encourage debate around human rights issues. They do not necessarily represent the views of Amnesty International.
Extending and Differentiating the California Common Core Standards
The Common Core Standards use educational concepts that have been the focus of gifted education for many years. These concepts stress rigor, depth, complexity, relevance, and deeper understanding. However, as stated by the Common Core State Standards Initiative, "The Standards set grade-specific standards but do not define the intervention methods or materials necessary to support students who are well below or well above grade-level expectations... The Standards do not define the nature of advanced work for students who meet the Standards prior to the end of high school. For those students, advanced work... should be available." Gifted education has developed effective strategies both to meet the needs of a wide variety of learners (differentiation) and to create the mindset and metacognition to meet challenge. The pedagogy needed to support and extend the complex curriculum of the CCSS is already a core part of GATE curriculum and instruction.
All GATE teachers should:
- Understand the issues in definitions, theories, and identification of advanced learners, including those from diverse backgrounds.
- Recognize the learning differences, developmental milestones, and cognitive/affective characteristics of gifted and talented students, and identify their related academic and social-emotional needs.
- Understand, plan, and implement a range of evidence-based and differentiated strategies.
Resources:
National Association for Gifted Children: Common Core State Standards and Gifted Education
California Department of Education: Gifted and Talented Education program information, laws & regulations, resources
California Association for the Gifted: Common Core and the GATE Standards
Your lesson will take place in Dancing Wings Butterfly Garden, a glass-enclosed garden filled with live foliage, a cascading waterfall, and colorful, free-flying North American and tropical butterflies. This lesson reminds students about the process of change in the life of a butterfly and allows them an opportunity to see the world through new eyes. Lesson extensions for before or after your visit The following activities are designed for your class to enjoy before or after your museum visit. Familiarizing students with the lesson concepts can enrich your museum experience. Read the story The Very Hungry Caterpillar by Eric Carle. Have students act out the movements of the caterpillar as you read the words. Students may enjoy making a sequenced painting of the three stages that the caterpillar went through during the story. The process of change - Have students witness the process of change by creating one of the following situations to observe: - Ice melting - Water evaporating on a chalkboard - Bean sprouting - Flower wilting - Have students bring in baby pictures of themselves; the teacher should bring in one too! Take turns talking about how we have changed and how we are still the same. Have fun with students while they pretend to be butterflies at school. Throughout the day help them to imagine how a butterfly might do the things they do. How would a butterfly sit in its chair? What would a butterfly do on the playground? What would a butterfly eat for lunch and how would he or she eat it? Students could write a group poem entitled “If I were a butterfly...” Have each student finish the sentence and illustrate it to hang up in the room.
Primary Sources and Secondary Sources What is a Primary Source? A primary source is a document that was created at the time of the event or subject you’ve chosen to study or by people who were observers of or participants in that event or topic. If, for example, your topic is the experience of workers in the Chicago packinghouses during the first decades of the twentieth century, your primary sources might be:
- Chicago newspapers, c. 1900-1920, in a variety of languages.
- A short film, such as an actualité, made during the period that shows the yards.
- Settlement house records and manuscripts.
- Novels about the packing yards, such as Upton Sinclair’s The Jungle (1906).
- U.S. census records concerning neighborhood residents for 1900 and 1910.
- A mechanical conveyor system, used to move carcasses from one room to another at the time and place you are researching.
- Autobiographies of meat packing executives, workers, etc., published even many years later.
- Maps that show the location of the packing house plants, made during the period you are studying.
- Music, such as work songs or blues ballads, made or adapted during the time you are researching.
- Oral histories of packing house employees’ experiences, though a historian’s comments on those oral histories would be a secondary source.
The medium of the primary source can be anything, including written texts, objects, buildings, films, paintings, cartoons, etc. What makes the source a “primary” source is when it was made, not what it is. Primary sources would not, however, include books written by historians about this topic, because books written by historians are called “secondary” sources. The same goes for historians’ introductions to and editorial comments on collections of primary documents; these materials, too, are secondary sources because they’re twice removed from the actual event or process you’re going to be writing about. So while a historian’s introduction to Upton Sinclair’s novel The Jungle (1906) is a secondary source, the novel itself, written in 1906, is a primary source. What are Secondary Sources? Once you have a topic in mind, you need to find out what other scholars have written about your topic. If they’ve used the same sources you were thinking of using and reached the same conclusions, there’s no point in repeating their work, so you should look for another topic. Most of the time, though, you’ll find that other scholars have used different sources and/or asked different questions, and that reading their work will help you place your own paper in perspective. You want to move past just looking for books in the library. Now that you’re doing your own history research and writing, you should step up to the specialized bibliographies historians use for their own work. Don’t stop looking for secondary sources until you begin to turn up the same titles over and over again. Put those titles you see most frequently and those that are most recently published at the very top of your list of things to read, since they are likely to be the most significant and/or complete interpretations. After you’ve located and analyzed some primary sources and read the existing secondary literature on your topic, you’re ready to begin researching and writing your paper. Remember: when lost, confused, etc., ask a reference librarian! They are there to help.
[adapted in part from Peggy Pascoe’s site at the University of Oregon] Questions to Consider When Reading Primary Historical Documents - When and by whom was this particular document written? What is the format of the document? Has the document been edited? Was the document published? If so, when and where and how? How do the layout, typographical details, and accompanying illustrations inform you about the purpose of the document, the author’s historical and cultural position, and that of the intended audience? - Who is the author, and why did he or she create the document? Why does the author choose to narrate the text in the manner chosen? Remember that the author of the text (i.e., the person who creates it) and the narrator of the text (i.e. the person who tells it) are not necessarily one and the same. - Using clues from the document itself, its form, and its content, who is the intended audience for the text? Is the audience regional? National? A particular subset of “the American people”? How do you think the text was received by this audience? How might the text be received by those for whom it was NOT intended? - How does the text reflect or mask such factors as the class, race, gender, ethnicity, or regional background of its creator/narrator? (Remember that “race” is a factor when dealing with cultural forms of people identified as “white,” that “men” possess “gender,” and that the North and Midwest are regions of local as well as national significance.) - How does the author describe, grapple with, or ignore contemporaneous historical events? Why? Which cultural myths or ideologies does the author endorse or attack? Are there any oversights or “blind spots” that strike you as particularly salient? What cultural value systems does the writer/narrator embrace? - From a literary perspective, does the writer employ any generic conventions? Use such devices as metaphor, simile, or other rhetorical devices? - With what aspects of the text (content, form, style) can you most readily identify? Which seem most foreign to you? Why? Does the document remind you of contemporaneous or present-day cultural forms that you have encountered? How and why? Asking a Good Historical Question; Or, How to Develop a Manageable Topic When writing a historical research paper, your goal is to choose a topic and write a paper that - Asks a good historical question - Tells how its interpretation connects to previous work by other historians, and - Offers a well-organized and persuasive thesis of its own. Let’s take this one step at a time. - Asking a good historical question: A good historical question is broad enough to interest you and, hopefully, your classmates. Pick a topic that students in the class and average people walking down the street could find interesting or useful. If you think interracial relationships are an interesting topic and you find the 1940s to be an equally fascinating time period, come up with a question that incorporates both these interests. For example: “How did white and African-American defense plant workers create and think about interracial relationships during World War Two?” This question investigates broad issues—interracial romance, sexual identity—but within a specific context—World War Two and the defense industry. WARNING: Avoid selecting a topic that is too broad: “How has war affected sex in America?” is too broad. It would take several books to answer this question. 
A good question is narrow enough so that you can find a persuasive answer to it in time to meet the due date for this class paper. After selecting a broad topic of interest, narrow it down so that it will not take hundreds of pages to communicate what happened and why it was important. The best way to write a narrow question is to put some limitations on the question’s range. Choosing a particular geographic place (a specific location), subject group (who? what groups?), and periodization (from when to when?) are the most common ways to limit a historical question. The example above already contains a limited subject group (whites and African-Americans) and a short time period (WWII, 1941-1945); simply adding a place, such as “in the Bay Area” or “in Puget Sound”, further narrows the topic: “How did white and African-American defense plant workers in the San Francisco Bay area create and think about interracial relationships during World War Two?” is a much more manageable question than one that addresses all defense workers. WARNING: Avoid a question that only looks at one specific event or process. For example, “What happened on Thursday, Dec. 12, 1943 at the Boeing bomber plant in Albany, California?” is too narrow. Perhaps there may have been several important events that day, including a fight over an interracial relationship. However, this question does not position you to explore the larger processes that were taking place in the plant over time, nor why they are important for understanding sex, race and gender in American history. A good historical question demands an answer that is not just yes or no. Why and how questions are often good choices, and so are questions that ask you to compare and contrast a topic in different locations or time periods; so are questions that ask you to explain the relationship between one event or historical process and another. Examples (why and how, compare/contrast, explanatory):
- “Why and how did Latina women in Texas challenge their traditional sexual identities in the 1960s?” or “Why and how did captivity narratives define interracial romance in colonial America?”
- “Gay liberation over time and space: The Stonewall Uprising and Harvey Milk assassination protests compared;” or “Sex and gender after the war is over: The contrast between the post-World War One and World War Two eras.”
- “Go West, Young Woman: the rise of the popular newspaper, western boosterism, and the origins of women in professional journalism;” or “Sit-coms, kitchens, and Mom: TV and the redefinition of femininity and domesticity, 1950-1975.”
A good historical question must be phrased in such a way that the question doesn’t predetermine the answer. Let’s say you’ve decided to study the Tillamook Ku Klux Klan. You’re fascinated by the development of the Klan, and repelled by its ideas, so the first question you think about asking is “Why was the Klan so racist?” This is not a good historical question, because it assumes what you ought to prove in your paper: that the Klan was racist. A better question to ask would be “What was the Klan’s attitude and behavior toward African Americans and immigrants, and why?”
- Connecting your interpretation to previous work by other historians: As noted above, once you have a topic in mind, find out what other scholars have written about it and whether their work covers the sources and questions you had in mind.
Reading other scholars’ work will help you place your own paper in perspective. When you are writing your paper, you will cite these historians—both their arguments about the material, and also (sometimes) their research findings. Example: “As Tera Hunter has argued concerning Atlanta’s laundresses, black women workers preferred work outside the homes of their white employers” (and then you would cite Hunter in a footnote, including page numbers).
- Offering a well-organized and persuasive thesis. Think of your thesis as answering a question. Have your thesis answer a “how” or “why” question, rather than a “what” question. A “what” question will usually land you in the world of endless description, and while some description is often necessary, what you really should focus on is your thinking, your analysis, your insights. Consider the following questions when reviewing your thesis paragraphs:
- Does the thesis answer a research question?
- What sort of question is the thesis answering? The thesis paragraph usually has three parts: (1) the subject of your paper, (2) your argument about the topic, and (3) the evidence you’ll be using to argue your thesis.
- Is the thesis overly descriptive? Does it simply describe something in the past? OR,
- Does the thesis present an argument about the material? (This is your goal.)
- Is the thesis clearly and succinctly stated?
- Does the thesis paragraph suggest how the author plans to make his or her argument?
Examples of Thesis Statements: From Bad to Better “Dorothy Richardson’s The Long Day is a provocative portrayal of working-class women’s lives in the early part of the twentieth century.” This is a weak thesis for a paper, since it is overly vague and general, and is basically descriptive in nature. The thesis does not suggest why or how Richardson’s book is “provocative.” “The narrator of Dorothy Richardson’s 1905 work, The Long Day, exemplifies many ideas and perspectives of the early twentieth century’s new feminism.” This is a bit better, since the author is actually suggesting that there might be an argument about early twentieth-century feminism. But note how the language is still vague. What ideas and perspectives? To what effect does Richardson’s work deal with these ideas? “While The Long Day’s narrator exemplifies many tenets of the new feminism, such as a commitment to women’s economic independence, her feminist sympathies are undermined by her traditional attitudes towards female sexual expression.” OK. Now we are getting somewhere! This is a solid thesis. Note that the language is specific (commitment to women’s economic independence, for example). Also, the author has detected a contradiction in the text, a tension that the paper can fruitfully analyze. It could be strengthened further by suggesting HOW Richardson’s sympathies are undermined by her traditional attitudes. How to Document Your Sources In history courses, you should use the traditional endnote or footnote system with superscript numbers when citing sources. Do not use parenthetical author-page numbers as a general rule. Exceptions include: short discussion assignments; five-page analytical papers where you have been assigned the specific texts that you are analyzing. The preferred guide for citations in history is The Chicago Manual of Style.
The University of Wisconsin’s writing center page offers a helpful introduction to the traditional method of citing sources laid out in The Chicago Manual. Also visit U of T’s advice file on documenting sources for a concise overview of the traditional method.
When the molecules of any substance get more energy in them than they had before, they move faster, and we call that “heat”. When the molecules move faster, they hit against each other and bounce apart, and so they end up further apart from each other than they were before. The energy can come from sunshine, or from volcanoes, or from friction, or nuclear fusion, or many other sources. Things are hot if their molecules are moving quickly, and cold if their molecules are moving more slowly. Temperature is a way of measuring how fast the molecules are moving. Hot and Cold, Fast and Slow If two things are touching each other, heat will flow from the hotter one to the colder one, unless they are the same temperature. The molecules of the hotter one will slow down, and the molecules of the colder one will speed up, until they are all moving at the same speed, and the two things have the same temperature. For instance, if you put an ice cube in your mouth, your mouth will get colder, and the ice cube will melt. When molecules are cold, they are not moving very much. Then they are usually very close together, like a crowd of people standing still. But as they heat up and move faster, the molecules begin to bump into each other and bounce off, and then they bump into other molecules. Soon they are farther apart from each other. Because of this, hot things have more space between the molecules than cold things, and the same number of molecules take up more space when they are hot. If you boil water in a pot, it will turn into steam. What’s the temperature in Space? Because hot and cold are ways of talking about how fast molecules are moving, and there are so few molecules in space, space really doesn’t have any temperature at all. We think of space as being cold, though, because if you put something made of molecules in space, like yourself for instance, you would get cold as your heat moved away from you, evening out the energy difference between yourself and the space around you. - Heat travels from a warmer material to a colder one, never the other way around! - Some materials let heat pass through them easily – they are called conductors. Other materials don’t let heat pass through them – they are called insulators. - Heat is NOT the same as temperature, which is measured with a thermometer. Heat is usually measured with a calorimeter.
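For readers who want the grown-up version of “temperature measures how fast the molecules are moving”: kinetic theory makes this precise by tying the average kinetic energy of a molecule to the absolute temperature. This equation is a standard supplement, not part of the original explanation:

$$\langle E_k \rangle = \frac{3}{2} k_B T$$

Here $T$ is the temperature in kelvin and $k_B \approx 1.38 \times 10^{-23}\ \mathrm{J/K}$ is Boltzmann’s constant, so doubling the absolute temperature doubles the average kinetic energy of the molecules.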
The Elements of Style: III. Elementary Principles of Composition 9. Make the paragraph the unit of composition: one paragraph to each topic. If the subject on which you are writing is of slight extent, or if you intend to treat it very briefly, there may be no need of subdividing it into topics. Thus a brief description, a brief summary of a literary work, a brief account of a single incident, a narrative merely outlining an action, the setting forth of a single idea, any one of these is best written in a single paragraph. After the paragraph has been written, it should be examined to see whether subdivision will not improve it. Ordinarily, however, a subject requires subdivision into topics, each of which should be made the subject of a paragraph. The object of treating each topic in a paragraph by itself is, of course, to aid the reader. The beginning of each paragraph is a signal to him that a new step in the development of the subject has been reached. The extent of subdivision will vary with the length of the composition. For example, a short notice of a book or poem might consist of a single paragraph. One slightly longer might consist of two paragraphs: - Account of the work. - Critical discussion. A report on a poem, written for a class in literature, might consist of seven paragraphs: - Facts of composition and publication. - Kind of poem; metrical form. - Subject. - Treatment of subject. - For what chiefly remarkable. - Wherein characteristic of the writer. - Relationship to other works. The contents of paragraphs C and D would vary with the poem. Usually, paragraph C would indicate the actual or imagined circumstances of the poem (the situation), if these call for explanation, and would then state the subject and outline its development. If the poem is a narrative in the third person throughout, paragraph C need contain no more than a concise summary of the action. Paragraph D would indicate the leading ideas and show how they are made prominent, or would indicate what points in the narrative are chiefly emphasized. A novel might be discussed under the heads: - Setting. - Plot. - Characters. - Purpose. A historical event might be discussed under the heads: - What led up to the event. - Account of the event. - What the event led up to. In treating either of these last two subjects, the writer would probably find it necessary to subdivide one or more of the topics here given. As a rule, single sentences should not be written or printed as paragraphs. An exception may be made of sentences of transition, indicating the relation between the parts of an exposition or argument. In dialogue, each speech, even if only a single word, is a paragraph by itself; that is, a new paragraph begins with each change of speaker. The application of this rule, when dialogue and narrative are combined, is best learned from examples in well-printed works of fiction. 10. As a rule, begin each paragraph with a topic sentence; end it in conformity with the beginning. Again, the object is to aid the reader. The practice here recommended enables him to discover the purpose of each paragraph as he begins to read it, and to retain the purpose in mind as he ends it.
For this reason, the most generally useful kind of paragraph, particularly in exposition and argument, is that in which - the topic sentence comes at or near the beginning; - the succeeding sentences explain or establish or develop the statement made in the topic sentence; and - the final sentence either emphasizes the thought of the topic sentence or states some important consequence. Ending with a digression, or with an unimportant detail, is particularly to be avoided. If the paragraph forms part of a larger composition, its relation to what precedes, or its function as a part of the whole, may need to be expressed. This can sometimes be done by a mere word or phrase (again; therefore; for the same reason) in the topic sentence. Sometimes, however, it is expedient to precede the topic sentence by one or more sentences of introduction or transition. If more than one such sentence is required, it is generally better to set apart the transitional sentences as a separate paragraph. According to the writer's purpose, he may, as indicated above, relate the body of the paragraph to the topic sentence in one or more of several different ways. He may make the meaning of the topic sentence clearer by restating it in other forms, by defining its terms, by denying the converse, by giving illustrations or specific instances; he may establish it by proofs; or he may develop it by showing its implications and consequences. In a long paragraph, he may carry out several of these processes. |1 Now, to be properly enjoyed, a walking tour should be gone upon alone.||1 Topic sentence.| |2 If you go in a company, or even in pairs, it is no longer a walking tour in anything but name; it is something else and more in the nature of a picnic.||2 The meaning made clearer by denial of the contrary.| |3 A walking tour should be gone upon alone, because freedom is of the essence; because you should be able to stop and go on, and follow this way or that, as the freak takes you; and because you must have your own pace, and neither trot alongside a champion walker, nor mince in time with a girl.||3 The topic sentence repeated, in abridged form, and supported by three reasons; the meaning of the third ("you must have your own pace") made clearer by denying the converse.| |4 And you must be open to all impressions and let your thoughts take colour from what you see.||4 A fourth reason, stated in two forms.| |5 You should be as a pipe for any wind to play upon.||5 The same reason, stated in still another form.| |6 "I cannot see the wit," says Hazlitt, "of walking and talking at the same time.||6-7 The same reason as stated by Hazlitt.| |7 When I am in the country, I wish to vegetate like the country," which is the gist of all that can be said upon the matter.| |8 There should be no cackle of voices at your elbow, to jar on the meditative silence of the morning.||8 Repetition, in paraphrase, of the quotation from Hazlitt.| |9 And so long as a man is reasoning he cannot surrender himself to that fine intoxication that comes of much motion in the open air, that begins in a sort of dazzle and sluggishness of the brain, and ends in a peace that passes comprehension. 
—Stevenson, Walking Tours.||9 Final statement of the fourth reason, in language amplified and heightened to form a strong conclusion.| |1 It was chiefly in the eighteenth century that a very different conception of history grew up.||1 Topic sentence.| |2 Historians then came to believe that their task was not so much to paint a picture as to solve a problem; to explain or illustrate the successive phases of national growth, prosperity, and adversity.||2 The meaning of the topic sentence made clearer; the new conception of history defined.| |3 The history of morals, of industry, of intellect, and of art; the changes that take place in manners or beliefs; the dominant ideas that prevailed in successive periods; the rise, fall, and modification of political constitutions; in a word, all the conditions of national well-being became the subjects of their works.||3 The definition expanded.| |4 They sought rather to write a history of peoples than a history of kings.||4 The definition explained by contrast.| |5 They looked especially in history for the chain of causes and effects.||5 The definition supplemented: another element in the new conception of history.| |6 They undertook to study in the past the physiology of nations, and hoped by applying the experimental method on a large scale to deduce some lessons of real value about the conditions on which the welfare of society mainly depend. —Lecky, The Political Value of History.||6 Conclusion: an important consequence of the new conception of history.| In narration and description the paragraph sometimes begins with a concise, comprehensive statement serving to hold together the details that follow. |The breeze served us admirably.| |The campaign opened with a series of reverses.| |The next ten or twelve pages were filled with a curious set of entries.| But this device, if too often used, would become a mannerism. More commonly the opening sentence simply indicates by its subject with what the paragraph is to be principally concerned. |At length I thought I might return towards the stockade.| |He picked up the heavy lamp from the table and began to explore.| |Another flight of steps, and they emerged on the roof.| The brief paragraphs of animated narrative, however, are often without even this semblance of a topic sentence. The break between them serves the purpose of a rhetorical pause, throwing into prominence some detail of the action. 11. Use the active voice. The active voice is usually more direct and vigorous than the passive: |I shall always remember my first visit to Boston.| This is much better than |My first visit to Boston will always be remembered by me.| The latter sentence is less direct, less bold, and less concise. If the writer tries to make it more concise by omitting "by me," |My first visit to Boston will always be remembered,| it becomes indefinite: is it the writer, or some person undisclosed, or the world at large, that will always remember this visit? This rule does not, of course, mean that the writer should entirely discard the passive voice, which is frequently convenient and sometimes necessary. |The dramatists of the Restoration are little esteemed to-day.| |Modern readers have little esteem for the dramatists of the Restoration.| The first would be the right form in a paragraph on the dramatists of the Restoration; the second, in a paragraph on the tastes of modern readers. The need of making a particular word the subject of the sentence will often, as in these examples, determine which voice is to be used. 
The habitual use of the active voice, however, makes for forcible writing. This is true not only in narrative principally concerned with action, but in writing of any kind. Many a tame sentence of description or exposition can be made lively and emphatic by substituting a transitive in the active voice for some such perfunctory expression as there is, or could be heard. |There were a great number of dead leaves lying on the ground.||Dead leaves covered the ground.| |The sound of the falls could still be heard.||The sound of the falls still reached our ears.| |The reason that he left college was that his health became impaired.||Failing health compelled him to leave college.| |It was not long before he was very sorry that he had said what he had.||He soon repented his words.| As a rule, avoid making one passive depend directly upon another. |Gold was not allowed to be exported.||It was forbidden to export gold (The export of gold was prohibited).| |He has been proved to have been seen entering the building.||It has been proved that he was seen to enter the building.| In both the examples above, before correction, the word properly related to the second passive is made the subject of the first. A common fault is to use as the subject of a passive construction a noun which expresses the entire action, leaving to the verb no function beyond that of completing the sentence. |A survey of this region was made in 1900.||This region was surveyed in 1900.| |Mobilization of the army was rapidly carried out.||The army was rapidly mobilized.| |Confirmation of these reports cannot be obtained.||These reports cannot be confirmed.| Compare the sentence, "The export of gold was prohibited," in which the predicate "was prohibited" expresses something not implied in "export." 12. Put statements in positive form. Make definite assertions. Avoid tame, colorless, hesitating, non-committal language. Use the word not as a means of denial or in antithesis, never as a means of evasion. |He was not very often on time.||He usually came late.| |He did not think that studying Latin was much use.||He thought the study of Latin useless.| |The Taming of the Shrew is rather weak in spots. Shakespeare does not portray Katharine as a very admirable character, nor does Bianca remain long in memory as an important character in Shakespeare's works.||The women in The Taming of the Shrew are unattractive. Katharine is disagreeable, Bianca insignificant.| The last example, before correction, is indefinite as well as negative. The corrected version, consequently, is simply a guess at the writer's intention. All three examples show the weakness inherent in the word not. Consciously or unconsciously, the reader is dissatisfied with being told only what is not; he wishes to be told what is. Hence, as a rule, it is better to express a negative in positive form. |did not remember||forgot| |did not pay any attention to||ignored| |did not have much confidence in||distrusted| The antithesis of negative and positive is strong: |Not charity, but simple justice.| |Not that I loved Caesar less, but Rome the more.| Negative words other than not are usually strong: |The sun never sets upon the British flag.| 13. Omit needless words. Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. 
This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell. Many expressions in common use violate this principle: |the question as to whether||whether (the question whether)| |there is no doubt but that||no doubt (doubtless)| |used for fuel purposes||used for fuel| |he is a man who||he| |in a hasty manner||hastily| |this is a subject which||this subject| |His story is a strange one.||His story is strange.| In especial the expression the fact that should be revised out of every sentence in which it occurs. |owing to the fact that||since (because)| |in spite of the fact that||though (although)| |call your attention to the fact that||remind you (notify you)| |I was unaware of the fact that||I was unaware that (did not know)| |the fact that he had not succeeded||his failure| |the fact that I had arrived||my arrival| See also under case, character, nature, system in Chapter V. Who is, which was, and the like are often superfluous. |His brother, who is a member of the same firm||His brother, a member of the same firm| |Trafalgar, which was Nelson's last battle||Trafalgar, Nelson's last battle| As positive statement is more concise than negative, and the active voice more concise than the passive, many of the examples given under Rules 11 and 12 illustrate this rule as well. A common violation of conciseness is the presentation of a single complex idea, step by step, in a series of sentences which might to advantage be combined into one. |Macbeth was very ambitious. This led him to wish to become king of Scotland. The witches told him that this wish of his would come true. The king of Scotland at this time was Duncan. Encouraged by his wife, Macbeth murdered Duncan. He was thus enabled to succeed Duncan as king. (55 words.)| |Encouraged by his wife, Macbeth achieved his ambition and realized the prediction of the witches by murdering Duncan and becoming king of Scotland in his place. (26 words.)| 14. Avoid a succession of loose sentences. This rule refers especially to loose sentences of a particular type, those consisting of two co-ordinate clauses, the second introduced by a conjunction or relative. Although single sentences of this type may be unexceptionable (see under Rule 4), a series soon becomes monotonous and tedious. An unskilful writer will sometimes construct a whole paragraph of sentences of this kind, using as connectives and, but, and less frequently, who, which, when, where, and while, these last in non-restrictive senses (see under Rule 3). |The third concert of the subscription series was given last evening, and a large audience was in attendance. Mr. Edward Appleton was the soloist, and the Boston Symphony Orchestra furnished the instrumental music. The former showed himself to be an artist of the first rank, while the latter proved itself fully deserving of its high reputation. The interest aroused by the series has been very gratifying to the Committee, and it is planned to give a similar series annually hereafter. The fourth concert will be given on Tuesday, May 10, when an equally attractive programme will be presented.| Apart from its triteness and emptiness, the paragraph above is bad because of the structure of its sentences, with their mechanical symmetry and sing-song. Contrast with them the sentences in the paragraphs quoted under Rule 10, or in any piece of good English prose, as the preface (Before the Curtain) to Vanity Fair. 
If the writer finds that he has written a series of sentences of the type described, he should recast enough of them to remove the monotony, replacing them by simple sentences, by sentences of two clauses joined by a semicolon, by periodic sentences of two clauses, by sentences, loose or periodic, of three clauses—whichever best represent the real relations of the thought. 15. Express co-ordinate ideas in similar form. This principle, that of parallel construction, requires that expressions of similar content and function should be outwardly similar. The likeness of form enables the reader to recognize more readily the likeness of content and function. Familiar instances from the Bible are the Ten Commandments, the Beatitudes, and the petitions of the Lord's Prayer. The unskilful writer often violates this principle, from a mistaken belief that he should constantly vary the form of his expressions. It is true that in repeating a statement in order to emphasize it he may have need to vary its form. For illustration, see the paragraph from Stevenson quoted under Rule 10. But apart from this, he should follow the principle of parallel construction. |Formerly, science was taught by the textbook method, while now the laboratory method is employed.||Formerly, science was taught by the textbook method; now it is taught by the laboratory method.| The left-hand version gives the impression that the writer is undecided or timid; he seems unable or afraid to choose one form of expression and hold to it. The right-hand version shows that the writer has at least made his choice and abided by it. By this principle, an article or a preposition applying to all the members of a series must either be used only before the first term or else be repeated before each term. |The French, the Italians, Spanish, and Portuguese||The French, the Italians, the Spanish, and the Portuguese| |In spring, summer, or in winter||In spring, summer, or winter (In spring, in summer, or in winter)| Correlative expressions (both, and; not, but; not only, but also; either, or; first, second, third; and the like) should be followed by the same grammatical construction. Many violations of this rule can be corrected by rearranging the sentence. |It was both a long ceremony and very tedious.||The ceremony was both long and tedious.| |A time not for words, but action||A time not for words, but for action| |Either you must grant his request or incur his ill will.||You must either grant his request or incur his ill will.| |My objections are, first, the injustice of the measure; second, that it is unconstitutional.||My objections are, first, that the measure is unjust; second, that it is unconstitutional.| See also the third example under Rule 12 and the last under Rule 13. It may be asked, what if a writer needs to express a very large number of similar ideas, say twenty? Must he write twenty consecutive sentences of the same pattern? On closer examination he will probably find that the difficulty is imaginary, that his twenty ideas can be classified in groups, and that he need apply the principle only within each group. Otherwise he had best avoid the difficulty by putting his statements in the form of a table. 16. Keep related words together. The position of the words in a sentence is the principal means of showing their relationship. The writer must therefore, so far as possible, bring together the words, and groups of words, that are related in thought, and keep apart those which are not so related.
The subject of a sentence and the principal verb should not, as a rule, be separated by a phrase or clause that can be transferred to the beginning. |Wordsworth, in the fifth book of The Excursion, gives a minute description of this church.||In the fifth book of The Excursion, Wordsworth gives a minute description of this church.| |Cast iron, when treated in a Bessemer converter, is changed into steel.||By treatment in a Bessemer converter, cast iron is changed into steel.| The objection is that the interposed phrase or clause needlessly interrupts the natural order of the main clause. This objection, however, does not usually hold when the order is interrupted only by a relative clause or by an expression in apposition. Nor does it hold in periodic sentences in which the interruption is a deliberately used means of creating suspense (see examples under Rule 18). The relative pronoun should come, as a rule, immediately after its antecedent. |There was a look in his eye that boded mischief.||In his eye was a look that boded mischief.| |He wrote three articles about his adventures in Spain, which were published in Harper's Magazine.||He published in Harper's Magazine three articles about his adventures in Spain.| |This is a portrait of Benjamin Harrison, grandson of William Henry Harrison, who became President in 1889.||This is a portrait of Benjamin Harrison, grandson of William Henry Harrison. He became President in 1889.| If the antecedent consists of a group of words, the relative comes at the end of the group, unless this would cause ambiguity. |The Superintendent of the Chicago Division, who| |A proposal to amend the Sherman Act, which has been variously judged||A proposal, which has been variously judged, to amend the Sherman Act| |A proposal to amend the much-debated Sherman Act| |The grandson of William Henry Harrison, who||William Henry Harrison's grandson, Benjamin Harrison, who| A noun in apposition may come between antecedent and relative, because in such a combination no real ambiguity can arise. |The Duke of York, his brother, who was regarded with hostility by the Whigs| Modifiers should come, if possible, next to the word they modify. If several expressions modify the same word, they should be so arranged that no wrong relation is suggested. |All the members were not present.||Not all the members were present.| |He only found two mistakes.||He found only two mistakes.| |Major R. E. Joyce will give a lecture on Tuesday evening in Bailey Hall, to which the public is invited, on "My Experiences in Mesopotamia" at eight P. M.||On Tuesday evening at eight P. M., Major R. E. Joyce will give in Bailey Hall a lecture on "My Experiences in Mesopotamia." The public is invited.| 17. In summaries, keep to one tense. In summarizing the action of a drama, the writer should always use the present tense. In summarizing a poem, story, or novel, he should preferably use the present, though he may use the past if he prefers. If the summary is in the present tense, antecedent action should be expressed by the perfect; if in the past, by the past perfect. |An unforeseen chance prevents Friar John from delivering Friar Lawrence's letter to Romeo.
Juliet, meanwhile, owing to her father's arbitrary change of the day set for her wedding, has been compelled to drink the potion on Tuesday night, with the result that Balthasar informs Romeo of her supposed death before Friar Lawrence learns of the nondelivery of the letter.| But whichever tense be used in the summary, a past tense in indirect discourse or in indirect question remains unchanged. |The Legate inquires who struck the blow.| Apart from the exceptions noted, whichever tense the writer chooses, he should use throughout. Shifting from one tense to the other gives the appearance of uncertainty and irresolution (compare Rule 15). In presenting the statements or the thought of some one else, as in summarizing an essay or reporting a speech, the writer should avoid intercalating such expressions as "he said," "he stated," "the speaker added," "the speaker then went on to say," "the author also thinks," or the like. He should indicate clearly at the outset, once for all, that what follows is summary, and then waste no words in repeating the notification. In notebooks, in newspapers, in handbooks of literature, summaries of one kind or another may be indispensable, and for children in primary schools it is a useful exercise to retell a story in their own words. But in the criticism or interpretation of literature the writer should be careful to avoid dropping into summary. He may find it necessary to devote one or two sentences to indicating the subject, or the opening situation, of the work he is discussing; he may cite numerous details to illustrate its qualities. But he should aim to write an orderly discussion supported by evidence, not a summary with occasional comment. Similarly, if the scope of his discussion includes a number of works, he will as a rule do better not to take them up singly in chronological order, but to aim from the beginning at establishing general conclusions. 18. Place the emphatic words of a sentence at the end. The proper place for the word, or group of words, which the writer desires to make most prominent is usually the end of the sentence. |Humanity has hardly advanced in fortitude since that time, though it has advanced in many other ways.||Humanity, since that time, has advanced in many other ways, but it has hardly advanced in fortitude.| |This steel is principally used for making razors, because of its hardness.||Because of its hardness, this steel is principally used in making razors.| The word or group of words entitled to this position of prominence is usually the logical predicate, that is, the new element in the sentence, as it is in the second example. The effectiveness of the periodic sentence arises from the prominence which it gives to the main statement. |Four centuries ago, Christopher Columbus, one of the Italian mariners whom the decline of their own republics had put at the service of the world and of adventure, seeking for Spain a westward passage to the Indies as a set-off against the achievements of Portuguese discoverers, lighted on America.| |With these hopes and in this belief I would urge you, laying aside all hindrance, thrusting away all private aims, to devote yourselves unswervingly and unflinchingly to the vigorous and successful prosecution of this war.| The other prominent position in the sentence is the beginning. Any element in the sentence, other than the subject, becomes emphatic when placed first. 
|Deceit or treachery he could never forgive.| |So vast and rude, fretted by the action of nearly three thousand years, the fragments of this architecture may often seem, at first sight, like works of nature.| A subject coming first in its sentence may be emphatic, but hardly by its position alone. In the sentence, |Great kings worshipped at his shrine,| the emphasis upon kings arises largely from its meaning and from the context. To receive special emphasis, the subject of a sentence must take the position of the predicate. |Through the middle of the valley flowed a winding stream.| The principle that the proper place for what is to be made most prominent is the end applies equally to the words of a sentence, to the sentences of a paragraph, and to the paragraphs of a composition.
What effects do differentiated instruction and professional learning communities have on school culture? How does differentiated instruction reflect the attitudes and values of a school building? How can the development of a professional learning community and differentiated professional development benefit students? Identify two examples, one negative and one positive, in your school or workplace that affect school culture. Identify how a professional learning community could benefit your school or workplace culture in the context of these situations. What effects do differentiated instruction and professional learning communities have on school culture? DI can work well in struggling communities, but all members must subscribe to the theory and participate. People sometimes resist mandated ideology, especially if the tone of the environment has been different. Research supports the effectiveness of DI and PLCs, though no theory works everywhere. Differentiated instruction, however, is an ideal philosophy because society is different now, with more special-education, gifted, and bilingual children mainstreamed into classrooms. The school of the past in the white suburban neighborhood still exists, but in fewer and fewer places. How does differentiated instruction reflect the attitudes and values of a school building? -The question is posed with the hope of support and cheerleading for the cause. ... The future of education will depend on evaluating the value of differentiated instruction and related methodologies.
Material Culture and Non-Material Culture Material culture refers to the physical features that define a particular culture, society, or group, such as homes, schools, businesses, churches, nightlife, etc. These structures create a perceptual schema for describing the members and overall atmosphere of a society. For example, Penticton, BC is known as “Penticton & Wine Country” because of the vast vineyards. On the other hand, non-material culture refers to the non-physical aspects (languages, symbols, norms, values) of a culture or society, which serve to define the feelings, morals, or beliefs of the people in that group. Southern Alberta has a dominant Mormon population of approximately 10,000 people. As a result, perceptions of people in those areas are shaped by their religious background.
A lot of interesting things happen in the upper atmosphere of our world. Most of the high-energy photons of the electromagnetic spectrum are filtered out by the time light gets to the surface of the earth. However, in the extreme upper atmosphere there are photons striking the atmosphere of such high energy that they initiate reactions of molecules or even change the nature of atoms themselves. Ultraviolet light is responsible for initiating chemical reactions through a process called photodissociation. Molecules are torn apart by the energy of the ultraviolet photon. Once the atoms are separated they can then come back together again; possibly, the atoms can form different combinations, thus allowing new molecules to be produced. Ozone is produced in this way: it is produced by the photodissociation of Oxygen. Oxygen is produced from the photodissociation of water. Some have judged that as much as 25% of the Oxygen in our world could come from reactions occurring in the upper atmosphere. If this large production of Oxygen in the upper atmosphere is a reality, then the reducing atmosphere postulated by evolutionists to allow for the generation of biological molecules would be in jeopardy. It is interesting to note that the rocks in the precambrian contain metal oxides. The rocks are not found in a reduced state. Cosmic rays, which contain even higher levels of energy than ultraviolet light, cause some of the atoms in the upper atmosphere to fly apart into pieces. Neutrons that come from these fragmented molecules run into other molecules. When a neutron collides with a Nitrogen 14 atom, the Nitrogen 14 turns into Carbon 14 (a proton is also produced in the reaction). So in this reaction, a neutron is captured by the Nitrogen atom and a proton is released. Thus in the Nitrogen atom, a proton is effectively converted into a neutron, which allows a Carbon 14 atom to be produced. Two other reactions (Oxygen 17 reacting with neutrons, and He 4 reacting with Carbon 13) both produce Carbon 14, but with much smaller yields. It has been estimated that about 21 pounds of Carbon 14 is produced every year in the upper atmosphere. So in addition to Carbon 12 and Carbon 13, which are both naturally occurring, Carbon 14 is also naturally occurring in our world. However, unlike both Carbon 12 and 13, Carbon 14 is unstable. The only reason why Carbon 14 continues to be found on Earth is because of its continued production in the upper atmosphere. If Carbon 14 is being produced in the upper atmosphere by cosmic ray bombardment at a constant rate, then Carbon 14 must be accumulating in the world. Well, that would be the case if Carbon 14 weren't unstable and degrading just as fast. It turns out that the production and degradation of Carbon 14 are going on at the same rate. The two reactions are at equilibrium or nearly at equilibrium. This Carbon 14/Nitrogen 14 equilibrium does not only exist in the upper atmosphere where Carbon 14 is produced. Winds cause the Carbon 14 to be carried throughout the world. In addition, most of the Carbon 14 reacts with Oxygen to produce atmospheric CO2. Because CO2 gets incorporated into trees and plants, the plants also possess the same levels of Carbon 14 as in the atmosphere. The food that we eat is also contaminated with the same level of Carbon 14. So essentially the whole Biosphere contains Carbon 14 at the same equilibrium concentration. This equilibrium is true for most of the Biosphere except for marine environments.
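Written out in standard nuclear notation, the neutron-capture reaction described above (added here for reference; this is the textbook form of the reaction) is:

$$ n + {}^{14}_{7}\mathrm{N} \;\rightarrow\; {}^{14}_{6}\mathrm{C} + p $$

The nitrogen atom keeps its mass number of 14 but drops from 7 protons to 6, which is what makes it carbon.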
More will be said on this later. Any animal or plant will contain the Biosphere level of Carbon 14. We, for example, ingest food containing Carbon 14 and we also defecate wastes containing Carbon 14. In addition, Carbon 14 is also reconverting back into Nitrogen 14 in our bodies. Only when one dies is this process disrupted. At death there is no further ingestion of Carbon 14, so the Carbon 14 concentration will slowly decrease as individual Carbon 14 atoms degrade back into Nitrogen 14 atoms. If it can be assumed that the concentration of Carbon 14 has always been at equilibrium at the same level as it is today, or if we are able to produce radiocarbon calibration curves that determine fluctuations in the C14 concentration through time, then we can use this assumption to determine how long ago a specimen was separated from the dynamic Biosphere. (We will simplify the problem by not using any of the calibration curves. So for the sake of our discussion, we will assume that the C14 concentration in the atmosphere has always been the same through time.) Any animal or plant continually exchanges organic molecules (Carbon containing molecules) with the environment. So all living organisms will contain the Biosphere level of Carbon 14. However, once an organism dies, and is somehow buried, the exchange of Carbon stops. As a consequence, the level of Carbon 14 in the buried carcass decreases according to the rate at which Carbon 14 degrades into Nitrogen 14 within the body. When Scientists uncover fossils and other artifacts that contain Carbon, they can determine how long that sample was buried by determining the amount of Carbon 14 that has been lost since it was buried in the ground. They know the level of Carbon 14 in the Biosphere (assuming it hasn't changed), and they can measure the level of Carbon 14 in the specimen, so what they do is determine the difference. That difference represents the loss in Carbon 14 that the specimen experienced while it was in the ground. Now all the scientist has to do is determine how many half-lives the loss represents. The number of half-lives will then give a number showing how long the sample was isolated from the biosphere. The graph that originally appeared here showed the loss that four different specimens would experience before being recovered for measurement. Sample D has 1/2 the radioactive Carbon 14 that would be expected if that sample were part of the Biosphere. 1/2 the normal level of Carbon 14 indicates that Sample D has been buried for one half-life, or 5730 years. Sample C has 1/4 the radioactive Carbon 14, which indicates that it has been buried for two half-lives, or 11460 years. Sample B has 1/8 the radioactive Carbon 14, indicating that it was buried for three half-lives, or 17190 years. Finally, Sample A has 1/128 the radioactive Carbon 14, indicating that it was buried for seven half-lives, or 40110 years. In real life there are fluctuations in the Biosphere Carbon 14 levels through time that must be accounted for in the calculation. Also, all Carbon 14 dates must be in reference to the total amount of Carbon (Carbon 12) found in the sample. The normal ratio of Carbon 14 to Carbon 12 as found in our present Biosphere is 1 to 848,000,000,000. The radiation is actually quite small. There are only 13.6 disintegrations per gram of Carbon per minute. Any loss of Carbon 14 would result in much smaller ratios and disintegrations of Carbon 14 atoms.
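The half-life arithmetic is mechanical enough to script. Here is a minimal sketch (Python, added for illustration; the 5730-year half-life, the four sample fractions, and the 13.6 disintegrations-per-minute figure all come from the surrounding text):

```python
import math

HALF_LIFE_YEARS = 5730  # half-life of Carbon 14 used in this article

def burial_age(fraction_remaining):
    # Number of half-lives elapsed, times years per half-life
    half_lives = math.log2(1 / fraction_remaining)
    return half_lives * HALF_LIFE_YEARS

# The four samples described above
for name, frac in [("D", 1/2), ("C", 1/4), ("B", 1/8), ("A", 1/128)]:
    print(f"Sample {name}: {burial_age(frac):,.0f} years")
# Sample D: 5,730 years ... Sample A: 40,110 years

# Residual activity after 13 half-lives, anticipating the figures below
dpm = 13.6 / 2**13                   # disintegrations per minute per gram
print(dpm, dpm * 60, dpm * 60 * 24)  # per minute, per hour, per day
```

Running this reproduces the ages quoted for the four samples and the tiny decay rates discussed next.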
One gram of carbon that originally averaged 13.6 disintegrations per minute would average only 0.00166 disintegrations per minute after 13 half-lives (75,000 years). That is the same as 0.0996 disintegrations per hour, or 2.39 disintegrations per day! It is amazing that such small levels of radiation can be detected. With the accelerator mass spectrometry (AMS) technique, which directly counts C14 atoms rather than waiting for decays, it is still possible to detect samples that have undergone as many as 13 half-lives (75,000 years) of Carbon 14 degradation. Copyright © 1998 - 2017 by Michael Brown all rights reserved Officially posted September 25, 1998 last revised January 4, 2017
Editor's note: Climate change can be seen spreading across the landscape. In some cases it's visible through the retreat of glaciers. In others it's reflected in the warming waters in Rocky Mountain streams at the height of summer, or in the intensity of storms that rake the landscape. The National Parks Conservation Association, in a just-released 60-page report, looks at how climate change might impact wildlife in the national parks, and suggests actions that can be taken to mitigate those impacts. Over the coming days we'll share this report with you. This, the first installment, looks at five steps that can be taken to help wildlife in the parks cope with climate-change impacts. The entire report can be found at this page. The effects of climate change have been visible for years in our national parks. Glaciers are disappearing faster than scientists had predicted even a few years ago. Native trees and animals are losing ground because changing temperature and weather patterns are making the availability of food, water and shelter less certain. Fish and wildlife are being driven from their national park homes by changes that are unfolding faster than the animals' ability to adapt. Climate change is here and now, affecting the coral reefs in Florida at Biscayne National Park, lodgepole pines in Rocky Mountain National Park and animals that rely on snow in Yellowstone National Park. The danger signs are a clear call to action for the National Parks Conservation Association, a nonprofit citizens' organization that works to enhance and restore America's national parks for present and future generations. What's happening in the parks is symptomatic of changes unfolding across the larger landscapes to which they are inseparably connected, the same landscapes that contain our communities. Changes that harm wildlife — depriving them of food, water, or shelter — will ultimately harm us. Given the iconic importance of parks, and that they protect core ecoregions of this country, working to safeguard parks and their wildlife from climate change should be a central strategy in safeguarding our nation from climate change. Solutions are neither simple nor quick. It will take decisive action on the part of our federal government and all of us to meet the challenge and keep faith with future generations. To avoid the potentially catastrophic loss of animal and plant life, it is imperative that we wean ourselves from energy sources like coal and oil that are accelerating rising temperatures and causing unnatural climate change. And it is equally imperative that we pursue new strategies to preserve functioning ecosystems and the full diversity of life they support. America's national parks are showing the signs of climate change. From Yosemite's forests in California to the Gulf Stream waters of the Florida coast, from the top of the Rocky Mountains to the shores of the Chesapeake Bay, these lands and the incredible diversity of life they support are all feeling the heat. The choice is now ours to either chronicle their decline or take actions to make our national parks part of the climate change solution. If we fail to act, many species of fish and wildlife could disappear from the parks — or even become extinct. That we must reduce global warming pollution to protect our natural world and human communities is now understood by many. But that is not all we must do.
Unnatural climate change is already underway and will continue for decades even if we put a stop to all global warming pollution today. Additional steps must be taken now to safeguard wildlife. We must protect the places that will help wildlife survive as the climate changes, manage wildlife anticipating the changes ahead, and improve the ecological health of the national parks and their surrounding landscapes to give fish and wildlife a fighting chance to survive unnatural climate change. National Parks Conservation Association advocates five steps that, taken together, will help safeguard fish and wildlife, their homes, and our communities, from climate change. Here's what needs to be done: #1: Stop contributing to climate change Many wildlife species are struggling to cope with climate changes already underway. Some will not be able to endure much more change, and could disappear from national parks and even go extinct if climate change is unchecked. We must limit its effects by rapidly reducing greenhouse gas emissions and switching to less-polluting sources of energy. ■ Coral reefs protected by Biscayne and Virgin Islands national parks might not survive if we fail to reduce carbon dioxide pollution that is warming and acidifying the ocean. ■ Salmon might disappear from Olympic, North Cascades, and Mount Rainier national parks if climate change continues to alter stream flows, increase water temperatures, and create extreme downpours that wipe out young salmon. ■ Grizzly bears, birds, fish, and other animals in Yellowstone and Rocky Mountain national parks could decline if the lodgepole and whitebark pine forests that sustain them continue to be wiped out by the advance of bark beetles, drought, and other climate change-related forces. #2: Reduce and eliminate existing harms that make wildlife more vulnerable to climate change The damaging effects of climate change are compounded by existing stresses on wildlife. Air and water pollution, development of adjacent wild lands, logging and mining, and other forces are harming national park wildlife now, and adding climate change to the mix could be disastrous. By reducing and eliminating these environmental harms we can significantly decrease the vulnerability of plants, fish, and wildlife to climate change as well as produce rapid and tangible benefits — such as clean air and water — that both people and wildlife need to thrive. ■ Water pollution and non-native species are already stressing waterfowl, shorebirds, and migratory birds that visit Sleeping Bear Dunes National Lakeshore and other national parks in the Great Lakes region. By cleaning up water pollution and combating invasive species, we can give birds that depend on the Great Lakes a better chance to survive climate-related changes. ■ Historic overharvesting, disease, and pollution have caused a massive decline in Chesapeake Bay oysters. A more aggressive approach to reducing these threats would help the bay's oysters survive climate change stresses such as warmer waters and heavier floods that flush pollution into the Bay and introduce more fresh water than the oysters can tolerate. ■ Pesticides, disease, and non-native trout have nearly eliminated the mountain yellow-legged frog from Yosemite, Sequoia, and Kings Canyon national parks. Reducing these threats and restoring healthy populations of frogs throughout the parks could help them survive the loss of shallow ponds and streams expected to occur in some areas as the climate continues to warm.
#3: Give wildlife freedom to roam Climate change will cause some wildlife to move outside the parks’ protected boundaries, while other species may move in. Because national parks, like all protected areas, are interconnected with surrounding landscapes, cooperation and coordination among all land owners — public and private — is essential to preserve functioning ecosystems and the wildlife they support. National parks can play a key role in conserving wildlife across the landscape. In some cases they provide natural corridors; in other cases new corridors will be needed to connect parks and other protected lands so that wildlife can move in response to climate change. ■ Thanks to the efforts of the National Park Service, there is an unbroken, 2,175-mile corridor of protection, the Appalachian National Scenic Trail. Stretching from Georgia, north through Great Smoky Mountains and Shenandoah national parks, to Maine, the trail and its network of parks stands ready to serve as a corridor and refuge for species that need to move in response to climate change. ■ Desert bighorn sheep that frequent Arches, Canyonlands, and Capitol Reef national parks shift location in response to seasons and weather. As climate change alters precipitation and vegetation patterns, new migration patterns could emerge. Working together, wildlife managers and private landowners can ensure pathways are available for bighorn sheep to access food and water they need to thrive. ■ The caribou that live in and pass through Alaska’s high arctic parks — Noatak and Bering Land Bridge national preserves, Kobuk Valley National Park, and Gates of the Arctic National Park & Preserve — also roam across a landscape with a patchwork of federal, state, and tribal owners. As climate change renders traditional calving grounds and winter feeding areas unsuitable, wildlife managers working together can identify new habitat and ensure the path is clear for caribou to get there. #4: Adopt “climate smart” management practices “Climate smart” management includes four key elements: (1) training national park managers to build climate change into their work, (2) establishing guidance and policies that enable park staff to work closely and equally with other federal, state, local and private landowners, (3) providing sufficient funding and staffing for the challenge at hand, and (4) creating a political and organizational setting that facilitates appropriate, timely, and collaborative action. While research and monitoring should be a part of any park’s approach to “climate smart” management, real focus needs to be placed on implementing management changes now based on what we already know. ■ For wolverines in Yellowstone and Glacier national parks, the loss of deep winter snows could mean fewer winter-killed animals that are essential to their diet. A healthy wolf population creates ample carrion. Further research could confirm that maintaining a healthy wolf population is a “climate smart” strategy for helping wolverines survive as winter snows decline. ■ Nestled between its larger neighbors in the Sierra Nevada Mountains — Yosemite and Sequoia — Devils Postpile National Monument is home to a great diversity of wildlife. But at only 800 acres, the park cannot by itself meaningfully address climate change impacts on its wildlife. So the park superintendent is developing a plan in coordination with managers of the surrounding national forest to protect wildlife throughout the larger ecosystem. 
■ Northeast coastal parks like Acadia National Park and Fire Island National Seashore provide critical nesting and feeding areas along the Atlantic migratory flyway. Sea level rise threatens to swamp some bird habitat along the flyway. Working together, resource managers from the Park Service and other federal, state, and local agencies can identify and protect critical habitat, restore marshes, and take steps that allow coastal habitats the opportunity to shift inland. #5: National parks lead by example With more than 270 million annual visitors, a core education mission, and a tradition of scientific leadership, national parks have an unparalleled ability to engage Americans in the fight against climate change. National parks can help visitors understand climate change already occurring, the vulnerabilities of tomorrow, and how we can all reduce our contribution to global warming. National parks can also serve as natural laboratories for testing innovative ways to safeguard wildlife from the effects of climate change, and to reduce greenhouse gases that are causing climate change. ■ Throughout the country, national parks such as Everglades, the Smokies, Glacier, and Yosemite, have banded together as Climate Friendly Parks. They share common goals of reducing their own greenhouse gas emissions and demonstrate sustainable solutions to others. NPCA operates Do Your Part!, a program that carries the parks' sustainability message to the general public and provides individuals with opportunities to do their part to reduce global warming pollution. ■ The National Park Service is beginning to experiment with scenario planning, a model that identifies future scenarios that could occur with increasing climate change and explores management responses for each. The model will help managers develop action and monitoring plans that give them the information and flexibility they need to maximize not the chance of the single "best" outcome (a risky approach when uncertainty is high) but the chance of some positive outcome. Tomorrow: Coral Reefs of Southern Florida and the Caribbean Jennie Hoffman, PhD, Senior Scientist, Climate Adaptation, EcoAdapt Eric Mielbrecht, MS, Senior Scientist and Director of Operations, EcoAdapt Lara Hansen, PhD, Chief Scientist and Executive Director, EcoAdapt
(PhysOrg.com) -- Few areas of science are more controversial than cold fusion, the hypothetical near-room-temperature reaction in which two smaller nuclei join together to form a single larger nucleus while releasing large amounts of energy. In the 1980s, Stanley Pons and Martin Fleischmann claimed to have demonstrated cold fusion - which could potentially provide the world with a cheap, clean energy source - but their experiment could not be reproduced. Since then, no other claim of cold fusion has been substantiated, and studies have shown that cold fusion is theoretically implausible, causing mainstream science to become highly skeptical of the field in general. Despite the intense skepticism, a small community of scientists is still investigating near-room-temperature fusion reactions. The latest news occurred last week, when Italian scientists Andrea Rossi and Sergio Focardi of the University of Bologna announced that they developed a cold fusion device capable of producing 12,400 W of heat power with an input of just 400 W. Last Friday, the scientists held a private, invitation-only press conference in Bologna, attended by about 50 people, where they demonstrated what they claim is a nickel-hydrogen fusion reactor. Further, the scientists say that the reactor is well beyond the research phase; they plan to start shipping commercial devices within the next three months and start mass production by the end of 2011. Rossi and Focardi say that, when the atomic nuclei of nickel and hydrogen are fused in their reactor, the reaction produces copper and a large amount of energy. The reactor uses less than 1 gram of hydrogen and starts with about 1,000 W of electricity, which is reduced to 400 W after a few minutes. Every minute, the reaction can convert 292 grams of 20°C water into dry steam at about 101°C. Since raising the temperature of that much water by 80°C and converting it to steam requires about 12,400 W of power, the experiment provides a power gain of 12,400/400 = 31. As for costs, the scientists estimate that electricity can be generated at a cost of less than 1 cent/kWh, which is significantly less than coal or natural gas plants. "The magnitude of this result suggests that there is a viable energy technology that uses commonly available materials, that does not produce carbon dioxide, and that does not produce radioactive waste and will be economical to build," according to this description of the demonstration. Rossi and Focardi explain that the reaction produces radiation, providing evidence that the reaction is indeed a nuclear reaction and does not work by some other method. They note that no radiation escapes due to lead shielding, and no radioactivity is left in the cell after it is turned off, so there is no nuclear waste. The scientists explain that the reactor is turned on simply by flipping a switch and it can be operated by following a set of instructions. Commercial devices would produce 8 units of output per unit of input in order to ensure safe and reliable conditions, even though higher output is possible, as demonstrated. Several devices can be combined in series and parallel arrays to reach higher powers, and the scientists are currently manufacturing a 1 MW plant made with 125 modules. Although the reactors can be self-sustaining so that the input can be turned off, the scientists say that the reactors work better with a constant input. The reactors need to be refueled every 6 months, which the scientists say is done by their dealers.
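As a back-of-the-envelope check on the power figures quoted above (a sketch added here, not part of the original article; the specific-heat and latent-heat constants are standard textbook values, not from the article):

```python
# Power needed to heat 292 g of water per minute from 20 C to 100 C and boil it off
mass_g_per_min = 292
c_water = 4.19   # J/(g*K), specific heat of liquid water
L_vap = 2260     # J/g, latent heat of vaporization at 100 C

energy_per_min = mass_g_per_min * (c_water * 80 + L_vap)  # ~7.6e5 J per minute
power_watts = energy_per_min / 60                         # ~12,600 W
print(round(power_watts), round(power_watts / 400, 1))    # output power, gain over 400 W input
```

This lands within a couple percent of the article's 12,400 W figure and its quoted gain of about 31; the small difference comes down to the choice of constants.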
The scientists also say that one reactor has been running continuously for two years, providing heat for a factory, though they provide little detail about this case. Rossi and Focardi's paper on the reactor has been rejected by peer-reviewed journals, but the scientists aren't discouraged. They published their paper in the Journal of Nuclear Physics, an online journal founded and run by themselves, which is obviously cause for a great deal of skepticism. They say their paper was rejected because they lack a theory for how the reaction works. According to a press release (via Google Translate), the scientists say they cannot explain how the cold fusion is triggered, "but the presence of copper and the release of energy are witnesses." The fact that Rossi and Focardi chose to reveal the reactor at a press conference, and the fact that their paper lacks details on how the reactor works, has made many people uncomfortable, and the demonstration has not been widely covered by the general media. However, last Saturday, the day after the demonstration, the scientists answered questions in an online forum, which has generated a few blog posts. One comment in the forum contained a message from Steven E. Jones, a contemporary of Pons and Fleischmann, who wrote, "Where are the quantitative descriptions of these copper radioisotopes? What detectors were used? Have the results been replicated by independent researchers? Pardon my skepticism as I await real data." Steven B. Krivit, publisher of the New Energy Times, noted that Rossi and Focardi's reactor seems similar to a nickel-hydrogen low-energy nuclear reaction (LENR) device originally developed by Francesco Piantelli of Siena, Italy, who was not involved with the current demonstration. In a comment, Rossi denied that his reactor is similar to Piantelli's, writing, "The proof is that I am making operating reactors, he is not." Krivit also noted that Rossi has been accused of a few crimes, including tax fraud and illegally importing gold, which are unrelated to his research. Rossi and Focardi have applied for a patent that has been partially rejected in a preliminary report. According to the report, "As the invention seems, at least at first, to offend against the generally accepted laws of physics and established theories, the disclosure should be detailed enough to prove to a skilled person conversant with mainstream science and technology that the invention is indeed feasible. In the present case, the invention does not provide experimental evidence (nor any firm theoretical basis) which would enable the skilled person to assess the viability of the invention. The description is essentially based on general statement and speculations which are not apt to provide a clear and exhaustive technical teaching." The report also noted that not all of the patent claims were novel. Giuseppe Levi, a nuclear physicist from INFN (the Italian National Institute of Nuclear Physics), helped organize last Friday's demonstration in Bologna. Levi confirmed that the reactor produced about 12 kW and noted that the energy could not be of chemical origin, since there was no measurable hydrogen consumption. Levi and other scientists plan to produce a technical report on the design and execution of their evaluation of the reactor. Also at the demonstration was a representative of Defkalion Energy, based in Athens, who said that the company was interested in a 20 kW unit and that it would make a public announcement within two months.
For Rossi and Focardi, this kind of interest is what matters most. "We have passed already the phase to convince somebody," Rossi wrote in his forum. "We are arrived to a product that is ready for the market. Our judge is the market. In this field the phase of the competition in the field of theories, hypothesis, conjectures etc etc is over. The competition is in the market. If somebody has a valid technology, he has not to convince people by chattering, he has to make a reactor that work and go to sell it, as we are doing." He directed commercial inquiries to info(at)leonardocorp1996.com.
The esophagus is the organ that connects your throat to the stomach. It is a thin, long tube, measuring about 10 inches, that carries whatever food you eat down to the stomach for digestion. Esophageal cancer occurs when the soft, delicate lining of the esophagus is affected, and it can develop in any part of the esophagus. Esophageal cancer is of 2 types: squamous cell carcinoma (affecting the upper part of the esophagus) and adenocarcinoma (affecting the lower part). Men are more often affected than women. This type of cancer develops in five stages, ranging from mild to most severe. The esophagus is a muscular tube consisting of the following layers: the inner lining, or mucosa, is kept very moist so that food can easily pass to the stomach. Next comes the submucosa, whose function is to secrete mucus to keep the inner layer moist. Next is the muscle layer, which helps push the food down, and last is the outer layer, which covers the esophagus superficially. The cell is the fundamental unit of your body, building tissues and organs. Normally cells grow old and die, and new cells are formed to replace them. In cancer patients this process breaks down: old cells do not die, more and more new cells are formed, and the extra cells build up into a tumor. Cell growth can be malignant (cancerous) or benign (non-cancerous). Benign growths are not life-threatening; they do not disturb the tissues that surround them and will not spread to other organs. Malignant growths, by contrast, can invade and damage neighboring tissues and spread to other parts of your body. Esophageal cancer begins initially with difficulty in swallowing food. You may have problems chewing and swallowing some food items. It can cause chest pain or a burning sensation in your chest (which can also be due to other causes). There may be loss of weight, tiredness, choking while eating, and frequent indigestion. It can cause an irritating cough when food particles get stuck in the esophagus. Some people may feel pain while swallowing and a change in voice that does not go away even after taking medications. For many people, esophageal cancer does not show any early symptom. Risk Factors: Doctors still cannot explain why one person develops this cancer and another does not. Some of the risk factors are being male, being aged above 65 years, smoking and drinking regularly, being overweight, and having an acid reflux problem. A prolonged acid reflux problem can cause Barrett's esophagus, which can lead to adenocarcinoma. Research indicates that a diet low in vegetables and fruits can make you prone to esophageal cancer. The two types of esophageal cancer are adenocarcinoma, which affects the lower portion of the esophagus and is the more common type, and squamous cell carcinoma, which affects the thin lining of the middle portion of the esophagus. In rare cases, esophageal cancer can cause bleeding and severe weight loss. The doctor will examine the symptoms and order an endoscopy. In this procedure, a thin tube is passed through the throat to examine the inner portion of the esophagus through a lens. You may also be given barium liquid to drink, after which a series of X-rays is taken immediately; the resulting X-rays will highlight any extra growth of cells inside. In some cases, a biopsy is done by removing a small tissue sample from the esophagus.
Stages of Esophageal Cancer: Knowing the exact stage of your cancer helps the doctor decide on treatment options. - In the first stage, cancer affects only the topmost layer of the lining of the esophagus. - In the second stage, cancer invades deeper into the esophageal lining and spreads to the lymph nodes. - In the third stage, cancer spreads into the deepest layers, affecting nearby tissues. - The fourth and final stage of cancer is widespread and usually affects other body parts as well. Treatment depends on age, health condition, and how far the cancer cells have spread. A number of methods are available, including chemotherapy, radiation therapy, and surgery. Treatment normally involves a team of specialists, such as a gastroenterologist, a thoracic surgeon, an oncologist, and radiation therapists. You may also need help from a dietician if you have problems swallowing. You can always get a second opinion from another doctor before commencing treatment. Usually a combination of treatments along with surgery is recommended for treating esophageal cancer. If the cancer cells are confined to a very small area (affecting only the superficial layer), your surgeon will remove the cancer cells alone, using an endoscopic procedure. If the cancer is in the second stage (spreading to the lymph nodes), your surgeon will consider removing a small portion of your esophagus, using a small portion of the colon to replace the missing section. If the cancer cells have advanced further, an esophagogastrectomy is done to remove most of the upper part of the esophagus; the stomach is then pulled up and attached to the remaining portion of the esophagus. Any of the surgeries mentioned above carries a risk of infection or leakage, and depending on the extent of the disease it may or may not be successful. For minor forms of esophageal cancer, the tube is widened and a metal stent is placed inside the esophagus. For some people, a small feeding tube is inserted directly into the stomach for nutrition. Chemotherapy is given before or after surgery for treating esophageal cancer. Suitable drugs are given orally to destroy cancer-causing cells. These drugs can also attack healthy cells, causing infections and bleeding, and may cause adverse effects like joint pain, rash, and a tingling sensation in the hands and feet. Radiation Therapy: Radiation is given to kill cancer cells either externally or internally. For internal radiation, the surgeon applies a local anesthetic to numb your throat and inserts a thin tube through which radioactive material is delivered. In many cases a combination of chemotherapy and radiation gives the desired effect. Clinical trials have become a recent addition to the options for treating esophageal cancer. Lifestyle Changes: You will need to cope with a variety of problems, like difficulty in swallowing and loss of weight due to cancer. Doctors may suggest that you stay on a liquid diet or tube feeding until the area has completely healed. Eat foods that are soft and easy to swallow, and eat small, frequent meals rather than two larger ones. Include vitamin supplements in your diet in consultation with your doctor. A variety of alternative treatments, such as acupuncture, hypnosis, yoga, and relaxation techniques, are available for pain relief. Living with Cancer: There may be strong feelings of sadness and shock when you are diagnosed with cancer. Learn as much as you can about the stage of your cancer from your doctor. Stay connected with your family and friends.
Join a support group to share your feelings and to be reminded that you are not alone. You can reduce the risk of developing esophageal cancer by quitting smoking and drinking. Include plenty of liquids, vegetables, and fruits in your diet. Exercise regularly and stay at a healthy weight.
Middle ear effusion Middle ear effusion is common in children aged 1-6. Glue ear refers to the gluey consistency of the fluid that remains in the middle ear; the fluid can be gluey or a straw-coloured, yellowy fluid. Middle ear effusion can occur following episodes of acute otitis media, or without acute inflammation due to other causes such as environmental factors, allergies, eustachian tube dysfunction, or other anatomical obstructive causes. Middle ear effusion impedes the transmission of sound waves from the eardrum through the middle ear to the inner ear. Therefore the most common presentation is hearing loss, associated with some ear pain, difficulty sleeping because of fullness in the ear, balance disturbance, irritability and occasionally fever. A middle ear effusion can become infected, and during these episodes children may have high fevers and severe pain associated with hearing loss. The primary treatment for middle ear effusions is a strong analgesic (pain killer) and occasionally antibiotics. Antibiotics do not routinely clear middle ear effusions unless they are infected. Middle ear effusions are diagnosed clinically by your general practitioner or by your ENT surgeon, by visualising the eardrum, which shows dullness, lack of mobility and sometimes bulging. There may be changes to the eardrum, with some scar tissue (known as tympanosclerosis) from having had the fluid for some time. Another way of diagnosing middle ear effusion is by performing tympanometry or an audiogram. Tympanometry is a very good tool for diagnosing middle ear effusion, as it does not require a response from the child and is quick. Middle ear effusions persisting for longer than 3 months require an ear, nose and throat referral and often require ventilation tubes (grommets). Middle ear effusions do not normally respond to decongestant therapy, nasal sprays or antihistamines. Children over 4 years of age with persistent middle ear effusions may also benefit from adenoidectomy, as it has a beneficial effect in reducing the recurrence of middle ear effusions. There may be some benefit in resolving the effusions by keeping children away from daycare, reducing allergy and inflammation within the nose and nasopharyngeal cavity (which may affect the eustachian tube), and having children perform eustachian tube exercises with ear-popping devices such as the Otovent and EarPopper. Ventilation tubes, known as grommets, are small PTFE plastic tubes of various sizes. They are inserted into the eardrum so they can drain the middle ear fluid and act as another ventilation hole for the middle ear, as the eustachian tube is not functioning well in these children. The tube acts as a drainage pathway as well as providing a port to equalise middle ear and outside pressure. Some grommets are 6mm in size and some up to 15mm; the actual hole varies from about 2mm to 10mm in size. Depending on the overall size of the grommet and the size of the side flanges, some grommets stay in for up to 9-12 months, and the bigger ones tend to stay in for up to 5 years. Grommets are inserted in children under a general anaesthetic. The general anaesthetic procedure is very short, approximately 15 minutes, and is performed by holding a mask over the child's face to breathe oxygen and some anaesthetic agents.
Children breathe spontaneously while the surgeon performs the grommet insertion using a microscope, making a very tiny perforation in the eardrum to release the fluid and suction it all out of the middle ear. The grommet is then placed in the middle of the eardrum. This is a day-stay procedure, usually done in the morning, and you will be in and out of the hospital within 3 hours. The child is normally back to normal and able to attend daycare or normal activities the next day. There may be some seepage of fluid over the next day or so if there was a significant amount of middle ear effusion at the time of surgery. The surgeon may provide you with some topical antibiotic drops to be placed in the ear canal to keep the grommet passage open and to wash away any of the seepage fluid. These antibiotic drops should only be used for a maximum of 5 days.
In one of our previous blogs, we learned how to insert symbols and special characters in Excel using the inbuilt 'Insert' ribbon tab > 'Symbols' function. Excel also provides a formula to insert characters using a character code: the CHAR function. Using the Excel CHAR function, you can use a character number to insert a symbol in Excel. - When To Use the CHAR Function in Excel - Syntax and Argument - CHAR Function – Examples in Excel - List of All ANSI Character Codes for the CHAR Excel Function - Do Not Miss These Points Let us begin now 😎 When To Use the CHAR Function in Excel The CHAR formula is used to insert a particular character using its character code as the input argument. This function belongs to the Text function group; therefore the result of this function is a text-formatted value. The function returns the character based on the code number from the character set of your computer, and different operating systems work differently in this regard: - Windows uses the ANSI character set - Macintosh uses the Macintosh character set In this tutorial, we cover the Windows ANSI character set. Syntax and Argument The argument of the CHAR function, =CHAR(number), is explained below: - number – In this argument, specify the number (ranging between 1 and 255) of the character you want to return. As mentioned in the above section, this function can return a different result on different operating systems. CHAR Function – Examples in Excel Finally, let us now learn how to use the CHAR function in Excel with the help of examples. Ex. 1 – Insert a Letter in an Excel Cell Using Its ANSI Character Number You can use the character number code to insert letters in an Excel cell (both capital and small). - For capital letters (A to Z), use the ANSI code numbers from 65 to 90. - For small letters (a to z), use the ANSI code numbers from 97 to 122. To insert the capital letter 'M' in a cell using its ANSI code, simply use the formula =CHAR(77). Similarly, to enter the small letter 'm' in an Excel cell, use =CHAR(109). Ex. 2 – Insert Numbers in an Excel Cell Using ANSI Codes Similarly, use the ANSI code values from 48 to 57 (in sequence) to insert the numbers 0 to 9 in an Excel cell using the CHAR formula. Therefore, to return the number 5 using its ANSI code, use the formula =CHAR(53). Ex. 3 – Insert the Copyright, Trade Mark (TM) and Registered Symbols in an Excel Cell In addition to letters and numbers, the CHAR function is useful for inserting the copyright (©), registered (®) and trademark (™) signs in an Excel cell. The ANSI codes for these symbols are 169 (©), 174 (®) and 153 (™), so the formulas =CHAR(169), =CHAR(174) and =CHAR(153) return the three symbols. Use CONCATENATE, CONCAT, TEXTJOIN, or the ampersand (&) operator to combine text and the above symbols, like this: ="ExcelUnlocked"&CHAR(174). As a result, Excel returns the output ExcelUnlocked® in the cell. A highly practical application of the CHAR function is inserting line breaks in a cell; I have already explained how to insert line breaks in an Excel cell in a separate tutorial here. Also, check this link to learn how to insert the degree symbol in Excel using different methods (including the CHAR formula code). List of All ANSI Character Codes for the CHAR Excel Function Alanwood has provided an entire list of the 255 ANSI characters on his website. Do Not Miss These Points - The CHAR function returns the #VALUE!
error if you specify a number argument outside the range 1 to 255. - The CHAR function has been available since the Excel 2000 version. This brings us to the end of the tutorial on the CHAR Excel formula. Thank You 🙂
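As a side note for readers who want to experiment with character codes outside Excel, Python's built-in chr() performs the same code-to-character lookup for the ranges used above; this cross-check is my own illustration, not part of the original tutorial. One caveat: code 153 for ™ is specific to the Windows ANSI (CP-1252) set, so in Unicode-based Python it must be decoded explicitly.

# Python equivalents of the =CHAR() examples above
print(chr(77), chr(109))   # M m  (codes 65-90 are A-Z, 97-122 are a-z)
print(chr(53))             # 5    (codes 48-57 are the digits 0-9)
print(chr(169), chr(174))  # © ®  (these codes match Excel's ANSI values)
print(bytes([153]).decode("cp1252"))   # ™  (code 153 is CP-1252-specific)
print("ExcelUnlocked" + chr(174))      # ExcelUnlocked®, like ="ExcelUnlocked"&CHAR(174)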
Multiplication Tables 2-10 Missing Factor Worksheet - Multiplication tables underpin calculations in addition, subtraction, multiplication and division alike. However, sometimes a problem gives you the product and only one of the two factors, and you must work out the other. That's where the missing factor worksheet comes in: this sheet will help you figure out which number a given factor is being multiplied by to reach the product. - Multiplication Tables 2-10 Mixed Practice Worksheet - Multiplication Tables 2-12 Mixed Practice Worksheet Multiplication Tables 2-10 Missing Factor Worksheet Using a worksheet like the one provided can help students practice their multiplication skills and improve their speed and accuracy in solving multiplication problems. It can also help students become more familiar with the multiplication tables from 2 to 10, which is an important foundation for more advanced math concepts. Working on a multiplication worksheet can also help students develop their problem-solving skills and improve their attention to detail. It requires students to carefully read and understand the problem, identify the missing factor, and use their knowledge of the multiplication tables to find the correct answer. Overall, practising multiplication with a worksheet can help students develop a strong foundation in math and improve their overall math skills. It can also be a fun and engaging way for students to learn and practice their multiplication facts. Printable Multiplication Tables 2 to 10 with Missing Factor Worksheet Here is a printable multiplication tables worksheet with missing factors that you can use to help students build confidence in their multiplication skills. This worksheet covers the multiplication tables from 2 to 10 and has 20 problems, with one factor missing in each problem. A printable reference table can also be a helpful tool for students who have trouble recalling the products in the table by themselves. To use this worksheet to build confidence in students' multiplication skills, have them work on it in a quiet and focused setting. Encourage them to take their time and carefully read and understand each problem before attempting to solve it. You can also have them check their work after completing each problem to make sure they have found the correct answer. As students become more confident and accurate in their multiplication skills, you can increase the difficulty of the problems or have them work on more advanced multiplication worksheets. Encourage them to continue practising and learning new math concepts to further develop their skills and confidence. 2 to 10 Multiplication Tables Worksheet with Missing Factor PDF When working with missing-factor problems, it is important to find the missing factor before performing any further operations; filling it in first ensures that the rest of the calculation will be correct and accurate. Missing factor worksheets for the 2 to 10 multiplication tables are important because they allow students to practice and reinforce their understanding of the multiplication tables. This Multiplication Tables 2-10 Missing Factor Worksheet comes with an answer key to help students check their progress; the answer sheet lists the student's name and the correct answer for each row and column.
The purpose of this free Missing Factor Worksheet for Multiplication Tables 2-10 is to help users practice the multiplication tables properly. By using a missing factor worksheet, users can avoid common mistakes and improve their accuracy.
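As a rough illustration of how missing-factor problems like the ones on this worksheet could be generated programmatically, here is a small Python sketch; the layout and the 20-problem default are my own choices, not a description of the printable above.

import random

def missing_factor_problems(n_problems=20, low=2, high=10, seed=None):
    """Generate (shown_factor, product, hidden_factor) triples for a missing-factor sheet."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n_problems):
        hidden = rng.randint(low, high)  # the factor the student must find
        shown = rng.randint(low, high)   # the factor printed on the sheet
        problems.append((shown, hidden * shown, hidden))
    return problems

# Print a short sample sheet with the answers alongside
for shown, product, hidden in missing_factor_problems(5, seed=1):
    print(f"___ x {shown} = {product}    (answer: {hidden})")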
02 Mar, 2023 11:26 When it comes to learning, parents often don't know how to help their children move from what they are interested in to a wider field. YWIES teachers have their own effective methods, offering a series of courses oriented around children's interests. The learning is expanded into language, society, art, mathematics, science and other areas. Let's follow our teachers Echo and Sophia's Emergent Curriculum in today's article to see how they help children expand their knowledge. Starting with children's interest One morning, when the children were having breakfast, they found a spider weaving a web near the windowsill. Everyone thought that the spider might be hungry and want to eat our breakfast. The following week, the spider worked hard outside the class window every day. Some students warmly called him our "Spider Friend"! The word "friend" did not seem to resonate with all the children, though. Some of them said spiders look ugly and disgusting. Others said spiders know how to weave webs to catch insects and that they can't be pets. As the children's discussion became more and more intense, we decided to start a new journey to understand spiders better, with the classic work of American writer E. B. White, his famous novel Charlotte's Web! Language & social skills After sharing the story of Charlotte's Web, we discussed the personality traits of Charlotte the spider with the children, who expressed themselves using both Chinese and English words such as "friendly, brave, persistent, and compassionate." We also explored important events from the story through questioning, for example: "Do you still feel that spiders are ugly characters who cannot be our friends?" This not only develops children's language skills but also encourages them to develop an awareness of social values and norms. We let the students have ownership over our learning environment; they were involved as co-creators of our learning space and created artistic props such as a pig, a small spider and the lettering for the word "Charlotte" to decorate our learning community, explicitly relating it to our focus story and theme, Charlotte's Web. Charlotte, the little spider in front of the class window, had not visited our class for a long time. The children wondered if Charlotte had a new job to do, or if she had gone to the field to make insect friends. The children's questions inspired a new direction for our learning: why not use the tally counting method we had learnt to count the number of different types of insect in the school playground? Maybe we would find Charlotte at the same time. The children showed great passion and interest in recording and representing data to show the number of different types of insect that can be found in our school playground. After counting the insects with the tally method, we discussed with the children which kinds of insects were the most common in the YWIES playground and which were the least common. We found several spiders, among others. We noticed that the children were somewhat confused when categorising the data in tally form, so we wondered whether it would be clearer to them if we analysed the tally results and represented them as histograms. After introducing and explaining histograms to the children, they created their own bar charts!
In this activity, we saw the children flexibly transform and apply knowledge from the two statistical methods they had learnt, which played a positive role in cultivating their early mathematical reasoning ability. Continuing our theme, through music activities we have been learning the song "Incy Wincy Spider." The children loved the part where the little spider is washed down the water pipe by the rain. We thought we could make a waterproof umbrella for the little spider, so that the spider could keep its body clean and dry and no longer be afraid of the rain! So what kind of material should we choose to make a waterproof umbrella? From this question, we approached a scientific inquiry into the properties of materials and conducted an interesting experiment: making umbrellas for spiders using different materials with varying waterproof properties. The children began an exploration through hands-on practice, observation and comparison. This open-ended activity allowed the children to reflect on the results and draw their own conclusions, and consequently to become more active in their inquiries. Charlotte's Web Theme Challenge To celebrate this wonderful topic, we reviewed our learning in the form of a quiz involving a series of questions and challenges, not only for the children but also for their parents. It was a truly warm and unique experience watching our children collaborate with their families in a friendly and competitive atmosphere. YWIES Beijing ECE has always adhered to inquiry-based learning; it is about being a researcher of learning. Our teachers guide children to explore new knowledge from their interests and expand into multiple fields. In this process, children can think, discover and practice independently and reflectively.
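For readers curious how tally data like the children's insect counts translates into a simple bar chart in code, here is a small illustrative sketch; the counts below are invented for the example and are not the class's actual data.

import matplotlib.pyplot as plt

# Hypothetical tally results from a playground count
counts = {"ants": 14, "butterflies": 6, "beetles": 4, "spiders": 3}

plt.bar(list(counts.keys()), list(counts.values()))
plt.title("Creatures counted on the playground")
plt.ylabel("Number observed")
plt.show()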
A pacemaker is a medical device that delivers electricity to the heart via small electrodes, triggering the heart to beat. A normal human heart contains an intrinsic pacemaker, termed the sinoatrial (SA) node. This group of cells within the heart generates an electrical impulse, usually 60 to 100 times per minute, that stimulates the heart muscle to contract. In certain illnesses, or with age, these cells may become diseased or unable to perform their duty on a consistent basis. This results in bradycardia (a slow heartbeat), which can be treated with the placement of an artificial pacemaker. Understanding the Pacemaker Placement Procedure Following patient sedation, a small incision (approximately 2-3 inches) is made beneath the collar bone. Electrodes (termed "leads") are then inserted into the subclavian vein and passed through this vein to the heart. The electrodes have small screw-like coils on their tips that enable them to be secured into place within the heart muscle. The electrodes are then attached to the pacemaker generator, which is placed underneath the skin, and the skin is closed with sutures. Following implantation, pacemaker function is closely monitored at follow-up visits. During device checks, which may be performed in the physician's office or even at home with telephonic monitoring, detailed information about both the pacemaker (e.g. battery life or frequency of pacing) and the heart itself (e.g. the underlying rhythm) can be obtained, which helps with ongoing management.
Although I, like many teachers, have issues with the Common Core Mathematics Standards, the Standards for Mathematical Practice (SMP) are a step in the right direction. For many students, the content itself may be forgotten through disuse, but consistent emphasis on the SMP from kindergarten to twelfth grade will give them the ability to grapple with whatever mathematics they face beyond the walls of the classroom. My project, The Mathematics of Human Exploration, seeks to address the practices of mathematicians while also developing an explorer mindset. If you are unfamiliar with the SMP, head to this Common Core page; otherwise you can find an abbreviated list below: - Make sense of problems and persevere in solving them. - Reason abstractly and quantitatively. - Construct viable arguments and critique the reasoning of others. - Model with mathematics. - Use appropriate tools strategically. - Attend to precision. - Look for and make use of structure. - Look for and express regularity in repeated reasoning. These eight standards apply as much to pushing mathematical frontiers as to exploring physical geography, partly because the two are not mutually exclusive fields. This project therefore seeks to create a context for making these standards matter. Attending to precision matters less on a worksheet than it does when navigating in the wilderness, seeking the next water source. At the same time, trying to map Earth's remaining wild spaces will be a lot harder without using the appropriate mathematical tools strategically. And the physical demands of exploring nature and communicating finds will lead to countless problems that require reasoning and perseverance. All of this is meant to show that mathematics, exploration, and physical geography have significant overlap, which is what this project is all about.
Member of parliament A member of parliament (MP) is the representative in parliament of the people who live in their electoral district. The terms congressman/congresswoman and deputy mean the same thing in other places or systems. Members of parliament usually form parliamentary groups (sometimes called caucuses) with members of the same political party. In most states, the prime minister and the government ministers who make up the cabinet are often members of parliament themselves.
Courses use a six-character course code for identification. The first five characters of the course code are set out by the Ministry of Education; the sixth character is used by school boards to identify a specific characteristic of the course. (A small parsing sketch appears at the end of this guide.) Characters 1-3: These three letters identify the subject, such as English, Arts, Business, etc. Character 4: This indicator distinguishes the grade level (or the level of English language proficiency for ESL and ELD students): 1 = Grade 9, 2 = Grade 10, 3 = Grade 11, 4 = Grade 12, A/B/C/D/E = level of English proficiency. Character 5: This letter identifies the course type: D = Academic, P = Applied, L = Locally developed, O = Open, U = University, C = College, M = University/College, E = Workplace. Character 6: This character is sometimes added to identify a specific characteristic of the course: I = Immersion, R = Regular, V = E-learning. Defining credit types A credit is granted in recognition of the successful completion of a course for which a minimum of 110 hours of learning time has been scheduled. There is a set of 18 compulsory (mandatory) credits that students must successfully complete in order to meet the requirements for an Ontario Secondary School Diploma (OSSD). Students must also successfully complete 12 optional (additional) credits from areas of interest and/or pathways. Together these credits make up the 30-credit requirement for an OSSD. Defining course types There are several different course types in Grades 9-12. In Grades 11 and 12, students focus more on individual interests and identify and prepare for initial post-secondary goals. Academic courses (D) Academic courses in Grades 9 and 10 focus on the essential concepts of the discipline plus additional material. They develop students' knowledge and skills by emphasizing theoretical and abstract thinking while incorporating practical applications as a basis for future learning and problem solving. Applied courses (P) Applied courses in Grades 9 and 10 focus on the essential concepts of the discipline. They develop students' knowledge and skills by emphasizing practical, concrete applications of the essential concepts while incorporating theoretical elements as appropriate. Familiar, real-life situations are used to illustrate ideas, along with more opportunities to experience practical applications of the concepts studied. Open courses (O) Grades 9-10 Open courses in Grades 9 and 10 are offered in all subjects other than those offered as academic, applied and locally developed. For example, open courses are offered in visual arts, music and health and physical education, but not in English, mathematics, science, French as a second language, history or geography. An open course comprises a set of expectations that is suitable for all students and is not linked to any specific post-secondary destination. These courses are designed to provide students with a broad educational base that will prepare them for their studies in Grades 11 and 12 and for productive participation in society. Locally developed courses (L) Locally developed compulsory credit courses are intended for students who require a measure of flexibility and support in order to meet the compulsory credit requirements in English, mathematics, and science for the Ontario Secondary School Diploma (OSSD) or Ontario Secondary School Certificate. These courses help prepare students for further study in courses from the curriculum policy documents for these disciplines. Interdisciplinary courses in Grade 11 or 12 provide an integrated approach to learning.
These courses are developed by connecting different subjects through themes, issues or problems that require knowledge from the selected areas. For example, an interdisciplinary studies course in small business would integrate studies in technological design and business entrepreneurship. For specific interdisciplinary courses, see Student Services at your school. K courses (K) K courses consist of alternative expectations that are developed to help students with special education needs acquire knowledge and skills that are not represented in the Ontario curriculum. Because they are not part of a subject or course outlined in the provincial curriculum documents, alternative expectations are considered to constitute alternative programs or alternative courses. Some students may remain in secondary school for up to 7 years, with a planned Community Living pathway. Students may experience a specific K course subject area twice in one year and several times over many years. Each experience will be unique, with its own K course code and learning goals consistent with those recorded on the student’s IEP. At the secondary level, the student will not be granted a credit for the successful completion of a K Course that consists of alternative expectations. To transfer from Grade 9 Applied Math to Grade 10 Academic Math, a student must take the transfer course MPM1H. Please consult your guidance counsellor for information regarding this course description. Grade 11 and 12 specific course types Open courses (O) Grades 11-12 Open courses in Grades 11 and 12 are appropriate for all students. These courses allow students to broaden their knowledge and skills in a particular subject that may or may not be directly related to their post-secondary goals, but that reflect their interests. University (U) Grades 11-12 University preparation courses provide students with the knowledge and skills they need to meet university entrance requirements. Courses emphasize theoretical aspects of the subject and also consider related applications. College (C) Grades 11-12 College preparation courses provide students with the knowledge and skills needed to meet the entrance requirements for most college programs and possible apprenticeships. Courses focus on practical applications and also examine underlying theories. University/College (M) Grades 11-12 University/College preparation courses are offered to prepare students to meet the entrance requirements of certain university and college programs. They focus on both theory and practical applications. Workplace (E) Grades 11-12 Workplace preparation courses prepare students to move directly into the workplace after high school or to be admitted into select apprenticeship programs or other training programs in the community. Courses focus on employment skills and on practical workplace applications of the subject content. Many workplace preparation courses involve cooperative education and work experience placements, which allow students to get practical experience in a workplace. Selecting courses using myBlueprint All of the course offerings at the OCSB are connected to the mobile-friendly myBlueprint Education Planner. All of the course descriptions for courses offered at your school are available once logged in to your myBlueprint account. Students are required to use this online tool to make their course selections. 
By assigning specific tasks to complete at each grade level, the tool takes a step-by-step approach to helping students reflect upon their skills, explore their interests, and set their goals. Students should use it often to explore the options available to them at each stage of their educational journey. So log in and get started designing your future! Create a parent account Parents are welcome to create an account to explore the features of myBlueprint. Once you log in, you can review the tool's features and even link to your child's account to view their planning progress. Course Descriptions Guide Set aside some dedicated time to review the course offerings, course descriptions, and prerequisites outlined in our online course descriptions pages, found in the table below. Please remember that not all courses are offered at every OCSB high school. Consult your myBlueprint account for the course offerings at your school.
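To make the six-character course code structure described earlier concrete, here is a small illustrative parser in Python; the lookup tables cover only the values listed in this guide, and the sample code ENG2DR is hypothetical.

GRADE = {"1": "Grade 9", "2": "Grade 10", "3": "Grade 11", "4": "Grade 12"}
COURSE_TYPE = {"D": "Academic", "P": "Applied", "L": "Locally developed",
               "O": "Open", "U": "University", "C": "College",
               "M": "University/College", "E": "Workplace"}
BOARD = {"I": "Immersion", "R": "Regular", "V": "E-learning"}

def parse_course_code(code: str) -> dict:
    """Decode a six-character Ontario course code such as 'ENG2DR' (hypothetical)."""
    return {
        "subject": code[:3],   # e.g. ENG = English
        "grade": GRADE.get(code[3], f"English proficiency level {code[3]}"),  # A-E for ESL/ELD
        "type": COURSE_TYPE.get(code[4], "unknown"),
        "board_characteristic": BOARD.get(code[5], code[5]) if len(code) > 5 else "",
    }

print(parse_course_code("ENG2DR"))
# {'subject': 'ENG', 'grade': 'Grade 10', 'type': 'Academic', 'board_characteristic': 'Regular'}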
What is and isn't Bullying By Dru Ahlborg, BRRC Executive Director Bullying is never fun; it's a cruel and terrible thing to do to someone. If you are being bullied, it is not your fault. No one deserves to be bullied, ever. The 2019 National Center for Education Statistics report states that one out of every five students (20.2%) reports being bullied, up from one out of seven students about a decade ago. Additionally, the same study reports that 41% of students who reported being bullied at school indicated that they think the bullying would happen again. Another alarming statistic states that only 46% of bullied students report notifying an adult at school about the incident. Our organization provides education about bullying and how to advocate for children who are targets of bullying. Knowing what bullying is and how to identify it is the first step in properly advocating for it to STOP. It is also important to identify what isn't bullying, and to properly label and help children navigate that behavior as well. The following is adapted from Barbara Coloroso's book The Bully, The Bullied, and the Not-So-Innocent Bystander and provides a quick review of what is bullying and also what is NOT bullying. WHAT IS BULLYING? Barbara's definition of bullying states: "bullying is conscious, willful, deliberate, offensive, malicious, or insulting activity that is intended to humiliate and harm the target while providing the perpetrator(s) pleasure in the target's pain or misery. It often induces fear through further aggression and can create terror. It can be verbal, physical, and/or relational; it can have as its overlay race, ethnicity, religion, gender, sexuality, sexual identity, sexual orientation, physical or mental ability, weight, allergies, or economic status. It can be, and often is, persistent, continual, and repeated over time, but it does not have to be. Once is enough to constitute bullying." Bullying will always include three markers: - Imbalance of power: The bully has more power and/or influence than the bullied. Power imbalances can involve age, size, number of people participating, strength, verbal skills, a higher place on a social or economic ladder, ethnicity or gender. - Intent to harm: The bully intends to inflict emotional and/or physical pain, expects the action to cause hurt, and takes pleasure in causing and witnessing the hurt. The bully means to exclude, taunt and humiliate. - Threat of further aggression: Both the perpetrator(s) and the target know that the bullying can and probably will recur. It generally escalates over time, with the acts of bullying becoming more hurtful and humiliating. When bullying escalates unabated, a fourth element is added: - Terror: When bullying continues to progress, terror is struck in the heart of the targeted child. Once terror is created, the perpetrator(s) can act without fear of recrimination or retaliation. The bullying target is unlikely to fight back or tell anyone about the bullying. WHAT ISN'T BULLYING Before we delve into what isn't bullying, it is incredibly important to note that bullying is often mislabeled as conflict. It is critical that an investigation be completed anytime bullying is reported. If the act includes an imbalance of power (real or perceived), intent to harm, and the threat of further aggression, the act is indeed bullying and needs to be stopped.
Ignorant Faux Pas An ignorant faux pas occurs when a person directs a racist, sexist, ageist, physical-attribute or mental-ability stereotype statement at another person. A faux pas generally does not have the markers of bullying; making these types of statements may stem from ignorance or apathy. Examples of such statements include: "He runs like a girl." "She's dumb as a doornail." "That's so retarded." Statements like these are indeed hurtful. With children in particular, these crude and offensive statements can be learned at home, at school or through the media. Adults in charge can educate children about the stereotype or bias that the statement conveys and the impact it can have on others. Coaching can be used to let the child know that this type of statement isn't tolerated and to help them find more intelligent and creative ways to express themselves. These statements are crude and offensive and can set the stage for bullying; however, at this point, they are not bullying. Impulsive Aggression Aggression that is spontaneous, indiscriminate striking out with no intended target is not bullying either. These aggressive acts are usually reactionary and emotionally charged, and many times they are related to a physical or mental disability. They should not be dismissed or excused. It should also be noted that impulsive aggression may be the response of a child who is being bullied; that is not bullying, but a reaction to being bullied. Barbara Coloroso states that conflict is "normal, natural and a necessary part of our lives."
Sponges are among the most mysterious creatures in our Coastal Bays. They stay stationary for most of their lives and look so much like plants that it's hard to believe they are animals! However, these seemingly simple organisms play a big part in keeping marine ecosystems healthy, and we are lucky enough to have some right in our Coastal Bays! Sponges spend the first few days of their lives as tiny larvae floating through the water until they can find a substrate that will serve as their forever home. It seems like a big commitment for a three-day-old, but it is an important moment in every sponge's life. Hard surfaces such as rocks and pilings are most suitable for a sponge to anchor itself to. Once a sponge is anchored, it's ready to grow! Despite their peculiar nature, sponges are quite simple creatures. They are invertebrates (meaning they have no backbone) and have no specialized organs, such as a heart or lungs. Their skeletons are made of a soft material called spongin, which is a form of collagen, and their skin is leathery with many pores. These pores serve an important function for the sponge, as they act as the entrances for its food. Sponges are filter feeders, which means they feed by taking water into their bodies and picking out the edible particles in the water. Edible particles for the sponge include plankton, viruses, detritus, and bacteria. They are also able to absorb nutrients and oxygen through their pores. Wastewater is then expelled from their bodies through an opening called the osculum. The process of filter feeding helps to clean and clarify the water, which is why filter feeders are often considered ecosystem engineers! Other notable filter feeders in the Coastal Bays include oysters, clams, mussels, and a small, silvery fish called the menhaden. While we appreciate sponges for simply existing, they also act as a food source and provide structured habitat for many sea creatures. It is always important to leave sponges be, as they are quite sensitive to their environment. Sponges are rarely found free-floating and rely on staying anchored to their substrate. Since sponges are filter feeders, they take in about 20 times their volume in water every minute. This means their pores can easily become blocked with stirred-up sediment, or with air if they are taken out of the water. All the water that sponges take in also makes them particularly sensitive to pollutants or toxins, and they can be important indicators of water quality. Red Beard Sponge We have several species of sponge that live in the Coastal Bays, including the sulphur (or boring) sponge, the red beard sponge, the halichondria sponge, and the finger sponge. You may recognize the red beard sponge, as it is easy to spot if the water is clear enough. It is perhaps the easiest sponge to spot in our bays, boasting a bright, fiery orange color with many branching fingers that give it a bushy appearance. Red beard sponges start out encrusting their substrate, looking like a large splotch of orange-red. They can also withstand lower salinities and may be found in waters with higher inputs of freshwater. Fun fact! Red beard sponges can recreate themselves even after intense disturbances. This was discovered after scientists squeezed a sponge through a fine mesh and the separated cells crept along to find each other. From this newly reformed mass, the cells were able to reproduce and regrow the sponge, making it the first animal observed to exhibit this behavior.
Sulphur (Boring) Sponge Though sulphur, or boring, sponges are not quite as easy to spot, there is an indicator that makes it easy to know one has been around in the past. Have you ever picked up a shell and seen many small holes in it? These holes are a tell-tale sign that a boring sponge was once present on it. Boring sponges create space to grow throughout a shell by using acid to etch tiny tunnels. While the sponge does not directly harm the animal living in the shell, the animal often dies as a result of a weakened shell. These sponges are yellowish in color and smell like sulphur when broken apart. Another yellow sponge present in our bays is the halichondria sponge. Since these sponges can withstand drying out better than other sponges, they can be found in very shallow intertidal zones. If you keep your eyes peeled, you may just be able to spot one while walking along a shoreline or dock. Similar to the boring sponge, these sponges also give off a distinct smell, one that has been compared to gunpowder! They likely give off this odor to ward off predators. Continuing our sponge expedition, we have to look to slightly deeper waters for the finger sponge. These sponges can grow up to 30 centimeters tall and have long, velvety branches. These branches are what give it the common name of "mermaid's glove." Their colors can range anywhere from light brown to yellow or reddish. There are many forms in which these branches can grow, which is why, long ago, when scientists were trying to classify them, they classified each branching arrangement as a different species! Sponges are certainly unlike many other beloved creatures in our Coastal Bays, but they are special and important all the same! I hope this information demystifies the sponge for you, as they deserve some recognition for their hard work clarifying our waters, and while doing it, they add a nice pop of color to the shallows. *Information obtained from the Maryland Coastal Bays Fisheries Identification Guide. Cailyn Joseph is a seasonal scientist and educator with the Maryland Coastal Bays Program. Cailyn works with both the science and education teams on programs such as wetland assessments, data entry, summer camp facilitation, lesson design, bird monitoring, public seining programs, and more. She is currently working on a fisheries heritage project called "Voices of the Coastal Bays" that will feature the history and culture of the OC Fisherman's Marina located in West Ocean City, as well as highlight the vibrant stories of the local fishermen and women that operate out of the marina. Cailyn graduated from Salisbury University in May 2021 with a B.A. in environmental studies and a B.S. in biology.
Calculating the percent of a number is simple, but can be a bit tricky if you aren't careful. Luckily, it only requires a few basic operations to get to the solution: multiplication and division. If you haven't learned what percentage is yet, or would like a little refresher, feel free to check out our introduction to percentage page. To solve percentage problems, it may be useful to use a calculator. But, they can also be solved by hand or in your head (if you practice enough). So, how did we get to the solution that 41 percent of 183 = 75.03? - Step 1: As we know, a whole of something is equal to 100%. In this case, we want to find what 41% of 183 is. We know that 100% of 183 is, well, just 183. - Step 2: If 100% of 183 is 183, then we can get 1% of 183 by dividing it by 100. Let's do 183 / 100. This is equal to 1.83. Now we know that 1% is 1.83. - Step 3: Now that we know what 1% of 183 is, we just need to multiply it by 41 to get our solution! 1.83 times 41 = 75.03. That's all there is to it! When is this useful? Percentage is one of the most commonly used math concepts in day-to-day life. You can use it to calculate a gratuity on a restaurant bill, or to grade your score on an exam. It is useful to know your percentages well! Understanding Percentage Increase and Decrease Another important application of percentage calculations is understanding how to calculate percentage increases and decreases. When you want to find out how much something has increased or decreased in percentage terms, you can use the following formula: Percentage Change = (New Value - Old Value) / Old Value × 100% Percentage change is used in various fields such as finance, economics, and science to measure the growth or decline of a specific value. It helps us to better understand and compare the changes in values over time. Real-World Applications of Percentage Calculations Here are some common real-world applications of percentage calculations: - Discounts: When shopping, you may encounter discounts offered by stores. You can calculate the final price of an item after applying the percentage discount. - Interest Rates: Banks and financial institutions use percentage calculations to determine the interest rate on loans or savings accounts. Knowing how to calculate interest can help you make informed decisions about your finances. - Tax Rates: Tax rates are often expressed as a percentage. Being able to calculate the amount of tax you need to pay based on a given percentage can help you better manage your personal or business finances. - Data Analysis: In data analysis, you may need to calculate the percentage change between two values or the percentage of a specific value in a dataset. This can provide valuable insights and make data-driven decisions. Tips for Mastering Percentage Calculations Here are some tips to help you become proficient in percentage calculations: - Practice: The more you practice percentage problems, the better you'll get at them. Try solving different types of percentage problems to improve your skills. - Understand the Concept: Make sure you have a clear understanding of the concept of percentages and how they work. This will make it easier to apply percentage calculations to real-world problems. - Use Tools: There are many tools available, such as calculators and online resources, that can help you solve percentage problems. Make use of these tools to double-check your answers or to practice solving problems. 
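The three-step method above condenses into a couple of one-line helpers. Here is a minimal Python sketch of the percent-of calculation and the percentage-change formula just described:

def percent_of(percent, whole):
    """Steps 1-3 condensed: find 1% of the whole, then multiply by the percent."""
    return (whole / 100) * percent

def percent_change(old, new):
    """(New Value - Old Value) / Old Value * 100, as in the formula above."""
    return (new - old) / old * 100

print(percent_of(41, 183))       # about 75.03
print(percent_change(100, 120))  # 20.0, a 20% increase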
Understanding how to compare percentages is important when making decisions or evaluating options. Let's use the example of 41% of 183 to demonstrate how to compare percentages. - Greater Than: Is 41% of 183 greater than some other percentage of 183? To compare, calculate the other percentage and see which value is larger. - Less Than: Is 41% of 183 less than another percentage of 183? Perform the same comparison as above, but check if the value is smaller instead. - Equal To: Is 41% of 183 equal to another percentage of 183? Calculate both percentages and compare the values to determine if they are equal. Percentage Increase and Decrease Percentage increase and decrease are essential concepts for understanding how values change over time. Here's how to calculate them using our example numbers: - Percentage Increase: To calculate the percentage increase from an original value to a new value, divide the difference between the new and original values by the original value, and then multiply by 100. For example, increasing 183 by 41% gives 183 + (183 × 41/100) = 258.03, and checking the increase confirms it: (258.03 - 183) / 183 × 100 = 41%. - Percentage Decrease: To calculate the percentage decrease, follow the same process, but with a new value below the original. Decreasing 183 by 41% gives 183 - (183 × 41/100) = 107.97, and (183 - 107.97) / 183 × 100 = 41%. Converting Percentages to Fractions and Decimals Percentages can be converted to fractions and decimals for various mathematical operations or to express values in different forms. Here's how to convert 41% to a fraction and a decimal: - Percentage to Fraction: To convert 41% to a fraction, simply write 41 as the numerator and 100 as the denominator, then simplify the fraction if possible (41/100 is already in lowest terms, since 41 is prime). - Percentage to Decimal: To convert 41% to a decimal, divide 41 by 100, giving 0.41.
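Continuing the 41%-of-183 example, the increase, decrease, and conversion rules above can be written as short helpers; this is a sketch, using Python's Fraction type for the fraction form.

from fractions import Fraction

def increase_by_percent(value, percent):
    """Grow a value by the given percent."""
    return value * (1 + percent / 100)

def decrease_by_percent(value, percent):
    """Shrink a value by the given percent."""
    return value * (1 - percent / 100)

print(increase_by_percent(183, 41))  # about 258.03
print(decrease_by_percent(183, 41))  # about 107.97
print(Fraction(41, 100))             # 41/100, already in lowest terms (41 is prime)
print(41 / 100)                      # 0.41, the decimal form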
Learning multiplication right after counting, addition, and subtraction is ideal. Children learn arithmetic through a natural progression: counting, addition, subtraction, multiplication, and finally division. This raises the question: why learn arithmetic in this sequence? Furthermore, why learn multiplication after counting, addition, and subtraction but before division? The following points answer these questions: - Children learn counting first by associating visible objects with their hands. A tangible example: how many apples are in the basket? A more abstract example: how old are you? - From counting numbers, the next logical step is addition, followed by subtraction. Addition and subtraction tables can be very helpful teaching aids for children, since they are visual tools that make the transition from counting easier. - Which should be learned next, multiplication or division? Multiplication is shorthand for addition. At this point, children have a firm grasp of addition, so multiplication is the next logical form of arithmetic to learn. Review the fundamentals of multiplication, and also review the basics of how to use a multiplication table. Let's work through a multiplication example. Using a Multiplication Table, multiply four times three and get an answer of twelve: 4 x 3 = 12. The intersection of row 4 and column 3 of a Multiplication Table is twelve; twelve is the answer. For children starting to learn multiplication, this is straightforward. They can use addition to solve the problem, confirming that multiplication is shorthand for addition. Example: 4 x 3 = 4 + 4 + 4 = 12. This is an excellent introduction to the Multiplication Table. As an added benefit, the Multiplication Table is visual and reinforces addition. Where do we begin learning multiplication using the Multiplication Table? - First, get familiar with the table. - Start with multiplying by one. Start at row number 1. Move to column one. The intersection of row 1 and column 1 is the answer: one. - Repeat these steps for multiplying by one. Multiply row one by columns one through twelve. The answers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12, respectively. - Repeat these steps for multiplying by two. Multiply row two by columns one through five. The answers are 2, 4, 6, 8, and 10, respectively. - Let's jump ahead. Repeat these steps for multiplying by five. Multiply row five by columns one through twelve. The answers are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, and 60, respectively. - Now let's increase the level of difficulty. Repeat these steps for multiplying by three. Multiply row three by columns one through twelve. The answers are 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, and 36, respectively. - When you are comfortable with multiplication so far, try a test. Solve these multiplication problems in your head and then compare your answers to the Multiplication Table: multiply six and two, multiply nine and three, multiply one and eleven, multiply four and four, and multiply seven and two. The answers are 12, 27, 11, 16, and 14, respectively.
If you got four out of five problems right, create your own multiplication tests. Compute the answers in your head, and check them using the Multiplication Table.
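For readers who would rather generate the table than draw it by hand, here is a small Python sketch (entirely my own construction) that prints the 12 x 12 multiplication table described above:

```python
def print_table(size=12):
    """Print a size x size multiplication table with row and column headers."""
    print("    " + "".join(f"{col:4d}" for col in range(1, size + 1)))
    for row in range(1, size + 1):
        cells = "".join(f"{row * col:4d}" for col in range(1, size + 1))
        print(f"{row:4d}{cells}")

print_table()
# Reading it off: the intersection of row 4 and column 3 is 12, i.e. 4 x 3 = 12.
```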
Science at Christopher Hatton school Our aim is for all pupils to be informed, articulate and empowered: In science this means children are engaged and inspired by a well-designed sequential curriculum complemented by a wide range of enrichment opportunities. Science is highly valued as part of our rich curriculum and supports children to acquire the transferable skills, knowledge and cultural capital they need to succeed in life. - Children develop the scientific knowledge and conceptual understanding they need in order to make sense of the world, through the specific disciplines of biology, chemistry and physics. - Children develop understanding of how to work scientifically through different types of scientific enquiries (comparative and fair testing; pattern seeking; identifying, classifying and grouping; observing over time; research using secondary sources) that help them to answer specific questions about the world around them. - Children acquire the scientific knowledge required to understand the uses and implications of science, today and in the future. - Through the teaching of specific science vocabulary and opportunities to discuss their learning, children develop their ability to think critically, evaluate and understand the world. - Children are given sentence stems within which they can frame their ideas and communicate clearly and accurately. - The discoveries, innovations and significant scientists introduced reflect the diversity of our community, enabling pupils to see themselves within the world of science. This supports the children’s belief that they too can be successful scientists. - The focus on having a growth mind-set is essential in the teaching of science, empowering children with the confidence to have a go, to learn from mistakes and to keep trying and improving. This is true of many significant scientists. - Children explore the purposes of science within a context as well as its meaning within their own lives and futures, e.g. exploring the science of climate change, or asking: just because science now enables us to do something, does that mean we should? - In-school workshops (e.g. Zoolab) and visits to places of scientific interest (e.g. The Francis Crick Institute learning laboratory, Science Museum, Hampstead Heath) empower children to understand that the amazing resources we have in London belong to and are open to them. - An appreciation and understanding of how science influences all of our daily lives is essential to the children feeling empowered to make a positive difference to society. - All children, including those who have SEND or are disadvantaged, are supported to fully access the science curriculum. This may include additional adult support or use of visual/actions or Widget symbols. Structured sentence stems and taught vocabulary scaffold children in discussion. - The science scheme of work, developed by staff across all key stages, lays out the sequential steps to be taught so that new knowledge, skills and key science vocabulary build on what has been taught before and pupils can work towards clearly defined high quality outcomes. - Significant scientists, links to key texts (both fiction and non-fiction) and possible trips/workshops are outlined to ensure development of the children’s cultural capital. - Science is taught in a variety of ways depending on what is best for purpose. Where possible, meaningful links are made between science and other areas of the curriculum as part of topic learning.
Science is taught in units, with lessons sometimes blocked to allow immersion in the process, e.g. to complete a full investigation. - Each key strand of science across the three disciplines of biology (plants; animals including humans; living things and their habitats; evolution and inheritance), chemistry (everyday materials; uses of everyday materials; rocks; states of matter; properties and changes of materials) and physics (seasonal changes; Earth and space; light; sound; forces and magnets; electricity) is covered and revisited in line with the National Curriculum so that pupils retain and build upon prior learning. - Long term memory of key science knowledge objectives is supported by this repetition, as well as by interleaving activities such as concept cartoons and mini quizzes. - Specific investigations are plotted for each year group, covering all age/phase appropriate enquiry types (comparative and fair testing; pattern seeking; identifying, classifying and grouping; observing over time; research using secondary sources). Plotting specific investigations across each year group ensures that a child will experience the whole range of enquiry types on their learning journey through the school. - National curriculum working scientifically objectives have been distilled into ten child-friendly science skills - Asking scientific questions; Planning an enquiry; Observing closely; Taking measurements; Gathering and recording results; Presenting results; Interpreting results; Drawing conclusions (KS2 only); Making predictions (KS2 only); and Evaluating an enquiry (KS2 only). These are displayed in each classroom in order to ensure continuity across the school. Children use these to help them understand key investigation skills. - Sentence stems and the investigation frame are used to support children’s understanding of enquiry skills, e.g. stems to support interpretation of results by giving the frame into which the variable and measurable are inserted: “The thicker the string, the lower the pitch.” Using frames supports all children, especially those with SEND, in accessing the science curriculum. - The science lead supports teachers and monitors standards by reviewing planning of units, teaching model lessons, team teaching, talking to children about their science learning and observing lessons. - Children at Christopher Hatton have retained key science knowledge. - Children can accurately use specific science vocabulary to explain their ideas and discuss their learning. - Children are interested in the world around them and have a set of key skills which they can use to investigate it. - Standards set in science are high and children aspire to them. - Children enjoy learning in science and value the subject. They understand its relevance and importance in a real-world context and see themselves as scientists not only in school, but in the future.
1) Concentrating sunlight: A mirrored surface with high specular reflectivity is used to concentrate light from the sun on to a small cooking area. Depending on the geometry of the surface, sunlight can be concentrated by several orders of magnitude, producing temperatures high enough to melt salt and smelt metal. For most household solar cooking applications, such high temperatures are not really required. Solar cooking products are thus typically designed to achieve temperatures of 150 °F (65 °C) (baking temperatures) to 750 °F (400 °C) (grilling/searing temperatures) on a sunny day. 2) Converting light energy to heat energy: Solar cookers concentrate sunlight onto a receiver such as a cooking pan. The interaction between the light energy and the receiver material converts light to heat. This conversion is maximized by using materials that conduct and retain heat. Pots and pans used on solar cookers should be matte black in color to maximize absorption. 3) Trapping heat energy: It is important to reduce convection by isolating the air inside the cooker from the air outside the cooker. Simply using a glass lid on your pot enhances light absorption from the top of the pan and provides a greenhouse effect that improves heat retention and minimizes convection loss. This "glazing" transmits incoming visible sunlight but is opaque to escaping infrared thermal radiation. In resource-constrained settings, a high-temperature plastic bag can serve a similar function, trapping air inside and making it possible to reach temperatures on cold and windy days similar to those possible on hot days. Different kinds of solar cookers use somewhat different methods of cooking, but most follow the same basic principles. Food is prepared as if for an oven or stove top. However, because food cooks faster when it is in smaller pieces, food placed inside a solar cooker is usually cut into smaller pieces than it might otherwise be. For example, potatoes are usually cut into bite-sized pieces rather than roasted whole. For very simple cooking, such as melting butter or cheese, a lid may not be needed and the food may be placed on an uncovered tray or in a bowl. If several foods are to be cooked separately, then they are placed in different containers. The container of food is placed inside the solar cooker, which may be elevated on a brick, rock, metal trivet, or other heat sink, and the solar cooker is placed in direct sunlight. Foods that cook quickly may be added to the solar cooker later. Rice for a mid-day meal might be started early in the morning, with vegetables, cheese, or soup added to the solar cooker in the middle of the morning. Depending on the size of the solar cooker and the number and quantity of cooked foods, a family may use one or more solar cookers. A solar oven is turned towards the sun and left until the food is cooked. Unlike cooking on a stove or over a fire, which may require more than an hour of constant supervision, food in a solar oven is generally not stirred or turned over, both because it is unnecessary and because opening the solar oven allows the trapped heat to escape and thereby slows the cooking process. If desired, the solar oven may be checked every one to two hours, to turn the oven to face the sun more precisely and to ensure that shadows from nearby buildings or plants have not blocked the sunlight.
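Returning to the three principles at the top of this section: a cooker heats up until it re-radiates as much power as it absorbs, so concentration and absorptivity set a rough ceiling on temperature. Below is a deliberately idealized Python sketch of that radiative balance (my own toy model, not from this text; it ignores convection and conduction losses, so real cookers run cooler):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def stagnation_temp(concentration, insolation=1000.0,
                    absorptivity=0.9, emissivity=0.9):
    """Equilibrium temperature (K) where absorbed solar power equals
    re-radiated power: C * I * a = e * sigma * T^4."""
    return (concentration * insolation * absorptivity
            / (emissivity * SIGMA)) ** 0.25

for c in (1, 5, 50):
    print(f"concentration {c:2d}x -> about {stagnation_temp(c) - 273.15:4.0f} deg C")
```

Even this crude model reproduces the trend in the text: modest concentration gives baking temperatures, while strong concentration reaches grilling temperatures and beyond.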
If the food is to be left untended for many hours during the day, then the solar oven is often turned to face the point where the sun will be when it is highest in the sky, instead of towards its current position. The cooking time depends primarily on the equipment being used, the amount of sunlight at the time, and the quantity of food that needs to be cooked. Air temperature, wind, and latitude also affect performance. Food cooks faster in the two hours before and after the local solar noon than it does in either the early morning or the late afternoon. Large quantities of food, and food in large pieces, take longer to cook. As a result, only general figures can be given for cooking time. With a small solar panel cooker, it might be possible to melt butter in 15 minutes, to bake cookies in 2 hours, and to cook rice for four people in 4 hours. With a high-performing parabolic solar cooker, you may be able to grill a steak in minutes. However, depending on local conditions and the solar cooker type, these projects could take half as long, or twice as long. It is difficult to burn food in a solar cooker. Food that has been cooked even an hour longer than necessary is usually indistinguishable from minimally cooked food. The exception to this rule is some green vegetables, which quickly change from a perfectly cooked bright green to olive drab, while still retaining the desirable texture. For most foods, such as rice, the typical person would be unable to tell how it was cooked from looking at the final product. There are some differences, however: Bread and cakes brown on their tops instead of on the bottom. Compared to cooking over a fire, the food does not have a smoky flavor. A box cooker has a transparent glass or plastic top, and it may have additional reflectors to concentrate sunlight into the box. The top can usually be removed to allow dark pots containing food to be placed inside. One or more reflectors of shiny metal or foil-lined material may be positioned to bounce extra light into the interior of the oven chamber. Cooking containers and the inside bottom of the cooker should be dark-colored or black. Inside walls should be reflective to reduce radiative heat loss and bounce the light towards the pots and the dark bottom, which is in contact with the pots. The box should have insulated sides. Thermal insulation for the solar box cooker must be able to withstand temperatures up to 150 °C (300 °F) without melting or out-gassing. Crumpled newspaper, wool, rags, dry grass, sheets of cardboard, etc. can be used to insulate the walls of the cooker. Metal pots and/or bottom trays can be darkened either with flat-black spray paint (one that is non-toxic when warmed), black tempera paint, or soot from a fire. The solar box cooker typically reaches a temperature of 150 °C (300 °F). This is not as hot as a standard oven, but still hot enough to cook food over a somewhat longer period of time. Panel solar cookers are inexpensive solar cookers that use reflective panels to direct sunlight to a cooking pot that is enclosed in a clear plastic bag. Solar oven science experiments are regularly done as projects in high schools and colleges, such as the "Solar Oven Throwdown" at the University of Arizona. These projects show that it is possible both to achieve high temperatures and to predict those temperatures using mathematical models. Parabolic solar cookers concentrate sunlight to a single point.
When this point is focused on the bottom of a pot, it can heat the pot quickly to very high temperatures which can often be comparable with the temperatures achieved in gas and charcoal grills. These types of solar cookers are widely used in several regions of the world, most notably in China and India where hundreds of thousands of families currently use parabolic solar cookers for preparing food and heating water. Some parabolic solar cooker projects in China abate between 1 and 4 tons of carbon dioxide per year and receive carbon credits through the Clean Development Mechanism (CDM) and Gold Standard. Some parabolic solar cookers incorporate cutting-edge materials and designs which lead to solar energy efficiencies >90%. Others are large enough to feed thousands of people each day, such as the solar bowl at Auroville in India, which makes 2 meals per day for 1,000 people. If a reflector is axially symmetrical and shaped so its cross-section is a parabola, it has the property of bringing parallel rays of light (such as sunlight) to a point focus. If the axis of symmetry is aimed at the sun, any object that is located at the focus receives highly concentrated sunlight, and therefore becomes very hot. This is the basis for the use of this kind of reflector for solar cooking. Paraboloids are compound curves, which are more difficult to make with simple equipment than single curves. Although paraboloidal solar cookers can cook as well as or better than a conventional stove, they are difficult to construct by hand. Frequently, these reflectors are made using many small segments that are all single curves which together approximate compound curves. Although paraboloids are difficult to make from flat sheets of solid material, they can be made quite simply by rotating open-topped containers which hold liquids. The top surface of a liquid which is being rotated at constant speed around a vertical axis naturally takes the form of a paraboloid. Centrifugal force causes material to move outward from the axis of rotation until a deep enough depression is formed in the surface for the force to be balanced by the levelling effect of gravity. It turns out that the depression is an exact paraboloid. (See Liquid mirror telescope.) If the material solidifies while it is rotating, the paraboloidal shape is maintained after the rotation stops, and can be used to make a reflector. This rotation technique is sometimes used to make paraboloidal mirrors for astronomical telescopes, and has also been used for solar cookers. Devices for constructing such paraboloids are known as rotating furnaces. Paraboloidal reflectors generate high temperatures and cook quickly, but require frequent adjustment and supervision for safe operation. Several hundred thousand exist, mainly in China. They are especially useful for individual household and large-scale institutional cooking. A Scheffler cooker (named after its inventor, Wolfgang Scheffler) uses a large, ideally paraboloidal reflector which is rotated around an axis that is parallel with the earth's axis, using a mechanical mechanism, turning at 15 degrees per hour to compensate for the earth's rotation. The axis passes through the reflector's centre of mass, allowing the reflector to be turned easily. The cooking vessel is located at the focus which is on the axis of rotation, so the mirror concentrates sunlight onto it all day. The mirror has to be occasionally tilted about a perpendicular axis to compensate for the seasonal variation in the sun's declination.
This perpendicular axis does not pass through the cooking vessel. Therefore, if the reflector were a rigid paraboloid, its focus would not remain stationary at the cooking vessel as the reflector tilts. To keep the focus stationary, the reflector's shape has to vary. It remains paraboloidal, but its focal length and other parameters change as it tilts. The Scheffler reflector is therefore flexible, and can be bent to adjust its shape. It is often made up of a large number of small plane sections, such as glass mirrors, joined together by flexible plastic. A framework that supports the reflector includes a mechanism that can be used to tilt it and also bend it appropriately. The mirror is never exactly paraboloidal, but it is always close enough for cooking purposes. Sometimes the rotating reflector is located outdoors and the reflected sunlight passes through an opening in a wall into an indoor kitchen, often a large communal one, where the cooking is done. Paraboloidal reflectors that have their centres of mass coincident with their focal points are useful. They can be easily turned to follow the sun's motions in the sky, rotating about any axis that passes through the focus. Two perpendicular axes can be used, intersecting at the focus, to allow the paraboloid to follow both the sun's daily motion and its seasonal one. The cooking pot stays stationary at the focus. If the paraboloidal reflector is axially symmetrical and is made of material of uniform thickness, its centre of mass coincides with its focus if the depth of the reflector, measured along its axis of symmetry from the vertex to the plane of the rim, is 1.8478 times its focal length. The radius of the rim of the reflector is 2.7187 times the focal length. The angular radius of the rim, as seen from the focal point, is 72.68 degrees. Parabolic troughs are used to concentrate sunlight for solar-energy purposes. Some solar cookers have been built that use them in the same way. Generally, the trough is aligned with its focal line horizontal and east-west. The food to be cooked is arranged along this line. The trough is pointed so its axis of symmetry aims at the sun at noon. This requires the trough to be tilted up and down as the seasons progress. At the equinoxes, no movement of the trough is needed during the day to track the sun. At other times of year, there is a period of several hours around noon each day when no tracking is needed. Usually, the cooker is used only during this period, so no automatic sun tracking is incorporated into it. This simplicity makes the design attractive, compared with using a paraboloid. Also, being a single curve, the trough reflector is simpler to construct. However, it suffers from lower efficiency. It is possible to use two parabolic troughs, curved in perpendicular directions, to bring sunlight to a point focus as does a paraboloidal reflector. The incoming light strikes one of the troughs, which sends it toward a line focus. The second trough intercepts the converging light and focuses it to a point. Compared with a single paraboloid, using two partial troughs has important advantages. Each trough is a single curve, which can be made simply by bending a flat sheet of metal. Also, the light that reaches the targeted cooking pot is directed approximately downward, which reduces the danger of damage to the eyes of anyone nearby. On the other hand, there are disadvantages.
More mirror material is needed, increasing the cost, and the light is reflected by two surfaces instead of one, which inevitably increases the amount that is lost. The two troughs are held in a fixed orientation relative to each other by being both fixed to a frame. The whole assembly of frame and troughs has to be moved to track the sun as it moves in the sky. Commercially made cookers that use this method are available. Spherical reflectors operate much like paraboloidal reflectors, such that the axis of symmetry is pointed towards the sun so that light is concentrated to a focus. However, the focus of a spherical reflector will not be a point focus because it suffers from a phenomenon known as spherical aberration. Some concentrating dishes (such as satellite dishes) that do not require a precise focus opt for a spherical curvature over a paraboloid. If the radius of the rim of a spherical reflector is small compared with the radius of curvature of its surface (the radius of the sphere of which the reflector is a part), the reflector approximates a paraboloidal one with focal length equal to half of the radius of curvature. Evacuated tube solar cookers are essentially a vacuum sealed between two layers of glass. The vacuum allows the tube to act both as a "super" greenhouse and an insulator. The central cooking tube is made from borosilicate glass, which is resistant to thermal shock, and has a vacuum beneath the surface to insulate the interior. The inside of the tube is lined with copper, stainless steel, and aluminum nitride to better absorb and conduct heat from the sun's rays. Some vacuum tube solar cookers incorporate lightweight designs which allow great portability (such as the GoSun stove). Portable vacuum tube cookers such as the GoSun allow users to cook freshly caught fish on the beach without needing to light a fire. Advantages: Evacuated tube cookers can reach temperatures above 290 °C (550 °F). They can be used to grill meats, stir-fry vegetables, make soup, bake bread, and boil water in minutes. Conventional solar box cookers attain temperatures up to 165 °C (325 °F). They can sterilize water or prepare most foods that can be made in a conventional oven or stove, including bread, vegetables and meat over a period of hours. Solar cookers use no fuel. This saves cost as well as reducing environmental damage caused by fuel use. Since 2.5 billion people cook on open fires using biomass fuels, solar cookers could have large economic and environmental benefits by reducing deforestation. When solar cookers are used outside, they do not contribute inside heat, potentially saving fuel costs for cooling as well. Any type of cooking may evaporate grease, oil, and other material into the air, hence there may be less cleanup. Disadvantages: Solar cookers are less useful in cloudy weather and near the poles (where the sun is low in the sky or below the horizon), so an alternative cooking source is still required in these conditions. Solar cooking advocates suggest three devices for an integrated cooking solution: a) a solar cooker; b) a fuel-efficient cookstove; c) an insulated storage container such as a basket filled with straw to store heated food. Very hot food may continue to cook for hours in a well-insulated container. With this three-part solution, fuel use is minimized while still providing hot meals at any hour, reliably. Some solar cookers, especially solar ovens, take longer to cook food than a conventional stove or oven.
Using solar cookers may require food preparation to start hours before the meal. However, they require less hands-on time during the cooking, so this is often considered a reasonable trade-off. Cooks may need to learn special cooking techniques to fry common foods, such as fried eggs or flatbreads like chapatis and tortillas. It may not be possible to safely or completely cook some thick foods, such as large roasts, loaves of bread, or pots of soup, particularly in small panel cookers; the cook may need to divide these into smaller portions before cooking. Some solar cooker designs are affected by strong winds, which can slow the cooking process, cool the food due to convective losses, and disturb the reflector. It may be necessary to anchor the reflector, such as with string and weighted objects like bricks. Cardboard, aluminium foil, and plastic bags for well over 10,000 solar cookers have been donated to the Iridimi and Touloum refugee camps in Chad by the combined efforts of the Jewish World Watch, the Dutch foundation KoZon, and Solar Cookers International. The refugees construct the cookers themselves, using the donated supplies and locally purchased Arabic gum. The project has also significantly reduced the amount of time women spend tending open fires each day, with the result that they are healthier and have more time to grow vegetables for their families and make handicrafts for export. By 2007, the Jewish World Watch had trained 4,500 women and had provided 10,000 solar cookers to refugees. The project has also reduced the number of foraging trips by as much as 70 percent, thus reducing the number of attacks. Some Gazans have started to make solar cookers from cement bricks and mud mixed with straw and two sheets of glass, including some made with mirrors. About 40 to 45 Palestinian households reportedly have started using these solar cookers. Bysanivaripalle, a silk-producing village 125 km (78 mi) northwest of Tirupati in the Indian state of Andhra Pradesh, is the first of its kind: an entire village that uses only solar cooking. Thousands of parabolic solar cookers produced by One Earth Designs are used on the Himalayan Plateau in China to reduce dependence on biomass fuels like wood and yak dung.
35 years later: Voyager 1's journey to the edge of the solar system Voyager 1 is pretty unremarkable, technologically speaking. For starters, it only has 68 kilobytes of memory; an iPod Nano has over 16 million. The probe's radio system, imaging system, and infrared interferometer spectrometer fizzled out long ago. Voyager 1 even has a Golden Record onboard (remember those?) featuring a voice message from then-President Jimmy Carter, just in case any intelligent life forms come across it. What is remarkable is the fact that the spacecraft is still chugging along after 35 years in space. Its mission, when it first launched from Cape Canaveral on Sept. 5, 1977, was supposed to conclude in 1980 after it took a few close-up shots of Saturn and its moons. But it has enough plutonium to power it easily into the 2020s. Yesterday, a new study claimed that the probe had exited the confines of our solar system, which would make it the first ever Earth-built object to do so. (Its sister probe, Voyager 2, isn't too far behind.) According to Rebecca J. Rosen at The Atlantic, Voyager 1 indeed experienced "dramatic changes" last August when it demonstrated a "sharp drop-off in solar particles hitting the probe." But it hasn't left the solar system just yet, according to NASA: The Voyager team is aware of reports today that NASA's Voyager 1 has left the solar system. It is the consensus of the Voyager science team that Voyager 1 has not yet left the solar system or reached interstellar space. In December 2012, the Voyager science team reported that Voyager 1 is within a new region called 'the magnetic highway' where energetic particles changed dramatically. A change in the direction of the magnetic field is the last critical indicator of reaching interstellar space and that change of direction has not yet been observed. [Mashable] So where is it, exactly? In all likelihood it's 11 billion miles away from our sun, just outside an area of space called the heliosphere, a blustery region stirred by our sun and its solar winds. "It's outside the normal heliosphere," Bill Webber, a professor of astronomy at New Mexico State University, tells the Guardian. "I would say that. We're in a new region. And everything we're measuring is different and exciting." Translation: We're not quite out of the woods yet. But we're getting close. An annotated image of the Voyager space probe and all its various parts. (NASA/Hulton Archive/Getty Images)
Damage to the cartilage in any joint of the body can have harmful effects on the function of that joint. But first of all, what is cartilage? Cartilage is a connective tissue found in several parts of the body. Though it is a tough and flexible material, it is relatively easy to damage. Cartilage is a firm, rubbery tissue which acts as a pad between the bones of joints. People with cartilage damage usually experience joint pain, stiffness, and swelling. Now, did you know that cartilage has numerous functions in the human body? Below are some features of cartilage: - It lessens friction and acts as a cushion between joints, and helps support our weight when we are stretching, bending, and running. - It holds bones together, for example, the bones found in the ribcage. - Certain body parts are made almost entirely of cartilage, for instance, the exterior portions of our ears. - In kids, the ends of the long bones are a form of cartilage, which ultimately turns into bone. When damage to the cartilage happens, the patient will eventually experience severe pain, inflammation, and some degree of disability – this is known as articular cartilage damage. The National Institutes of Health (NIH) reports that one-third of American adults aged over 45 experience this kind of knee pain. What are the symptoms? Damage to the cartilage in a joint (articular cartilage damage) will cause: - Inflammation or swelling – the area swells, becomes warmer than other parts of the body, and is tender, aching, and sore. - Range limitation – as the damage progresses, the affected limb will not move as smoothly and effortlessly as before. Articular cartilage damage most usually happens in the knee; however, the elbow, wrist, ankle, hip joint, and shoulder can also be affected. In significant cases of damage to the cartilage, a piece of cartilage can break off, and the area may become spotty and have a discolored appearance. Always be careful when practicing sports. If you have a weak ankle, wear a tennis ankle brace in order to avoid damaging your cartilage in the future. How is it diagnosed? Spotting the difference between cartilage damage in the knee and a sprain or ligament damage is a bit hard because the symptoms can be alike. But modern non-invasive tests make the work easier than it used to be. Right after the physical examination, the doctor will order the following diagnostic tests: - Magnetic resonance imaging (MRI) – this examination uses a magnetic field and radio waves to take a detailed image of the body. Though useful, an MRI cannot always detect cartilage damage. - Arthroscopy – a tube-like instrument (an arthroscope) is inserted into a joint to inspect and repair it. This process can help to determine the degree of cartilage damage. What are the complications? You should not leave damaged cartilage in a joint untreated, especially in a weight-bearing joint such as the knee, because it can ultimately become so damaged that the individual may not be able to walk. Apart from stiffness, the pain may gradually get worse. Small articular cartilage defects can, given sufficient time, lead to osteoarthritis. Suggested Exercises for the patient Patients with this type of injury are recommended to do exercises that are appropriate for the individual to strengthen the muscles around the joint.
These exercises will decrease pressure on the area with the injured cartilage. The Arthritis Foundation suggests: - gentle stretching to maintain range of motion and flexibility - aerobic exercise and endurance training to attain or maintain a healthy weight and improve mood and stamina - strengthening exercises to build up the muscles around the joints Although exercise provides several benefits, it is unlikely to result in a regeneration of cartilage.
Marx and his coauthor, Friedrich Engels, begin The Communist Manifesto with the famous and provocative statement that the “history of all hitherto existing societies is the history of class struggle.” They argue that all changes in the shape of society, in political institutions, in history itself, are driven by a process of collective struggle on the part of groups of people with similar economic situations in order to realize their material or economic interests. These struggles, occurring throughout history from ancient Rome through the Middle Ages to the present day, have been struggles of economically subordinate classes against economically dominant classes who opposed their economic interests—slaves against masters, serfs against landlords, and so on. The modern industrialized world has been shaped by one such subordinate class—the bourgeoisie, or merchant class—in its struggle against the aristocratic elite of feudal society. Through world exploration, the discovery of raw materials and metals, and the opening of commercial markets across the globe, the bourgeoisie, whose livelihood is accumulation, grew wealthier and politically emboldened against the feudal order, which it eventually managed to sweep away through struggle and revolution. The bourgeoisie has risen to the status of dominant class in the modern industrial world, shaping political institutions and society according to its own interests. Far from doing away with class struggle, this once subordinate class, now dominant, has replaced one class struggle with another. The bourgeoisie is the most spectacular force in history to date. The merchants’ zeal for accumulation has led them to conquer the globe, forcing everyone everywhere to adopt the capitalist mode of production. The bourgeois view, which sees the world as one big market for exchange, has fundamentally altered all aspects of society, even the family, destroying traditional ways of life and rural civilizations and creating enormous cities in their place. Under industrialization, the means of production and exchange that drive this process of expansion and change have created a new subordinate urban class whose fate is vitally tied to that of the bourgeoisie. This class is the industrial proletariat, or modern working class. These workers have been uprooted by the expansion of capitalism and forced to sell their labor to the bourgeoisie, a fact that offends them to the core of their existence as they recall those workers of earlier ages who owned and sold what they created. Modern industrial workers are exploited by the bourgeoisie and forced to compete with one another for ever-shrinking wages as the means of production grow more sophisticated. The factory is the arena for the formation of a class struggle that will spill over into society at large. Modern industrial workers will come to recognize their exploitation at the hands of the bourgeoisie. Although the economic system forces them to compete with one another for ever-shrinking wages, through common association on the factory floor they will overcome the divisions between themselves, realize their common fate, and begin to engage in a collective effort to protect their economic interests against the bourgeoisie. The workers will form collectivities and gradually take their demands to the political sphere as a force to be reckoned with.
Meanwhile, the workers will be joined by an ever-increasing number of the lower middle class whose entrepreneurial livelihoods are being destroyed by the growth of huge factories owned by a shrinking number of superrich industrial elites. Gradually, all of society will be drawn to one or the other side of the struggle. Like the bourgeoisie before them, the proletariat and their allies will act together in the interests of realizing their economic aims. They will move to sweep aside the bourgeoisie and its institutions, which stand in the way of this realization. The bourgeoisie, through its established mode of production, produces the seeds of its own destruction: the working class. The Communist Manifesto was intended as a definitive programmatic statement of the Communist League, a German revolutionary group of which Marx and Engels were the leaders. The two men published their tract in February 1848, just months before much of Europe was to erupt in social and political turmoil, and the Manifesto reflects the political climate of the period. In the summer of that year, youthful revolutionary groups, along with the urban dispossessed, set up barricades in many of Europe’s capitals, fighting for an end to political and economic oppression. While dissenters had been waging war against absolutism and aristocratic privilege since the French Revolution, many of the new radicals of 1848 set their sights on a new enemy that they believed to be responsible for social instability and the growth of an impoverished urban underclass. That enemy was capitalism, the system of private ownership of the means of production. The Manifesto describes how capitalism divides society into two classes: the bourgeoisie, or capitalists who own these means of production (factories, mills, mines, etc.), and the workers, who sell their labor power to the capitalists, who pay the workers as little as they can get away with. Although the Communist League was itself apparently too disorganized to contribute much to the 1848 uprisings, the Communist Manifesto is a call to political action, containing the famous command, “Workers of the world unite!” But Marx and Engels also used the book to spell out some of the basic truths, as they saw it, about how the world works. In the Communist Manifesto we see early versions of essential Marxist concepts that Marx would elaborate with more scientific rigor in mature writings such as Das Kapital. Perhaps most important of these concepts is the theory of historical materialism, which states that historical change is driven by collective actors attempting to realize their economic aims, resulting in class struggles in which one economic and political order is replaced by another. One of the central tenets of this theory is that social relationships and political alliances form around relations of production. Relations of production depend on a given society’s mode of production, or the specific economic organization of ownership and division of labor. A person’s actions, attitudes, and outlook on society and his politics, loyalties, and sense of collective belonging all derive from his location in the relations of production. History engages people as political actors whose identities are constituted as exploiter or exploited, who form alliances with others likewise identified, and who act based on these identities. 
This repository/course is intended to familiarize the participants with the main structural properties of PDE (Present-Day English) grammar. It starts with a review of linguistic approaches to grammar and then discusses the central grammatical aspects of PDE sentence structure: syntactic categories and syntactic functions. A unit on PDE orthography concludes the course. Thus, it is the class for everyone who is involved in teaching or studying English in one way or another. This repository/course covers the history of the English language from its remote Indo-European origins (and even before) to the present day. It provides substantial information about the English language at different periods and applies the main theoretical and technical concepts of historical linguistics, taking into account recent work in historical and general linguistics. To satisfy the needs of the current secondary school curriculum in many countries, its main focus, however, is Early Modern English. In order to successfully apply the main concepts of historical linguistics, it is recommended to work through the VLC202 repository "Historical Linguistics" beforehand or in parallel with VLC203. Special emphasis will be put on practical aspects, such as reading and analyzing texts from different periods of English.
Compound predicate definition: A compound predicate consists of two or more verbs or verb phrases that are joined by a conjunction. What is a Compound Predicate? What does compound predicate mean? A compound predicate consists of two or more verbs or verb phrases that are joined by a conjunction. A compound predicate provides two or more details about the same subject. These details must use more than one verb or verb phrase. The verbs or verb phrases are joined by a conjunction. Simple Predicate Example: - Sari visited South Africa. This example only has one verb/verb phrase, “visited South Africa.” Compound Predicate Example: - Sari visited South Africa and met her extended family. This example has two verb phrases, “visited South Africa” and “met her extended family.” The conjunction “and” connects the two verb phrases. Subject and Predicate: What is the Difference? A subject is the noun “doing” the action in the sentence. A predicate is the verb that the subject (noun) is “doing” in the sentence. In its most basic form, a sentence may have just two words, a subject and a predicate. - I swam. - In this example, “I” is the subject and “swam” is the predicate. Modifiers may be added to the sentence to make it more complex. - I swam yesterday. - This sentence adds an adverb to tell when I swam. Importance of Compound Predicates In general, compound predicates combine similar ideas or concepts and make writing (and speaking) more efficient, effective, and concise. Example Without Compound Predicate: - Tomorrow, I will go to school. I will also attend soccer practice. I will also complete my assignments. These sentences are redundant and unnecessary. A compound predicate will join these ideas to make the writing less wordy. Example with Compound Predicate: - Tomorrow, I will go to school, attend soccer practice, and complete my assignments. This example is much more efficient and concise. Redundancy is avoided by adding the compound predicate. The compound predicate is necessary for writing and speaking well in English. Compound Predicates Have One Subject It is important to note that compound subjects and predicates are different things. Additionally, compound sentences and predicates are different things. Compound predicates have only one subject. That is, the same subject is “doing” more than one verb or verb phrase. Compound Predicate Examples: - The cat scratched and meowed at the door. In this example, “the cat” is the subject. It is completing the two actions, “scratched and meowed.” - Regulators in San Francisco have blamed the website for depleting the supply of rental housing and attempted to impose restrictions on its growth. –The Wall Street Journal In this example, the regulators “have blamed” and “attempted to impose restrictions.” This sentence has two predicates but not two subjects. A compound predicate is different from a compound sentence. A compound sentence contains more than one independent clause (more than one subject) joined by a conjunction. Compound Sentence Examples: - The cat meowed, and the dog barked. In this example, there are two subjects, “the cat” and “the dog.” Furthermore, there are two independent clauses as each subject also has a predicate (“meowed” and “barked,” respectively). - That pattern is good for cities, but it raises difficult questions about who will be able to afford them in the future.
–The Washington Post In this example, there are, again, two subjects, “that pattern” and “it.” A compound subject is different from a compound predicate and a compound sentence. A compound subject is made up of one independent clause with two subjects “doing” the action. Compound Subject Example: - She and I went to the mall. In this example, there are two subjects, “she” and “I.” Here are a few example sentences where you can determine if they contain compound subjects, compound predicates, or compound sentences. Which of the following sentences has a compound predicate? - Before I go to bed, I brush my teeth and wash my face. - The cardiologist said Ellie should reduce her sugar intake and increase her level of exercise. - My mother arrived at noon, but the party started at ten. See answers below. Summary: What are Compound Predicates? Define compound predicate: the definition of compound predicate is a predicate that has two or more verbs or verb phrases. In summary, a compound predicate: - is a part of an independent clause - contains one subject and multiple verbs/verb phrases - joins the verbs/verb phrases with a conjunction Also, it is important to remember that a compound subject and predicate are not the same thing. 1. Compound Predicate 2. Compound Predicate 3. Compound Sentence
Scuba diving is swimming underwater using SCUBA – Self Contained Underwater Breathing Apparatus. Using a cylinder of compressed gas to breathe (usually air, but sometimes other gases), scuba divers can stay underwater much longer than would be possible by just holding their breath – for hours or even days! With the assistance of equipment such as scuba masks, breathing regulators, buoyancy devices, fins, and gauges scuba divers can explore the underwater world. Modern scuba diving is very safe and easy to learn. All basic skills can be learned in as little as three days.
Background Causes of the American Revolution Differentiated Instruction Lesson Plan Background Causes of the American Revolution The “Background Causes of the American Revolution” differentiated instruction lesson plan uses a PowerPoint for the mini-lesson to deliver the content. It consists of five economic factors and five political factors. Each of the background causes is explained in the PowerPoint. An example of what to expect on each slide is provided below for the Triangular Trade. The economic factors are: - Favorable Balance of Trade - Triangular Trade Triangular Trade: A trade among the Americas, Europe, and Africa, combining European capital (in the first leg) with African labor (in the second) and British-colony resources (in the third): *From Europe – manufactured goods: copper, textiles, silks imported from Asia, glassware, ammunition, guns, knives and other finished products. In Africa, sailors unloaded European goods and filled the ships with indigo and human cargo: men, women and children. *The “middle passage” brought the newly enslaved Africans to the Americas or to Caribbean islands. *At least ten percent of the captives died en route due to unbelievably bad conditions. When the ships encountered fierce weather, casualties were higher. *From the Americas: raw materials returned to Europe. Sugar, coffee, tobacco and – especially – cotton were processed in British factories. Those materials provided workers with jobs and business owners with profits. - Rise of an influential business community in the colonies - Cost of colonial wars against the French Economic Factors were Background Causes of the American Revolution The political factors are: - The role of the British Civil War - Periods of political freedom in the colonies - Impact of the French and Indian War - Political thought of the Enlightenment - New social relationships between European powers and the American colonies Political Factors including the Albany Plan of Union were background causes of the American Revolution To find out more about the differentiated instruction lesson plan, click on the links below: Where can I find the Background Causes of the American Revolution differentiated instruction lesson plan? What is included in the “Background Causes of the American Revolution” differentiated instruction lesson plan? You can find all of my social studies differentiated instruction lesson plans and PowerPoints by clicking on the link below: US History Lesson Plans Kasha Mastrodomenico (Connect with me on Linkedin!)
1937 Nobel Prize in Physics Clinton J Davisson Proving the wave nature of matter through the experimental discovery of the diffraction of electrons by crystals. Clinton J Davisson shared the 1937 Nobel Prize in Physics with George Thomson, who had independently shown that the de Broglie hypothesis was correct. The Nobel Prize was the first awarded to a Bell Labs researcher. The Royal Swedish Academy recognized their work by writing, “The investigation methods that you and Professor Thomson have elaborated and the further research work carried out by both of you have provided science with a new, exceedingly important instrument for examining the structure of matter, an instrument constituting a very valuable complement to the earlier method which makes use of X-ray radiation. The new investigations have already furnished manifold new, significant results within the fields of physics and chemistry and of the practical application of these sciences.” In the early years of the 20th century, a conceptual revolution was taking place in physics and it would fundamentally change the way scientists looked at the world. The seeds of this revolution were sown by German physicist Max Planck, who, while studying radiation from hot objects, had postulated that energy came in discrete packets called quanta. In 1905, the young Albert Einstein explained the photoelectric effect, a puzzling phenomenon in which metals eject electrons in response to incident light, using the idea of quantized light particles. At the same time, in England, physicist Ernest Rutherford was investigating the structure of the atom. Through Rutherford’s experiments, it became clear that atoms consisted of massive, positively charged nuclei surrounded by less massive, negatively charged electrons. In 1912, one of Rutherford’s students, Niels Bohr, incorporated quanta into a description of the physics of the atom. Bohr’s model of the hydrogen atom—where the electrons orbited the positively charged nucleus in discrete energy levels—proved to be very successful, and his ideas were extended to form the basis of quantum mechanics. One of the cornerstones of modern physics, quantum mechanics is considered to be among the most successful theories of science. Bell Labs research scientist Clinton Davisson had been following the developments in quantum mechanics from the time he was a student. In 1923, Louis de Broglie, observing that light has some of the properties of ordinary particles, inverted the argument: If light can be a particle, why can’t a particle be a wave? He constructed a theory that extended the results to all particles. The de Broglie hypothesis implied that wave behavior was a universal property of matter. In 1925, Davisson and his Bell Labs colleague Lester Germer began a series of experiments that would, within two years, incontestably prove de Broglie’s hypothesis of the wave nature of matter. “Davisson’s thoroughness was a most outstanding characteristic. He planned every experiment in the greatest detail before undertaking it. The precision of this planning is almost unbelievable.” — Mervin Kelly In his hypothesis, de Broglie found a simple relation between the velocity of a particle and the wavelength associated with that particle: the greater the particle’s velocity, the shorter the wavelength. Thus, if the velocity of the particle is known, it is possible to calculate, by means of de Broglie’s formula, the wavelength, and if the wavelength is known, it is possible to calculate the velocity of the particle.
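As a concrete check of de Broglie's relation, here is a short Python sketch (my own; the 54 V accelerating voltage is the figure usually quoted for the Davisson-Germer experiment and is not taken from this article):

```python
import math

H = 6.626e-34          # Planck constant, J*s
M_E = 9.109e-31        # electron rest mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m e V)."""
    return H / math.sqrt(2 * M_E * E_CHARGE * volts)

# Electrons accelerated through 54 V have a predicted wavelength of about
# 1.67 angstroms, comparable to the atomic spacing in a nickel crystal,
# which is why the crystal can diffract them.
print(f"{electron_wavelength(54) * 1e10:.2f} angstroms")
```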
For their experiments, Davisson and Germer used a cubic nickel block that was placed inside a glass vacuum tube containing an electrode. The electrode bombarded the nickel block with electrons. Using a semicircular compass at the center of the tube, Davisson and Germer measured the angle at which the electrons were scattered by the nickel. Initially, the electrons behaved as particles, since the atoms in the nickel block were arranged randomly. However, after an accidental contamination, the physicists baked the nickel at a high temperature to get rid of impurities. This also forced the nickel atoms into a regular crystal lattice structure. Suddenly, the electrons began emerging in regular geometrical patterns that showed they had wavelike properties. Detailed experiments showed the velocities at which the incoming electrons produced outgoing beams. Using the experimental setup, the Bell Labs physicists found the wavelengths. Since the wavelengths of the mechanical waves had been found, and since the velocities of the corresponding electrons were known, it was possible to check de Broglie’s formula. Davisson found that the theory agreed with the experiments within 1 percent to 2 percent accuracy. Davisson retired from Bell Labs in 1946 after 29 years of consistent research. He died on February 1, 1958, at the age of 76.
The field of pea genetics received a boost today with the publication of a resource of pea mutants in Genome Biology. The pea, Pisum sativum, is one of the most famous tools used in genetics: school children today learn that the 19th century monk Gregor Mendel studied the pea – for example, whether the seeds are wrinkled or not – and showed that this and other traits are inherited in a predictable way. Peas have kept many of their other genetic clues secret, however, as they are unsuited to the genetic modification techniques that are commonly used to work with plants. Scientists, led by Abdelhafid Bendahmane at the French National Agricultural Research Institute (INRA), used an early flowering pea cultivar, called Caméor, to study mutant plants at different developmental stages (from seedling through to fruit maturation). The team studied DNA samples from 4,704 plants and identified many essential genes. From this they created a database called UTILLdb, which describes each mutant plant at each developmental stage studied, and incorporates digital images of the plants. UTILLdb contains phenotypic as well as sequence information on mutant genes, and can be searched for plant traits of interest. This new tool has implications both for basic science and for crop improvement, and the authors hope that it will fulfill the expectations of crop breeders and scientists who use the pea. The full article was published today in Genome Biology and has received considerable attention in the media. The London Times features both a news item on the science and a lead editorial celebrating the preeminent role of the humble pea in the progress of scientific understanding.
Everybody knows that nothing in the universe is static. In the Milky Way, billions of stars orbit the galactic center. Some stars, like our Sun, are fairly consistent, keeping a distance of about 30,000 light-years from the galactic center and completing an orbit every 230 million years. But the galaxy is less like an orderly dance and more like a skating rink filled with toddlers, and that makes it a dangerous place. This article is about how to move the Sun, which is not an easy task at all. About the solar neighborhood Our solar neighborhood is constantly changing, with stars moving at speeds around 100 kilometers per second. Right now there appear to be no threatening objects within about 5 light-years of us, but we might get unlucky in the future. At some point we might discover that a nearby star is going supernova, or that a massive object passing by is showering the inner solar system with asteroids. If something like this happens, we would likely know about it at least 1,000 years in advance, if not 2 million years. Moving the solar system Even with that warning, there is nothing anyone could do while staying put. The only way to escape such a threat is to move the whole solar system out of the way. To move the solar system, a stellar engine is required: a megastructure used to steer a star through the galaxy. Building one only becomes possible with great advances in technology and civilization, and it means thinking millions of years ahead of time. But how is it possible to move the hundreds of thousands of objects that are in the solar system? Moving the objects of the solar system The good news is that it is not necessary to move each and every object individually. You can simply move the Sun, and all the other objects will follow it, bound by its gravity. This is how the process of moving the Sun begins: all the objects will follow the Sun wherever it goes. That still leaves the questions of what a stellar engine is and how it works. About the Stellar Engine Stellar engines come in two categories, passive and active. The simplest kind of passive stellar engine is known as the Shkadov thruster, which is essentially a giant mirror. It works on the same principle as a rocket: photons carry momentum, so redirecting them produces thrust. For example, if an astronaut in space turned on a flashlight, the light would push them backwards very slowly. A Shkadov thruster works the same way, but far better than an astronaut's flashlight, because the Sun produces an enormous number of photons, about 10^45 per second. Idea of the Shkadov thruster The basic idea of the Shkadov thruster is to reflect about half of the Sun's radiation, creating thrust that slowly shifts the Sun to a different place. For the Shkadov thruster to work properly, it needs to hold its position relative to the Sun: the Sun's gravity pulls the mirror inward while the Sun's radiation pushes it outward, keeping the mirror in balance. This means the mirror needs to be extremely light, made of micro-thin reflective foils from materials like aluminum and its alloys. The shape of the mirror The shape of the mirror is also an important factor.
Enveloping the Sun in a giant spherical shell will not work, because the shell would refocus the light back onto the Sun, heating it up and creating a lot of unpleasant problems. Instead, a parabola-shaped mirror is used, which sends most of the photons past the Sun in the same direction and maximizes the thrust. To prevent accidentally burning or freezing the Earth with too much or too little reflected light, the only place to put the Shkadov thruster is over one of the Sun's poles. This means the Sun can be moved vertically relative to the plane of the solar system, and in one direction only, which limits our options considerably. This is how the process of moving the Sun continues. For a civilization capable of building such a system, the concept is not complicated, but the structure is still hard to build. With it, the solar system could probably be moved by light-years over 230 million years, and over a few billion years it would give complete control over the Sun's orbit in the galaxy. In the short term, though, this process may not be fast enough to dodge a deadly supernova. That is why we thought of doing something better. Formation of the Caplan thruster We asked our astrophysicist friend if he could design something bigger, the fastest stellar engine yet. He did, and wrote about it in a journal. The name of the new stellar engine is the Caplan thruster, and it works a lot like a rocket. It consists of a large space station that gathers matter from the Sun to perform nuclear fusion, consuming that matter and releasing an intense jet of radiation that helps push the Sun along. And so the process of moving the Sun continues. Requirement of fuel While pushing the Sun, the Caplan thruster requires a huge amount of fuel, millions of tonnes per second. It uses large electromagnetic fields to funnel hydrogen and helium from the solar wind into the engine. The helium is burned in an explosive fusion reactor, producing a jet of extremely hot radioactive oxygen that provides thrust. To prevent the engine from simply crashing into the Sun, it needs to balance itself: a second jet, of hydrogen accelerated in a particle accelerator with electromagnetic fields, is fired back toward the Sun, balancing the thrust. This engine could move the Sun by 50 light-years, and with the Caplan thruster it becomes much easier to steer the complete solar system.
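For a sense of scale, here is a back-of-the-envelope Python sketch (my own, not from the article or from the published stellar-engine papers) that estimates the photon thrust of an idealized Shkadov mirror. It assumes the mirror usefully redirects about half of the Sun's light, so the net thrust is roughly F = L/(2c); real designs would do worse, so treat the numbers as an upper bound.

```python
# Order-of-magnitude estimate for a Shkadov-style photon thruster.
# Assumption: roughly half the Sun's radiation is usefully redirected,
# giving a net thrust of about F = L / (2c). This is an upper bound.

L_sun = 3.83e26   # solar luminosity, W
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # mass of the Sun, kg
YEAR = 3.156e7    # seconds in a year

thrust = L_sun / (2 * c)   # net photon thrust, N
accel = thrust / M_sun     # acceleration of the Sun, m/s^2
dv = accel * 230e6 * YEAR  # speed gained over one galactic orbit (~230 Myr)

print(f"thrust       ~ {thrust:.1e} N")      # ~6.4e17 N
print(f"acceleration ~ {accel:.1e} m/s^2")   # ~3.2e-13 m/s^2
print(f"delta-v      ~ {dv / 1000:.1f} km/s over 230 million years")
```

Tiny as that acceleration looks, it compounds: a couple of kilometers per second of delta-v, accumulated over hundreds of millions of years, is what shifts the Sun's position by light-years.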
Our programme supports children to develop: - knowledge about the natural world and how it works; - knowledge about the made world and how it works; - their learning about how to choose, and use, the right tool for a task; - their learning about computers, how to use them and what they can help us to do; - their skills in putting together ideas about past and present and the links between them; - their learning about their locality and its special features; and - their learning about their own and other cultures. Our favourable adult-to-child ratios enable us to take the children out on Village Walks, to learn more about their environment and the local community. We use these trips to make observations of animals and plants, and to talk about what happens as the seasons change.
If you would like to find out more about vaccines, visit the Vaccine Knowledge Project, a source of independent information about vaccines and infectious diseases run by the Oxford Vaccine Group. How do vaccines work? 23 May 2018 To understand how vaccines work, it helps to look first at how the immune system works, because vaccines harness the natural activity of your immune system. This short animation explains how vaccines enable the body to make the right sort of antibodies to fight a particular disease.
If you’ve ever spent at least one night in a hotel room, at a friend’s house, or anywhere else away from home, you probably know how hard it is to get a good night’s sleep when you’re not in your own room. Scientists conducted an experiment and found that this strange phenomenon has a scientific explanation. We were impressed with the researchers’ findings and want to share them with our readers. Scientists Don’t Really Know Why We Need To Sleep Before we get into the experiment, let’s first answer the basic question: why do we need to sleep in the first place? Surprisingly, researchers don’t know exactly why animals and humans need to sleep. Most theories conclude that, among many other possible reasons, sleep is crucial for the restoration of brain cells and muscles. However, regardless of the reasons for and effects of sleep, sleep is a very inconvenient process in evolutionary terms. The brain shuts down for several hours, leaving the animal unable to recognize danger and protect itself. Therefore, animals such as whales and dolphins have developed a super-vigilant sleep system called uni-hemispheric slow-wave sleep, a sleep process in which only one half of the brain rests at a time. It turns out humans have developed a similar ability. Your Body Reacts To A New Place The phenomenon of struggling to fall asleep or get a good night’s sleep in a new place is called the first night effect (FNE). FNE is a common problem and an area of study among sleep researchers. However, there was no common understanding of why FNE occurs until studies by scientists at Brown University found scientifically grounded causes of it. It turns out that when you sleep in a new and unfamiliar place, your brain registers it as a potentially dangerous environment and doesn’t allow you to fall asleep completely. In other words, we have trouble sleeping because, like dolphins, only one hemisphere of our brain fully rests when we sleep in a new place. Yuka Sasaki, one of the scientists at Brown University, says that “our brains can have a miniature system of what whales and dolphins have.” The Experiment Proved The Imbalance Of Sleep A team of scientists from Brown University recruited 35 healthy volunteers, invited them to spend two nights in a sleep laboratory with a week-long interval between the nights, and measured their brain activity. They found an asymmetry in the depth of sleep between the left and right halves of the brain. The left hemisphere was not as deeply asleep as the right; it was more vulnerable and sensitive to strange (and therefore potentially dangerous) sounds. A week later, during the second night in the lab, the depth of sleep between the hemispheres was much more symmetrical. Researchers Shared How To Overcome FNE Although FNE is quite interesting as a phenomenon, it can cause many problems for those who experience it constantly. Lack of sleep can contribute to problems like obesity, high blood pressure, or diabetes; however, the situation is not hopeless. Scholars say that our brains can be trained to resist FNE when we experience it frequently. Yuka Sasaki explains it this way: “Human brains are very flexible.” Sleep researchers have come up with a number of practical tricks to help you overcome FNE and get a good night’s sleep no matter where you spend the night. Since FNE is linked to sleeping in an unfamiliar place, your goal is to make the place resemble your bedroom. Bring something familiar: your pillow, your favorite pajamas, or a hot drink that you usually prepare before going to bed.
Maintain your normal sleep routine: try to go to bed at the same time you usually do, and follow the same bedtime rituals you follow at home. Make the environment as familiar as possible: for example, try to book a hotel room with the same bed size you have at home; if you have a double bed at home, it might feel strange to sleep in a single one. Have you experienced the first night effect? How did you fight it?
What is bird flu? Bird flu, also known as avian flu, is a viral infection that spreads among birds. Most infected birds die of bird flu. Some kinds of bird flu are restricted to birds, but unfortunately, some can infect humans and other animals, causing a great number of deaths. Examples of these kinds are H5N1 and H7N9. Infection can be very hard to spot with the human eye, as birds do not always get sick when infected; some still look healthy. How common is bird flu? People may get bird flu by having close contact with infected birds or bird droppings. This means that everyone, of any age or gender, has some risk of getting bird flu. Since the first human case in 1997, H5N1 has killed nearly 60% of the people who have been infected. What are the symptoms of bird flu? If you have some of the symptoms below, you should be careful, as you may have bird flu. - Respiratory difficulties - High temperature - Muscle aches - Runny nose - Sore throat There may be some symptoms not listed above. If you have any concerns about a symptom, please consult your doctor. When should I see my doctor? If you have any of the signs or symptoms listed above, or have any questions, please consult your doctor. Because the symptoms of bird flu are similar to those of ordinary flu, people often misjudge their condition and miss the best window for treatment. What causes bird flu? Bird flu occurs naturally in wild waterfowl and can spread quickly to domestic poultry. The disease is transmitted to humans through contact with infected bird feces, nasal secretions, or secretions from the mouth or eyes. Properly cooked poultry or eggs from infected birds do not transmit bird flu, but eggs should never be served runny. Meat is considered safe if it has been cooked to an internal temperature of 80-90˚C. To ensure your safety, you should use tested meat and eggs. The H5N1 virus can survive for extended periods of time. Birds infected with H5N1 continue to release the virus in feces and saliva for as long as 10 days. What increases my risk for bird flu? There are many risk factors for bird flu, especially if you: - Raise poultry - Have returned from affected areas - Have had contact with infected birds - Eat undercooked poultry or eggs - Take care of infected patients - Live with an infected person Diagnosis & treatment The information provided is not a substitute for any medical advice. ALWAYS consult with your doctor for more information. How is bird flu diagnosed? Testing for bird flu uses the influenza A/H5 virus real-time RT-PCR primer and probe set. Diagnosis may also include the following: - Auscultation (a test that detects abnormal breath sounds) - White blood cell differential - Nasopharyngeal culture - Chest x-ray How is bird flu treated? Medications must be given within 48 hours after symptoms first appear for the best results. As there are several types of bird flu, treatments vary depending on the symptoms you have. The most common medications for bird flu are oseltamivir (Tamiflu) and zanamivir (Relenza). Patients need to be under a doctor’s observation during treatment. Your family or others in close contact with you might also be prescribed antivirals as a preventive measure, even if they have no symptoms. You will be placed in isolation to avoid spreading the virus to others. Lifestyle changes & home remedies What are some lifestyle changes or home remedies that can help me manage bird flu?
To minimize the risk of infection, you should avoid: - Contact with infected birds. - Consuming undercooked poultry and eggs. - Buying meat from open-air markets. If you have any questions, please consult with your doctor to better understand the best solution for you. Hello Health Group does not provide medical advice, diagnosis or treatment. Frequently Asked Questions About Bird Flu. http://www.webmd.com/cold-and-flu/what-know-about-bird-flu. Accessed July 20, 2016. Bird Flu. http://www.healthline.com/health/avian-influenza#Overview1. Accessed July 20, 2016. Review Date: February 3, 2017 | Last Modified: February 3, 2017
Blanford's foxes are small foxes with large ears and long, bushy tails with long, dark guard hairs. They range in mass from 1.5 to 3 kg, and in head-to-tail length from 70 to 90 cm (mean tail length is 323 mm; mean body length is 426 mm). Males and females are similar in appearance. The snout is slender. (Nowak, 1999; Yom-Tov and Geffen, 1999) Blanford's foxes have cat-like movements and appearance. Coloration is black, brown, or grey, and is sometimes blotchy. The flanks are lighter than the back, which has a black stripe running down it, and the underside is yellow. The tip of the tail is usually dark but can be white. Males have 3 to 6% longer forelegs and bodies than females. Blanford's foxes typically mate from December to February. They are strictly monogamous. The gestation period is 50 to 60 days, after which the female gives birth to a litter of 1 to 3 kits. The altricial young are nursed for 30 to 45 days. Young become sexually mature between 8 and 12 months of age. (Geffen, et al., 1992; Nowak, 1999; Yom-Tov and Geffen, 1999) Females nurse their young for 30 to 45 days. Young are dependent on their mothers until they can forage on their own. Foxes have relatively altricial young, and usually give birth to them in a secluded den, where they can develop under the care of their mother. Because the mating system of Blanford's foxes is monogamous, and breeding pairs maintain minimally overlapping ranges, the male may also be considered to provide some care to the offspring, even if only in the form of maintaining an area from which food is supplied. Males have been observed grooming juveniles. Young remain in their natal range until October or November of the year of their birth. (Nowak, 1999) The average lifespan of Blanford's foxes is 4 to 5 years, and does not exceed 10 years in the wild. Old age and rabies are the primary recorded causes of mortality. (Yom-Tov and Geffen, 1999) Blanford's foxes are strictly nocturnal, solitary hunters. They do not exhibit a change in their daily activity with season. They generally become active soon after dusk and are active throughout the night. (Geffen, et al., 2005; Geffen, et al., 1992; Nowak, 1999) In Israel, Blanford's foxes occur at population densities of up to 2 per square kilometer. They are one of the few fox species to regularly climb, scaling cliffs with ease. Their especially long tail is used as a counterbalance when jumping and climbing. (Geffen, et al., 2005) Foraging home ranges averaged 1.1 square kilometers, plus or minus 0.7 square kilometers. Monogamous pairs occupy territories of 1.6 square kilometers, with little overlap between territories. (Geffen and MacDonald, 1992) Like other canids, Blanford's foxes have keen eyesight, smell, and hearing. They communicate with chemical cues and with vocalizations. Blanford's foxes are omnivorous, eating mostly insects and fruit. Prey includes insects such as beetles, locusts, grasshoppers, ants, and termites. The primary wild fruits eaten are two species of caperbush (Capparis cartilaginea and Capparis spinosa), Phoenix dactylifera, Ochradenus baccatus, Fagonia mollis, and Graminea species. Fecal samples have up to 10% vertebrate remains as well. In Pakistan they have been recorded eating agricultural crops, including melons, grapes, and Russian olives. (Geffen, et al., 2005; Geffen, et al., 1992; Nowak, 1999) Blanford's foxes hunt alone the majority of the time. Even mated pairs tend to forage independently. They rarely cache food.
(Geffen, et al., 2005) The main predators of these foxes are humans, although one case of a Blanford's fox being killed by a red fox (Vulpes vulpes) has been recorded. Blanford's foxes are not hard to catch, showing little fear of traps or humans. (Geffen, et al., 2005; Yom-Tov and Geffen, 1999) Blanford's foxes help to control rapidly growing small mammal populations by preying on mammals such as rodents. They may have a similar effect on insect populations. Because they are frugivorous, they likely play some role in dispersing seeds. (Geffen, et al., 1992; Yom-Tov and Geffen, 1999) The pelts of Blanford's foxes are valuable and they are hunted. Because of their diet, this species probably controls rodent and insect populations which might otherwise have a negative impact on crops. (Yom-Tov and Geffen, 1999) Blanford's foxes cause domestic crop damage in some areas. (Geffen and MacDonald, 1992) Trapping and hunting have caused a large decline in the numbers of these foxes. They are protected throughout Israel, and the majority of their habitat is in protected areas. Development in other parts of their range may pose a risk to populations. (Nowak, 1999) Mitochondrial DNA evidence suggests that Blanford's foxes and fennec foxes are sister taxa. (Geffen, et al., 2005) Tanya Dewey (editor), Animal Diversity Web. Marty Heiser (author), University of Wisconsin-Stevens Point; Chris Yahnke (editor), University of Wisconsin-Stevens Point. Glossary: - Ethiopian: living in sub-Saharan Africa (south of 30 degrees north) and Madagascar. - Palearctic: living in the northern part of the Old World; in other words, Europe, Asia, and northern Africa. - acoustic: uses sound to communicate. - agricultural: living in landscapes dominated by human agriculture. - altricial: young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching. - bilateral symmetry: having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria. - chemical: uses smells or other chemicals to communicate. - cryptic: having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect. - desert or dunes: in deserts, low (less than 30 cm per year) and unpredictable rainfall results in landscapes dominated by plants and animals adapted to aridity. Vegetation is typically sparse, though spectacular blooms may occur following rain. Deserts can be cold or warm, and daily temperatures typically fluctuate. In dune areas vegetation is also sparse and conditions are dry. This is because sand does not hold water well, so little is available to plants. In dunes near seas and oceans this is compounded by the influence of salt in the air and soil. Salt limits the ability of plants to take up water through their roots. - endothermic: animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds. - iteroparous: offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes). - monogamous: having one mate at a time.
- motile: having the capacity to move from one place to another. - mountains: this terrestrial biome includes summits of high mountains, either without vegetation or covered by low, tundra-like vegetation. - native range: the area in which the animal is naturally found, the region in which it is endemic. - nocturnal: active during the night. - omnivore: an animal that mainly eats all kinds of things, including plants and animals. - seasonal breeding: breeding is confined to a particular season. - sedentary: remains in the same area. - sexual: reproduction that includes combining the genetic contribution of two individuals, a male and a female. - tactile: uses touch to communicate. - temperate: that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle). - terrestrial: living on the ground. - tropical savanna and grassland: a terrestrial biome. Savannas are grasslands with scattered individual trees that do not form a closed canopy. Extensive savannas are found in parts of subtropical and tropical Africa and South America, and in Australia. - savanna: a grassland with scattered trees or scattered clumps of trees, a type of community intermediate between grassland and forest. See also Tropical savanna and grassland biome. - temperate grassland: a terrestrial biome found in temperate latitudes (>23.5° N or S latitude). Vegetation is made up mostly of grasses, the height and species diversity of which depend largely on the amount of moisture available. Fire and grazing are important in the long-term maintenance of grasslands. - visual: uses sight to communicate. - viviparous: reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female. References: Geffen, E., R. Hefner, P. Wright. 2005. "Blanford's fox (Vulpes cana)" (On-line). IUCN Canid Specialist Group. Accessed September 27, 2007 at http://www.canids.org/species/Vulpes_cana.htm. Geffen, E., D. MacDonald. 1993. Activity and Movement Patterns of Blanford's Foxes. Journal of Mammalogy, 74(2): 455-463. Geffen, E., D. MacDonald. 1992. Small Size and Monogamy: Spatial Organization of Blanford's Foxes, Vulpes cana. Animal Behaviour, 44: 1123-1130. Geffen, E., H. Reuven, D. MacDonald, M. Ucko. 1992. Diet and Foraging Behavior of Blanford's Foxes, Vulpes cana, in Israel. Journal of Mammalogy, 73(2): 395-402. Nowak, R. 1999. Walker's Mammals of the World. Baltimore and London: Johns Hopkins University Press. Yom-Tov, Y., E. Geffen. 1999. "IUCN Canid Specialist Group" (On-line). Accessed September 15, 2001 at http://www.canids.org/SPPACCTS/vcana.htm.
Today we began reading one of my favourite picture books, Eve Bunting's "A Day's Work". Bunting's work in general appeals to me because she is not afraid to address challenging themes like poverty, illiteracy and immigration, and she does so powerfully and with great effectiveness. Her stories are beautifully told, and excellent for use with elementary school students, because they provide a foundation for "grand conversations" in the classroom. We began by looking at a mess of vocabulary I had pulled out of the book (immigrant, chickweed, replanted, lied, extra food, parking lot, etc.), and made predictions about the story by using as many of the words as possible in a paragraph. Later this week, we'll revisit the book, and discuss the main ideas and author's message in more detail. Then I'll have Alex and Simon respond to one or more of the following in a blog post: - At first, when Ben came back and saw the mistake Francesco and his Grandfather had made, he was very angry. Why do you think his feelings changed after Francesco and his Grandfather talked to him? - At the end of the story, Ben says that "Grandpa already knows the important things". What does he mean by this? Why are these things "important"? - How does Francesco learn his lesson about "the important things"? - Have you ever learned a hard lesson? What important things did you learn from it? If you're looking for a great book to teach honesty and integrity in a current, relevant setting, I highly recommend "A Day's Work" for your home or classroom!
Plastics provide the world with a material that can be shaped with little effort and time. Plastic is a material of high viscosity when molten which, when hardened, becomes rigid and durable. Plastics can be molded into any form to achieve durable and lightweight products. Plastic cups are a familiar example to most individuals, and they have brought about great changes in the design and manufacture of fast-moving consumer goods. Commodities can be conveniently stored in plastic cups for several days without damage. They are a convenient alternative to the fragile and sometimes expensive glassware used in homes and offices. Plastic cups are manufactured at a factory, usually in volume, to reduce the selling price. The raw plastic is brought to the factory, where it is first cleaned of dirt and other harmful particles. This plastic is raw and requires chemical changes to become the viscous material needed to produce the final product. Once the plastic has been processed, the entire batch is placed in a burner, where the heating process takes place. This process involves high temperatures and precise timing. If the plastic is overheated, it may burn and become unsuitable for use; on the other hand, it will be too stiff to mold if it is not heated sufficiently. Once the plastic has been heated and reached its molten state, the molding process begins. Thousands of prefabricated molds run along the assembly line, and the molten plastic is cast into each mold. It is left to set, and the final product is ready. The use of various shapes, colors and designs is common in the production of plastic cups. Dyes are added to the molten plastic to give the cups different colors. Plastic cups can be purchased at the local store, but it is recommended to buy recycled plastic cups for the greatest environmental benefit.
Students should have opportunities to plan and carry out several different kinds of investigations during their K-12 years. At all levels, they should engage in investigations that range from those structured by the teacher—in order to expose an issue or question that they would be unlikely to explore on their own (e.g., measuring specific properties of materials)—to those that emerge from students' own questions. (NRC Framework, 2012, p. 61) chemical equilibrium - a state that is reached when the rate of the forward reaction equals the rate of the reverse reaction. dynamic equilibrium - a state of equilibrium in which two opposing processes occur simultaneously with no net change. equilibrium - a condition in which all acting influences are canceled by others, resulting in a stable, balanced, or unchanging system. equilibrium constant (K_eq) - the ratio of the concentrations of products divided by the concentrations of reactants, each raised to its stoichiometric coefficient. Le Chatelier's Principle - a principle that states that when the equilibrium of a system is disturbed or stressed, the system adjusts to reestablish equilibrium by minimizing or countering the stress. static equilibrium - a state of equilibrium in which no movement occurs. The Haber Process - the process of synthesizing ammonia from nitrogen and hydrogen gases. Georgia Standards of Excellence SC2: Obtain, evaluate, and communicate information about the chemical and physical properties of matter resulting from the ability of atoms to form bonds. SC2.g: Develop a model to illustrate that the release or absorption of energy (endothermic or exothermic) from a chemical reaction system depends upon the changes in total bond energy. SC4: Obtain, evaluate, and communicate information about how to refine the design of a chemical system by applying engineering principles to manipulate the factors that affect a chemical reaction. SC4.a: Plan and carry out an investigation to provide evidence of the effects of changing concentration, temperature, and pressure on chemical reactions. (Clarification statement: Pressure should not be tested experimentally.) SC4.b: Construct an argument using collision theory and transition state theory to explain the role of activation energy in chemical reactions. (Clarification statement: Reaction coordinate diagrams could be used to visualize graphically the changes in energy (direction of flow and quantity) during the progress of a chemical reaction.) SC4.d: Refine the design of a chemical system by altering the conditions that would change forward and reverse rates and the amount of products at equilibrium. (Clarification statement: Emphasis is on the application of Le Chatelier's principle.) Request Teacher Toolkit The Chemistry Matters teacher toolkit provides instructions and answer keys for labs, experiments, and assignments for all 12 units of study. GPB offers the teacher toolkit at no cost to Georgia educators. Complete and submit this form to request the teacher toolkit. You only need to submit this form one time to get materials for all 12 units of study.
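To make the equilibrium-constant and Haber-process definitions above concrete, here is a minimal Python sketch that evaluates K_eq for N2 + 3 H2 ⇌ 2 NH3 from a set of equilibrium concentrations. The helper function and the concentration values are invented for illustration, not taken from the toolkit.

```python
# K_eq for the Haber process: N2 + 3 H2 <=> 2 NH3
# K_eq = [NH3]^2 / ([N2] * [H2]^3)

def equilibrium_constant(products, reactants):
    """K_eq from lists of (molar concentration, stoichiometric coefficient)."""
    numerator = 1.0
    for conc, coeff in products:
        numerator *= conc ** coeff
    denominator = 1.0
    for conc, coeff in reactants:
        denominator *= conc ** coeff
    return numerator / denominator

# Illustrative equilibrium concentrations in mol/L (not measured values):
nh3, n2, h2 = 0.30, 0.10, 0.20

k_eq = equilibrium_constant(products=[(nh3, 2)],
                            reactants=[(n2, 1), (h2, 3)])
print(f"K_eq = {k_eq:.1f}")  # 0.09 / (0.1 * 0.008) = 112.5
```

A K_eq much greater than 1 means products dominate at equilibrium; by Le Chatelier's principle, removing NH3 or adding more N2 or H2 would push the reaction further toward products.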
In mathematics, it is often difficult to work with large or complex numbers. When you don't need an exact answer but just an estimate, rounding is a useful practice. Rounding makes numbers easier to work with by reducing the digits in the number while keeping the value close to the original number. You can round a number to any place value depending on how much you want to change the original value of the number. You can use the rounded number in a math problem to get an approximate answer. Underline the digit you plan to round Determine which place value you are going to round the number to. Underline the digit in that place value position. For example, if you want to round to the nearest hundred, underline the digit in the hundreds place. When rounding the number 2,365 to the nearest hundred, underline the 3 because it is in the hundreds place. Consult the digit to the right of the underlined digit Look at the digit to the right of your underlined numeral. Determine if it is greater than or equal to 5. If so, you will round your underlined digit up. If the digit to the right of your underlined numeral is less than 5, you will round your number down. In the example 2,365, look at the digit to the right of the hundreds place, which is 6. Since this is greater than 5, you will round up. Round up when the digit to the right is 5 or greater When rounding up, add 1 to your underlined numeral and then change all of the digits to the right of the underlined numeral to zeros. In the example 2,365, you will change the 3 to a 4 and change the 6 and 5 to zeros, so your rounded number would be 2,400. Round down when the digit to the right is less than 5 When rounding down, the underlined numeral stays the same and all of the digits to the right of it change to zeros. For example, to round the number 4,623 to the nearest hundred, your result will be 4,600 because the digit to the right of the hundreds place is less than 5. Make sure you underline the correct place value in your number when rounding, especially when working with decimals. If you must round to the nearest hundredth, your result will be very different from the result of rounding to the nearest hundred.
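The procedure above translates directly into a few lines of code. Here is a minimal Python sketch of round-half-up rounding for non-negative integers (the function name is my own; note that Python's built-in round() uses a different tie-breaking rule, so it is not a drop-in substitute):

```python
def round_to_place(n, place):
    """Round a non-negative integer n to the given place value
    (10, 100, 1000, ...) using the round-half-up rule described above."""
    digit_to_right = (n // (place // 10)) % 10  # digit just right of the underlined one
    rounded_down = (n // place) * place         # zero out everything below `place`
    if digit_to_right >= 5:
        return rounded_down + place  # round up: underlined digit + 1
    return rounded_down              # round down: underlined digit unchanged

print(round_to_place(2365, 100))  # 2400: the 6 to the right of the 3 rounds it up
print(round_to_place(4623, 100))  # 4600: the 2 to the right of the 6 rounds it down
```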
Classical Marxism refers to the economic, philosophical and sociological theories expounded by Karl Marx and Friedrich Engels as contrasted with later developments in Marxism, especially Leninism and Marxism–Leninism. Karl Marx (5 May 1818, Trier, Germany – 14 March 1883, London) was an immensely influential German philosopher, sociologist, political economist and revolutionary socialist. Marx addressed a wide range of issues, including alienation and exploitation of the worker, the capitalist mode of production and historical materialism, although he is most famous for his analysis of history in terms of class struggles, summed up in the opening line of the introduction to The Communist Manifesto: "The history of all hitherto existing society is the history of class struggles". The influence of his ideas, already popular during his life, was given added impetus by the victory of the Russian Bolsheviks in the 1917 October Revolution, and there are few parts of the world which were not significantly touched by Marxian ideas in the course of the twentieth century. As the American Marx scholar Hal Draper remarked: "[T]here are few thinkers in modern history whose thought has been so badly misrepresented, by Marxists and anti-Marxists alike". Marx studied under one of Hegel's pupils, Bruno Bauer, a leader of the circle of Young Hegelians to whom Marx attached himself. However, from 1841 he and Engels came to disagree with Bauer and the rest of the Young Hegelians about socialism and also about the usage of Hegel's dialectic, and they progressively broke away from German idealism and the Young Hegelians. Marx's early writings are thus a response to Hegel and German idealism and a break with the rest of the Young Hegelians. In his own view of his role, Marx "stood Hegel on his head" by turning the idealistic dialectic into a materialistic one, proposing that material circumstances shape ideas instead of the other way around. In this, Marx was following the lead of Feuerbach. His theory of alienation, developed in the Economic and Philosophical Manuscripts of 1844 (published in 1932), drew on Feuerbach's critique of the alienation of Man in God through the objectification of all his inherent characteristics (thus man projected onto God all the qualities which are in fact man's own qualities and which define "human nature"). But Marx also criticized Feuerbach for being insufficiently materialistic. English and Scottish political economy Marx built on and critiqued the most well-known political economists of his day, the British classical political economists. Marx critiqued Smith and Ricardo for not realizing that their economic concepts reflected specifically capitalist institutions, not innate natural properties of human society, and could not be applied unchanged to all societies. He proposed a systematic correlation between labour-values and money prices. He claimed that the source of profits under capitalism is value added by workers not paid out in wages. This mechanism operated through the distinction between "labour power", which workers freely exchanged for their wages, and "labour", over which asset-holding capitalists thereby gained control. This practical and theoretical distinction was Marx's primary insight, and allowed him to develop the concept of "surplus value", which distinguished his work from that of Smith and Ricardo.
Rousseau was one of the first modern writers to seriously attack the institution of private property and is sometimes considered a forebear of modern socialism and communism, though Marx rarely mentions Rousseau in his writings. In 1833, France was experiencing a number of social problems arising out of the Industrial Revolution. A number of sweeping plans of reform were developed by thinkers on the political left. Among the more grandiose were the plans of Charles Fourier and the followers of Saint-Simon. Fourier wanted to replace modern cities with utopian communities while the Saint-Simonians advocated directing the economy by manipulating credit. Although these programs did not have much support, they did expand the political and social imagination of Marx. Louis Blanc is perhaps best known for originating the social principle, later adopted by Marx, of how labor and income should be distributed: "From each according to his abilities, to each according to his needs". Pierre-Joseph Proudhon participated in the February 1848 uprising and the composition of what he termed "the first republican proclamation" of the new republic, but he had misgivings about the new government because it was pursuing political reform at the expense of the socio-economic reform, which Proudhon considered basic. Proudhon published his own perspective for reform, Solution du problème social, in which he laid out a program of mutual financial cooperation among workers. He believed this would transfer control of economic relations from capitalists and financiers to workers. It was Proudhon's book What Is Property? that convinced the young Karl Marx that private property should be abolished. Other influences on Marx Marx's revision of Hegelianism was also influenced by Engels' book The Condition of the Working Class in England in 1844, which led Marx to conceive of the historical dialectic in terms of class conflict and to see the modern working class as the most progressive force for revolution. Marx was influenced by Antique materialism, especially Epicurus (to whom Marx dedicated his thesis, The Difference Between the Democritean and Epicurean Philosophy of Nature, 1841) for his materialism and theory of clinamen which opened up a realm of liberty. Giambattista Vico propounded a cyclical theory of history, according to which human societies progress through a series of stages from barbarism to civilization and then return to barbarism. In the first stage—called the Age of the Gods—religion, the family and other basic institutions emerge; in the succeeding Age of Heroes, the common people are kept in subjection by a dominant class of nobles; in the final stage—the Age of Men—the people rebel and win equality, but in the process society begins to disintegrate. Vico's influence on Marx is obvious. Marx drew on Lewis H. Morgan and his social evolution theory. He wrote a collection of notebooks from his reading of Lewis Morgan, but they are regarded as being quite obscure and only available in scholarly editions. (However Engels is much more noticeably influenced by Morgan than Marx). Friedrich Engels (28 November 1820, Wuppertal, Prussia – 5 August 1895, London) was a 19th-century German political philosopher. He developed communist theory alongside his better-known collaborator, Karl Marx. In 1842, his father sent the young Engels to England to help manage his cotton factory in Manchester. 
Shocked by the widespread poverty, Engels began writing an account which he published in 1845 as The Condition of the Working Class in England in 1844. In July 1845, Engels went to England, where he met an Irish working-class woman named Mary Burns, with whom he lived until her death in 1863 (Carver 2003:19). Later, Engels lived with her sister Lizzie, marrying her the day before she died in 1877 (Carver 2003:42). These women may have introduced him to the Chartist movement, of whose leaders he met several, including George Harney. Engels actively participated in the Revolution of 1848, taking part in the uprising at Elberfeld. Engels fought in the Baden campaign against the Prussians (June/July 1849) as the aide-de-camp of August Willich, who commanded a Free Corps in the Baden-Palatinate uprising. Marx and Engels Marx and Engels first met in person in September 1844. They discovered that they had similar views on philosophy and on capitalism and decided to work together, producing a number of works including Die heilige Familie (The Holy Family). After the French authorities deported Marx from France in January 1845, Engels and Marx decided to move to Belgium, which then permitted greater freedom of expression than some other countries in Europe. Engels and Marx returned to Brussels in January 1846, where they set up the Communist Correspondence Committee. In 1847, Engels and Marx began writing a pamphlet together, based on Engels' The Principles of Communism. They completed the 12,000-word pamphlet in six weeks, writing it in such a manner as to make communism understandable to a wide audience, and published it as The Communist Manifesto in February 1848. In March, Belgium expelled both Engels and Marx. They moved to Cologne, where they began to publish a radical newspaper, the Neue Rheinische Zeitung. By 1849, both Engels and Marx had to leave Germany and moved to London. The Prussian authorities applied pressure on the British government to expel the two men, but Prime Minister Lord John Russell refused. With only the money that Engels could raise, the Marx family lived in extreme poverty. The contributions of Marx and Engels to the formation of Marxist theory have been described as inseparable. Marx's main ideas included: - Alienation: Marx refers to the alienation of people from aspects of their "human nature" ("Gattungswesen", usually translated as "species-essence" or "species-being"). He believed that alienation is a systematic result of capitalism. Under capitalism, the fruits of production belong to the employers, who expropriate the surplus created by others and in so doing generate alienated labour. Alienation describes objective features of a person's situation in capitalism—it is not necessary for them to believe or feel that they are alienated. - Base and superstructure: Marx and Engels use the "base–superstructure" concept to explain the idea that the totality of relations among people with regard to "the social production of their existence" forms the economic basis, on which arises a superstructure of political and legal institutions. To the base corresponds the social consciousness, which includes religious, philosophical and other main ideas. The base conditions both the superstructure and the social consciousness. A conflict between the development of material productive forces and the relations of production causes social revolutions, and the resulting change in the economic basis will sooner or later lead to the transformation of the superstructure.
For Marx, this relationship is not a one-way process—it is reflexive, and the base determines the superstructure in the first instance even as it remains the foundation of a form of social organization which is itself transformed as an element in the overall dialectical process. The relationship between superstructure and base is considered to be a dialectical one, ineffable in a sense except as it unfolds in its material reality in the actual historical process (which scientific socialism aims to explain and ultimately to guide). - Class consciousness: class consciousness refers to the awareness, both of itself and of the social world around it, that a social class possesses, and its capacity to act in its own rational interests based on this awareness. Thus class consciousness must be attained before the class may mount a successful revolution. However, other methods of revolutionary action have been developed, such as vanguardism. - Exploitation: Marx refers to the exploitation of an entire segment or class of society by another. He sees it as being an inherent feature and key element of capitalism and free markets. The profit gained by the capitalist is the difference between the value of the product made by the worker and the actual wage that the worker receives—in other words, capitalism functions on the basis of paying workers less than the full value of their labor in order to enable the capitalist class to turn a profit. - Historical materialism: historical materialism was first articulated by Marx, although he himself never used the term. It looks for the causes of developments and changes in human societies in the way in which humans collectively make the means to life, thus giving an emphasis through economic analysis to everything that co-exists with the economic base of society (e.g. social classes, political structures, ideologies). - Means of production: the means of production are a combination of the means of labor and the subject of labor used by workers to make products. The means of labor include machines, tools, equipment, infrastructure and "all those things with the aid of which man acts upon the subject of labor, and transforms it". The subject of labor includes raw materials and materials directly taken from nature. The means of production by themselves produce nothing—labor power is needed for production to take place. - Ideology: without offering a general definition of "ideology", Marx on several instances used the term to designate the production of images of social reality. According to Engels, "ideology is a process accomplished by the so-called thinker consciously, it is true, but with a false consciousness. The real motive forces impelling him remain unknown to him; otherwise it simply would not be an ideological process. Hence he imagines false or seeming motive forces". Because the ruling class controls the society's means of production, the superstructure of society as well as its ruling ideas will be determined according to what is in the ruling class's best interests. As Marx said famously in The German Ideology, "the ideas of the ruling class are in every epoch the ruling ideas, i.e. the class which is the ruling material force of society is at the same time its ruling intellectual force". Therefore the ideology of a society is of enormous importance, since it confuses the alienated groups and can create false consciousness such as commodity fetishism (perceiving labor as capital—a degradation of human life).
- Mode of production: the mode of production is a specific combination of productive forces (including human labour power and the means of production: tools, equipment, buildings and technologies, materials and improved land) and social and technical relations of production (including the property, power and control relations governing society's productive assets, often codified in law, cooperative work relations and forms of association, relations between people and the objects of their work and the relations between social classes). - Political economy: the term "political economy" originally meant the study of the conditions under which production was organized in the nation-states of the new-born capitalist system. Political economy then studies the mechanism of human activity in organizing material and the mechanism of distributing the surplus or deficit that is the result of that activity. Political economy studies the means of production, specifically capital, and how this manifests itself in economic activity. Marx's concept of class Marx believed that class identity was configured in the relations with the mode of production. In other words, a class is a collective of individuals who have a similar relationship with the means of production (as opposed to the more common-sense idea that class is determined by wealth alone, i.e. high class, middle class and poor class). Marx describes several social classes in capitalist societies, including primarily: - The proletariat: "those individuals who sell their labor power, (and therefore add value to the products), and who, in the capitalist mode of production, do not own the means of production". According to Marx, the capitalist mode of production establishes the conditions for the bourgeoisie to exploit the proletariat due to the fact that the worker's labor power generates an added value greater than his salary. - The bourgeoisie: those who "own the means of production" and buy labor power from the proletariat, who are recompensed by a salary, thus exploiting the proletariat. The bourgeoisie may be further subdivided into the very wealthy bourgeoisie and the petty bourgeoisie. The petty bourgeoisie are those who employ labor, but may also work themselves. These may be small proprietors, land-holding peasants, or trade workers. Marx predicted that the petty bourgeoisie would eventually be destroyed by the constant reinvention of the means of production and that the result of this would be the forced movement of the vast majority of the petty bourgeoisie into the proletariat. Marx also identified the lumpenproletariat, a stratum of society completely disconnected from the means of production. Marx also describes the communists as separate from the oppressed proletariat. The communists were to be a unifying party among the proletariat; they were educated revolutionaries who could bring the proletariat to revolution and help them establish the democratic dictatorship of the proletariat. According to Marx, the communists would support any true revolution of the proletariat against the bourgeoisie. Thus the communists aid the proletariat in creating the inevitable classless society (Vladimir Lenin takes this concept a step further by stating that only "professional revolutionaries" can lead the revolution against the bourgeoisie).
Marx's theory of history The Marxist theory of historical materialism understands society as fundamentally determined by the material conditions at any given time—this means the relationships which people enter into with one another in order to fulfill their basic needs, for instance to feed and clothe themselves and their families. In general, Marx and Engels identified five successive stages of the development of these material conditions in Western Europe. - The Three Sources and Three Component Parts of Marxism by Vladimir Lenin at the Marxists Internet Archive. - "MSN Encarta – France". Archived from the original on 2009-10-31. - "MSN Encarta – Communism". Archived from the original on 2009-10-31. - Proudhon, Pierre-Joseph (2007). What is property?. New York, NY: Cosimo Inc. ISBN 9781602060944. - "MSN Encarta - Giambattista Vico". Archived from the original on 2009-10-31. - The Campaign for the German Imperial Constitution. - For example see Franz Mehring: “The more their thought and their development became one, the more they each remained a separate entity and a man”, in Mehring: Karl Marx: The Story of His Life (1918), Chapter 8 Marx and Engels 2. An Incomparable Alliance - A Dictionary of Sociology, Article: Alienation - See Marx: A Contribution to the Critique of Political Economy (1859), Preface, Progress Publishers, Moscow, 1977, with some notes by R. Rojas, and Engels: Anti-Dühring (1877), Introduction General - Institute of Economics of the Academy of Sciences of the U.S.S.R. (1957). xiii. - Joseph McCarney: Ideology and False Consciousness, April 2005 - Engels: Letter to Franz Mehring, (London July 14, 1893), transl. by Donna Torr, in Marx and Engels Correspondence, International Publishers 1968 - Karl Marx, The German Ideology - The Communist Manifesto 80-87 - See in particular Marx and Engels, The German Ideology - Marx makes no claim to have produced a master key to history. Historical materialism is not "an historico-philosophic theory of the marche generale imposed by fate upon every people, whatever the historic circumstances in which it finds itself". (Marx, Karl, Letter to editor of the Russian paper Otetchestvennye Zapiskym, 1877) His ideas, he explains, are based on a concrete study of the actual conditions that pertained in Europe.
What are health disparities? A health disparity is defined as a higher burden of illness, injury, disability, or mortality experienced by one population group in relation to another. Racial differences in socioeconomic status, residential conditions and medical care are some examples of important contributors to racial differences in disease. How do health disparities differ from health care disparities? A health care disparity refers to differences in health coverage, access, or quality of care that are not due to differences in health needs. Why do racial/ethnic disparities exist in the health care system? There is no single, simple answer. Racial and ethnic minorities tend to receive lower-quality health care than Whites do, even when insurance status, income, age, and severity of conditions are comparable, says a 2002 report of the Institute of Medicine. Among the better-controlled studies performed to assess the reasons why, some causes have been found to be: health care delivery systems and access to health care (cultural/linguistic barriers, system fragmentation and incentives to physicians to limit services), physician biases, patient perceptions and clinical uncertainty when interacting with patients of color. Why do health disparities exist among African Americans? Although we do know health disparities exist, we cannot precisely say why they exist. GRAAHI was established to explore the causes of these disparities and will work to provide strategies to eliminate them by advancing understanding of the development and progression of the diseases that contribute to them. Cited causes of health disparities are numerous and include: lack of insurance, access to health care, poverty, education levels, cultural differences, language barriers, access to transportation, racism, environmental risks and differences in individual and community support. Why was a health institute established to study and improve health care only among African Americans and not all minority groups? While health disparities do exist among other minority groups, the disparities among the Black population are so alarming that an intentional effort, focused solely on the needs of African Americans, is important. Staggering data from the Kent County Health Department and other national health organizations indicate disproportionate rates of preventable illness, chronic disease, and premature death among African Americans.
Decision making is not limited to animals like humans or birds. Bacteria also make decisions with intricate precision. Imagine being so tiny that you are literally moved by water molecules bumping into you. This is what bacteria encounter perpetually. Now, imagine having no eyes, no ears, no sense of touch, no taste or smell. How would you know what or who was around you? How would you know whether food is more plentiful here than where you were a short time ago? This is where being able to sense important things like a food source is critical. Bacteria have this on their “mind” all the time.

Depending on the size of a bacterium's genome, these tiny organisms can sense hundreds to thousands of internal and external signals, like carbon sources, nitrogen sources, and pH changes. If these bacteria are motile (able to move around), they can compare how conditions are for them now against how they were a few seconds ago. That's right, bacteria have a memory, albeit a short one. If conditions are better, they can continue to move in a forward direction. If conditions are worse than a few seconds earlier, they can change direction and continue searching for better conditions in their environment to generate energy. But how do they decide?

I will focus on a lesser-known bacterium as my example, since I have the most knowledge about it. Azospirillum brasilense is found in soil around the world and interacts with the roots of cereal plants like corn and wheat. A. brasilense is almost always motile (except when attached to plant roots), searching for the best niche to provide energy for the cell. This bacterium can “make” its own usable form of nitrogen from nitrogen gas in the air through a process known as nitrogen fixation. This costs the cell a lot of energy, so the cells are searching for nitrogen sources as well as the carbon sources necessary for life. The microscopic world can be cutthroat. Having the ability to sense a greater variety of food compounds can mean the difference between being the predominant species in town and being on the fringe.

Back to the question of how these cells decide which direction to travel. One way is through a dedicated group of proteins that regulate how often the cell switches direction. This group of proteins controls chemotaxis, the movement of a cell in response to chemicals in its environment. The number of chemotaxis genes varies with the complexity of a bacterium's metabolism; the champion at the moment is 129, found in Pseudomonas syringae pv. oryzae str. 1_6. The proteins that actually sense the chemical signal are called methyl-accepting chemotaxis proteins (MCPs), or chemoreceptors. Azospirillum brasilense has 48 MCPs encoded in its genome. This does not mean, however, that A. brasilense cells can only sense 48 different chemicals. Most, but not all, of these MCPs do not interact with chemicals themselves but instead sense changes in the amount of energy the cell can extract from the environment it resides in. If things are good, the MCPs are inactive. However, if energy levels are lower than they were a few seconds before, the MCPs become active and begin the signal to change direction. And these MCPs are VERY sensitive to changes. For example, if A. brasilense cells are swimming in a liquid medium with 1,000 molecules of sugar, they will detect the addition or removal of just a few sugar molecules in the medium.
Now, move these cells immediately into a medium with 1,000,000,000,000 sugar molecules and they will still be able to detect the removal or addition of a few sugar molecules. This is called adaptation, and it allows the cells to remain sensitive no matter the concentrations of compounds they encounter.

MCPs by themselves would be useless if they did not interact with other cellular machinery. Lucky for bacteria, the MCPs are only the first of many specialized proteins for regulating the direction of travel. If a chemical binds to an MCP outside the cell, how does the inside of the cell get the message? When chemicals bind to an MCP, or stop binding to it, they slightly change the structure of the MCP. It is thought that association or dissociation of chemicals causes a rotation, like a slight turn of a door knob. Whether or not the MCPs are rotated changes the activity of another enzyme that interacts with MCPs, a protein called CheA (pronounced ‘key A'). When CheA is active, it converts two other proteins into their active forms, CheB and CheY. Confused yet? When CheY is in its active form, it interacts with the base of the flagellum and causes the flagellum to switch its direction of rotation, which in turn causes the bacterium to change the direction it is moving. The take-home message for the mechanism is this: when the MCPs are not interacting with chemicals (nutrients), CheA and CheY are active, and the flagellum switches rotation to make the cell go in a different direction. Hopefully for the cell, the new direction it is traveling will hold more nutrients that interact with the MCPs and block further changes in direction by CheA/CheY activity.

You might ask: if chemical compounds are always bound to the MCPs, what happens if the cell swims past the best environment and needs to turn around? Good question. That is where the other protein activated by CheA comes in, CheB. What makes this system unique is its ability to adapt to current conditions (nutrient chemical levels) so the cell can respond to new information. A set of protein enzymes acts upon a part of the MCP to change how rotated (think door knob again) the MCP is. If an MCP is constantly interacting with a nutrient, a protein called CheR modifies the MCP's structure and rotates it back toward its non-interacting form. When CheA is active, it can activate CheB, which reverses the changes caused by CheR. Ultimately, the whole system remains sensitive to new information over a wide range of nutrient concentrations (a 1,000-fold range). This whole system, its parts and its regulation took me several years to understand. I'm sure I did not do it justice, but hopefully you get a small glimpse into the truth about “simple” bacteria.
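To make the logic concrete, here is a minimal toy simulation of my own (not a published A. brasilense model; the variable names and the adaptation rate are invented for illustration). It captures the two ideas above: the cell compares "now" against a slowly updated memory of "a few seconds ago", and that memory is reset over time, CheR/CheB-style, so only changes in nutrient level matter, not absolute amounts.

```python
def simulate_chemotaxis(nutrient_levels, k_adapt=0.5):
    """Toy run/tumble decisions with adaptation.

    Receptor 'activity' is positive when conditions are worse than the
    cell's remembered set point; active receptors stand in for active
    CheA/CheY, which would flip the flagellum and make the cell tumble.
    The set point itself slowly tracks the current nutrient level, a
    crude stand-in for CheR/CheB methylation feedback.
    """
    memory = nutrient_levels[0]          # adapted set point
    decisions = []
    for nutrient in nutrient_levels:
        activity = memory - nutrient     # worse than remembered -> positive
        decisions.append("tumble" if activity > 0 else "run")
        memory += k_adapt * (nutrient - memory)  # adaptation resets the set point
    return decisions

# Nutrient rises, then falls: the cell keeps running while things improve
# and starts tumbling once the trend clearly worsens.
print(simulate_chemotaxis([1.0, 1.2, 1.5, 1.4, 1.2, 1.0]))
# -> ['run', 'run', 'run', 'run', 'tumble', 'tumble']
```

Because the comparison is always against the recent past, the same code behaves identically whether the baseline is 1,000 molecules or a trillion, which is the essence of adaptation.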
One of the most prevalent ways a bacterium makes these decisions is by using a two-component system, or TCS. TCSs are relatively simple compared to chemotaxis. As you would suspect from the name, TCSs are pathways consisting of only two protein members: a sensor histidine kinase and a response regulator. Histidine kinases are a major protein family in bacteria because they are able to sense many different factors in the bacterium's environment, including nutrients, toxins, fellow bacteria, and more. In case you are wondering, chemotaxis is a modified form of a TCS in which the histidine kinase CheA is regulated by the activity of a separate protein, the methyl-accepting chemotaxis protein.

What if you are a bacterium and you have been using a certain carbon source to generate energy, and suddenly that carbon source isn't as plentiful? In this case, you would want to shut down the enzyme factories that were converting the previous food source into energy and begin preparing new enzyme factories to convert other food sources into energy as you prepare for starvation. If these conditions persist, you might decide to hibernate in the form of a spore or cyst until conditions around you improve. Or, if other food sources are sensed in the environment, any special enzymes needed to convert them into energy would have to be synthesized from their respective genes. All of these scenarios are controlled by TCSs. The conditions are used as input for the cell to decide the best strategy to survive and thrive.

Histidine kinase activation leads to a hand-off event: a phosphoryl group passes from the kinase to the response regulator and acts as a green light for the response regulator to proceed with its job. That job may be to turn on gene expression to produce proteins needed in the cell, or to shut down gene expression for proteins no longer needed. It is a carefully orchestrated balancing act, evolved over millions of years, that ensures only the proteins and enzymes needed by the cell at a given time are present, so that highly valuable energy molecules are not wasted.
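Here is that flow as a cartoon in code, a sketch of my own making: real TCSs are graded, reversible biochemistry, not boolean logic, and the class and names below are invented for illustration.

```python
class TwoComponentSystem:
    """Minimal sketch of a two-component system (TCS).

    A sensor histidine kinase autophosphorylates when its input signal
    is present, then hands the phosphoryl group to its response
    regulator, which flips target gene expression on or off.
    """

    def __init__(self, activates_genes=True):
        self.kinase_phosphorylated = False
        self.regulator_active = False
        # Some regulators activate their target genes, others repress them.
        self.activates_genes = activates_genes

    def sense(self, signal_present):
        # Kinase autophosphorylation follows the input signal.
        self.kinase_phosphorylated = signal_present
        # Phosphotransfer: the "green light" handed to the regulator.
        self.regulator_active = self.kinase_phosphorylated

    def gene_expression(self):
        on = self.regulator_active == self.activates_genes
        return "ON" if on else "OFF"

tcs = TwoComponentSystem()
for signal in [False, True, False]:
    tcs.sense(signal)
    print(signal, tcs.gene_expression())
```

The point is the division of labor: one protein senses, the other acts, and the phosphoryl hand-off is the only wire between them.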
Getting the message
Second messengers are common from bacteria to humans. The major second messenger we all learn about in biochemistry class is cyclic AMP (cAMP). However, bacteria use several nucleotides as second messengers. Many are used as determinants in the decision-making process, but one of the most recently discovered (and a personal favorite) is cyclic-di-GMP, or c-di-GMP. Bacteria are constantly processing signals both inside and outside their cell membranes. It is hard to believe that one of the most abundant response molecules was only discovered in the late 1980s, while researchers were studying how a certain species, Acetobacter xylinum (now known as Gluconacetobacter xylinus), produced cellulose. Almost by accident, the Benziman lab discovered that the enzyme responsible for cellulose production (cellulose synthase) was regulated by a nucleotide, later found to be c-di-GMP. Since that discovery, c-di-GMP has become a hot topic among microbiologists and immunologists because of the decisions bacteria make as the level of c-di-GMP changes within the cell.

As I learned it, the concentration of c-di-GMP has predictable outcomes for the decisions of bacteria. High levels lead to loss of motility, an increase in biofilm formation, changes in cell morphology, and an increase in cell-cell communication. When low levels of c-di-GMP are present, the cell decides to move around (motility), becomes resistant to heavy metals, and, most importantly, becomes virulent. For example, Vibrio cholerae, the bad guy responsible for cholera, only decides to move around and produce cholera toxin when c-di-GMP levels in the cell are low. If levels increase, V. cholerae will produce biofilms via extracellular polysaccharide (EPS) production. You might be asking yourself what controls the c-di-GMP levels of a bacterial cell. The initial discovery in the Benziman lab also identified the enzymes responsible for making and breaking the second messenger.

The long (and short) names are: diguanylate cyclases (DGCs, aka GGDEF proteins), which make c-di-GMP from two GTP molecules, and phosphodiesterases (PDEs, aka EAL proteins), which degrade it. GGDEF and EAL proteins are so called because of amino acids important for their functions: GGDEF is glycine-glycine-aspartate-glutamate-phenylalanine, and EAL is glutamate-alanine-leucine. These enzymatic activities are usually controlled by regulatory protein domains common in bacteria (and humans). Signals from the environment (internal or external) can trigger changes in the enzyme activity of GGDEFs and EALs, thus changing the cellular concentration of c-di-GMP. This mechanism is well understood after 30 years of research. However, what happens next is still essentially unknown. Cyclic-di-GMP levels rise within a bacterial cell. Now what? It was known early on that c-di-GMP itself could interact with GGDEFs to inhibit their activity. But what other proteins interact with c-di-GMP and help these bacteria decide to make major lifestyle changes? It wasn't until 2006 that bioinformaticians predicted c-di-GMP binding to a protein, or protein domain. PilZ, an obscure protein of unknown function but necessary for Type IV pilus motility, was hypothesized to bind c-di-GMP. By the end of 2007, this prediction was verified, and PilZ domain proteins were the first shown to link c-di-GMP to downstream proteins in pathways. Transitioning from a free-swimming cell to life in a biofilm community is a major lifestyle change for bacteria. This decision takes commitment, and it is initiated by a small molecule.

Mystery of a mysterious kind
The decisions bacteria make hinge heavily on the amount of this molecule found within the cell. However, how a bacterial cell knows how much c-di-GMP it contains ultimately remains a mystery. The focus of early research was on finding what regulated the synthesis and degradation of c-di-GMP, so the majority of publications in print focus on the enzymes that perform these functions, the GGDEFs and EALs. Also, by deleting certain GGDEFs or EALs from a bacterium, scientists were able to determine what effect the change had on the cell's decision making and lifestyle. Most research was performed in medically relevant species like Vibrio and Pseudomonas. This narrow focus has led to a neatly defined role for c-di-GMP in the cell that may not be absolute. I digress…

A cell produces c-di-GMP in response to some environmental signal. Now what? Great question; one that is still not answered. Various proteins have been shown to interact with or bind c-di-GMP, including PilZ domain proteins. The list of c-di-GMP effectors has grown slightly over the past few years to include examples of transcriptional regulators in both Vibrio and Pseudomonas (VpsT and FleQ, respectively). Transcriptional regulators are proteins that help carry out the decisions made by a cell by regulating gene transcription. However, these are only two examples from two bacteria. What about the vast number of other bacteria out there? How do they “see” c-di-GMP? Cyclic-di-GMP is a vital component of bacterial decision making, even though how it is seen by a cell remains a huge unknown.
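A toy way to picture the push and pull of DGCs and PDEs is a simple production/degradation balance; the sketch below is my own illustration, and all rates and the decision threshold are made up (real cells are far messier, with many GGDEF/EAL proteins acting in parallel).

```python
def cdigmp_level(dgc_rate, pde_rate, c=0.0, dt=0.1, steps=200):
    """Toy mass-action model: DGCs (GGDEF) synthesize c-di-GMP at a
    constant rate; PDEs (EAL) degrade it in proportion to its level.
    The level settles at roughly dgc_rate / pde_rate."""
    for _ in range(steps):
        c += (dgc_rate - pde_rate * c) * dt
    return c

def lifestyle(c, threshold=1.0):
    """Caricature of the decision rule described above:
    high c-di-GMP favors biofilm, low favors motility (and virulence)."""
    return "biofilm" if c > threshold else "motile (and possibly virulent)"

for dgc, pde in [(0.5, 1.0), (2.0, 1.0)]:
    c = cdigmp_level(dgc, pde)
    print(f"DGC={dgc}, PDE={pde} -> c-di-GMP ~ {c:.2f} -> {lifestyle(c)}")
```

Tipping the balance of enzyme activities, which is what environmental signals do, is enough to flip the lifestyle call.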
My hypotheses and speculations (with some evidence)
In the last chapter of my dissertation (under embargo), I investigated bioinformatically what other protein domains could potentially bind c-di-GMP. Using my methods, I could predict proteins in the nonredundant database that could potentially bind c-di-GMP. One group of proteins I found were those already shown to bind other nucleotides, like ATP. I was able to test my method against a publication that used biochemistry and proteomics to identify c-di-GMP-binding proteins from Pseudomonas. That crude “chemical proteomic” approach identified around 200 potential binding proteins, and the method I created found several of the same proteins without the exhaustive time and effort of “wet bench” experiments. This is not a post about how good I am at science. This is a post about using new and different methods to answer questions within science, not unlike the investigation that identified the PilZ domain as a c-di-GMP-binding protein in the first place. Unfortunately, my time in the lab was over before I could test my hypotheses, but my curiosity and passion live on. I will say that I predict receiver domains are very common c-di-GMP-binding effectors and will be the next major discovery in this elusive mystery of how cells use c-di-GMP to make decisions.

It takes a village
The marvel of single-celled organisms is that they are able to integrate all kinds of stimuli and make one grand decision that affects how they proceed. Bacteria do in one cell what we as humans do with billions. But do bacteria have the ability to think as a group or community? The answer is absolutely: it is called quorum sensing. The pioneer of this research is Bonnie Bassler of Princeton University, and listening to her tell the story of the curiosity she felt when observing how and why a certain group of bacteria emitted light (bioluminescence) is a treat. Through her investigation of an unassuming bacterium, Vibrio harveyi, she opened up a whole new field of microbiology.

Many bacteria synthesize signaling molecules that serve as messages to other bacteria saying, “I am here.” Since bacteria don't have senses we are familiar with, like sight and hearing, these signaling molecules tell other bacteria who is around. When there aren't many bacteria sending out the signal, no big decisions are made. However, when enough bacteria are present to tell all the village members the approximate population, the whole village acts together to make a committed decision. In the case of V. harveyi it is the production of a light-emitting molecule, but for other bacterial species it may be the activation of pathogenicity. From the perspective of the bacterium, you don't want to decide alone to make a big commitment like invading another organism. By taking a bacterial census through quorum sensing, these bacteria make an educated decision only when their population is high enough to make an impact. For some species, this critical number may be less than ten; in other cases, the population needs to be in the millions. I think bacteria can teach us a very important lesson via quorum sensing: don't go it alone. It takes a village.
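The census logic itself is almost trivially simple, which is part of its elegance. A one-function sketch (mine; the signal units and threshold are invented, and real quorum sensing involves receptor binding and gene-regulatory feedback rather than a bare count):

```python
def quorum_decision(population, signal_per_cell=1.0, threshold=1_000_000):
    """Each cell secretes a fixed dose of autoinducer; the pooled signal
    is effectively a census. Only above the quorum threshold does the
    group commit to a collective behavior (light, pathogenicity, ...)."""
    total_signal = population * signal_per_cell
    return "commit (e.g., make light)" if total_signal >= threshold else "wait"

for n in [10, 10_000, 2_000_000]:
    print(f"{n:>9} cells -> {quorum_decision(n)}")
```

Change the threshold and you change the species' temperament: some commit with a handful of neighbors, others wait for millions.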
Most bites from disease-carrying ticks are known to cause fever and other symptoms, including headaches, queasiness, vomiting and muscular pains, states WebMD. These flu-like symptoms may appear anywhere from one day to three weeks after a tick bite. Individuals who develop the symptoms commonly associated with tick-borne illnesses are advised to seek professional help.

Ticks belong to a group of small parasitic arachnids that latch onto the skin of humans and other animals to feed on the blood of their hosts. Although bites from these bloodsuckers are generally harmless, some ticks can transmit dangerous and potentially fatal illnesses, including Lyme disease, Rocky Mountain spotted fever, Colorado tick fever, relapsing fever and ehrlichiosis.

Fever associated with tick-borne diseases is often accompanied by aches and physical discomfort, chills and even rashes. The onset of the fever and the degree to which body temperature rises may vary from patient to patient, notes the United States Centers for Disease Control and Prevention. In Lyme disease, a rash may develop prior to the onset of fever, typically within 3 to 30 days after a tick bite. In Rocky Mountain spotted fever, a rash usually appears within 2 to 5 days after the patient becomes feverish. A rash in an individual who develops a fever after a tick bite may also indicate ehrlichiosis.
Time is the foundation of geology. Actually, if you think about it, the geologic time scale of Earth's history is almost unimaginable to us. This is because human lifespans are so short in comparison. We work in hours, days, months and years, but the Earth works in thousands, millions and billions of years. Like a geologic calendar, geologists divide time into units. From longest to shortest, they are eons, eras, periods, epochs and ages. So timing is everything when it comes to the geologic time scale.

The concept of time is very long in geology
First, Earth's age is approximately 4.5 billion years. This is why we use billions, millions and thousands of years. For time markers in geologic time, we typically use abbreviations like ‘Ga' (giga-annum), ‘Ma' (mega-annum) and ‘ka' (kilo-annum).
- ‘Ga' or ‘Gya' (billion) is 1,000,000,000 years ago
- ‘Ma' or ‘Mya' (million) is 1,000,000 years ago
- ‘ka' or ‘kya' (thousand) is 1,000 years ago
For example, 2.5 Ga refers to 2.5 billion years ago. Because Earth is about 4.5 billion years old, Earth would have been about 2 billion years old at that time. So instead of working in days, months and years, geologists work in millions and billions of years. Then we subdivide these long stretches of time into eons, eras, periods, epochs and ages. Like a geologic calendar, they chronologically order units of time into a geologic time scale, and each division of time marks a prominent event or characteristic feature in the rock record.

Eons > Eras > Periods > Epochs > Ages
Eons are the longest division of geologic time. Generally, we measure eons in billions of years ago (Ga) and millions of years ago (Ma). Geologists divide the lifespan of Earth into a total of 4 eons. From origin to now, Earth's 4 eons are the Hadean, Archean, Proterozoic and Phanerozoic. The Hadean, Archean and Proterozoic eons are often grouped together informally as the Precambrian.

Eras are divisions of geologic time shorter than eons but longer than periods. In terms of geochronological units, there are 10 defined eras, which generally span several hundred million years each. For example, the Paleozoic, Mesozoic and Cenozoic eras fall within the Phanerozoic Eon.

There are 22 defined periods. Periods are divisions of geologic time longer than epochs but shorter than eras; each spans roughly tens of millions to one hundred million years. Next, there are 34 defined epochs, which generally last for tens of millions of years. The geologic time scale consists of periods that we break down into smaller epochs.

Epochs are then divided into ages, the shortest division of geologic time. There are 99 defined ages, which can stretch over millions of years and capture relatively minor differences between adjacent units. Some geologists divide ages even further; in that case, chrons are the smallest working geochronological unit, though they are less commonly used.
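To make the Ga/Ma/ka shorthand from the section above concrete, here is a small helper of my own (an illustration, not part of any geology toolkit) that formats a span of years the way geologists abbreviate it:

```python
def geo_age(years: float) -> str:
    """Express a span of years in geologists' shorthand: Ga, Ma or ka."""
    if years >= 1e9:
        return f"{years / 1e9:g} Ga"
    if years >= 1e6:
        return f"{years / 1e6:g} Ma"
    if years >= 1e3:
        return f"{years / 1e3:g} ka"
    return f"{years:g} years"

age_of_earth = 4.5e9
event = 2.5e9                         # the "2.5 Ga" example above
print(geo_age(event))                 # -> 2.5 Ga (time before present)
print(geo_age(age_of_earth - event))  # -> 2 Ga (Earth's age at that time)
```

The same arithmetic underlies every date on the time scale: an age "before present" and an elapsed age since Earth formed always sum to about 4.5 Ga.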
The Triassic period geologic time scale example
Let's put what we know into practice with the Triassic period, which lasted about 50 million years. The Triassic has a well-defined start and endpoint because it began and concluded with catastrophic mass extinctions. It started 252 million years ago, immediately after the largest extinction event in Earth's history, known as the “Great Dying” because it killed 96% of all marine species and an estimated 70% of land species. It then ended abruptly 201.3 million years ago with the less severe Triassic–Jurassic extinction event.

The Triassic period falls within the Mesozoic Era, and the Mesozoic is part of the Phanerozoic Eon (about 542 million years ago to now), which is notable for its abundant fossil record. Everything before the Phanerozoic belongs to the Precambrian, when organisms lacked hard body parts. The Triassic period has 3 epochs and 7 ages, and each of these shorter divisions of time marks a notable event or characteristic feature in the rock record.

International Commission on Stratigraphy (ICS)
The role of the International Commission on Stratigraphy (ICS) is to define global units of the geologic time scale. The ICS is the governing body that chronologically orders Earth's history into eons, eras, periods, epochs and ages in the International Chronostratigraphic Chart. From signatures recorded in rocks, geologists can work out the true age of the Earth. By tracing fossilized organisms embedded in rock strata, we can place events in their context in time. Because we can date rocks through stratigraphy, we gain a better understanding of geological events in Earth's history. And based on the geologic time scale, we can better understand the theory of evolution and the origin of life on our planet.

Tick, tock, fellow geologists
Never stop learning about Earth's history. You now have a good understanding that time is the foundation of geology, and a solid background in how geologists break time into geological units. Put these concepts into practice by digging further into Earth's 4.5-billion-year history, from the oldest rocks to the fossil record.
On the contrary, black cats have also been seen as symbols of good fortune. In ancient Egypt, cats were sacred, and black cats particularly so. Bastet, Egypt's goddess of fertility and abundance, was depicted with the face of a cat, and seeing a black cat was regarded as a sign of good luck. Many cat bones have been found in Egyptian tombs estimated to be between 4,000 and 5,700 years old, and the careful burials show how respectfully cats were treated.

Ancient Egypt encouraged cat keeping at the national level in order to catch the rats that threatened agriculture. Cats were regarded as deputies of the gods, and anyone other than the pharaoh, the god's messenger, could be put to death for killing a cat. It is said that the various names for the cat that have come down to us derive from Bast's “Uzat”. There is also a story that the Persians, at war with Egypt, drove live cats before their shields; the Egyptian army, unwilling to harm the sacred animals, was thrown into confusion and lost the battle. In ancient Egypt it was also forbidden to carry cats out of the country. Because gods such as Bastet took feline form, cats were mummified and given funerals like people, and cat mummies are among the artifacts excavated in Egypt.