Circuit Slapper: A game for motor learning
This video demonstrates the use of simple materials (sponge, plastic rod, and aluminum tape) and a circuit board to create a circuit slapper. The child maintains a grip with a pronated wrist while modulating timing and speed to jump in the game. Hitting the two strips of aluminum tape on the table completes the circuit and the player in the game jumps. This game could be adjusted to meet other goals: the size of the rod could be changed to work on grip, the weight of the held object could vary, another game could require more visual scanning or more than one button to coordinate movement, and the two strips of aluminum tape could be moved from the table to a wall to encourage external rotation, or placed on the side of neglect.
Gamma-Ray Bursts hint at birth of massive neutron stars
New analysis of Gamma-Ray Bursts highlights evidence of the formation of massive neutron stars prior to their collapse into black holes. Astrophysicists from the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav) at Monash University have analysed 72 short-duration Gamma-Ray Bursts (GRBs) observed by NASA’s Neil Gehrels Swift Satellite and found that in 18 cases the object resulting from the merger was a massive neutron star, which only later collapsed into a black hole. By looking at the data from these 18 cases, researchers have been able to describe the physical properties of neutron stars, with results indicating that these neutron stars are consistent both with a freely-moving ‘quark’ composition and with a composition like regular matter, i.e. one built of atomic nuclei, the building blocks of the Universe. Quarks are the elementary particles that make up protons and neutrons, and hence atomic nuclei. In regular matter, these quarks are confined inside protons and neutrons, but in the high-density and high-temperature regimes seen in neutron stars, they may move around freely. “Our observations show a slight preference for freely-moving quarks. We look forward to getting more observations to definitively solve this puzzle,” said OzGrav Ph.D. student Nikhil Sarin, the lead author on the paper. Gravitational Waves, GRBs and X-Rays In August 2017, gravitational-wave interferometers detected a signal corresponding to the merger of two neutron stars – the first detection of its kind. Scientists had long suspected that these merger events would also produce an electromagnetic (EM) counterpart signal – and 1.7 seconds after the event, telescopes around the globe registered the GRB from the resulting kilonova. It was from the EM light signature of this event that scientists were able to determine that heavier elements like gold, platinum and strontium are created in these violent mergers – and the event also provided the most accurate measurement yet of the speed of gravity. However, astrophysicists wanted to understand what the result of such mergers would be – would they create a black hole (as expected) or something more exotic? Sarin, along with OzGrav colleagues Paul Lasky and Gregory Ashton, set out to explore this by reviewing the properties of GRBs. When a short-duration GRB occurs, it usually includes a lower, broadband emission – the result of the jet from the kilonova colliding with the surrounding medium. The X-ray signals within the GRBs often include two features that are not explained by this lower, broadband emission – a plateau and a steep decay in the signal occurring long after the event. It is these two features within the X-ray data that indicate that the result of the merger is not an immediate black hole, but rather a long-lived, rapidly rotating, highly magnetized neutron star. The steep decay itself is the sign of the neutron star collapsing into a black hole. When these new super-sized neutron stars are born, they are above a non-rotating limit known as the Tolman-Oppenheimer-Volkoff mass – and as they spin down and lose their centrifugal support, gravity takes hold, causing the object to collapse in on itself into a black hole.
Another outcome of the findings is that just before these super-sized neutron stars collapse under their own gravity into black holes, they unleash tiny gravitational-wave signals – so small that they are outside the range of current detectors like LIGO. “With the construction of more sensitive gravitational-wave detectors, such as the Einstein Telescope in Europe or the Cosmic Explorer in the US, we are confident that we’ll eventually detect individual gravitational waves from these systems,” explained Sarin. The new research has shown that data from the X-ray afterglow of kilonova events can be used to constrain what makes up the inside of neutron stars – among the densest objects in the universe (a single teaspoon would weigh as much as a cube of Earth with sides 800 m in length). Combining this data with the gravitational-wave signals from such an event can provide scientists with further information about these violent mergers, where so many of the elements we find here on Earth were once forged. The paper is currently published on arXiv.
FYI: What’s The Darkest Material On Earth? This material absorbs 99.970 percent of light, making it an ideal coating for solar panels. The idea of dark materials might sound familiar to you if you read fantasy trilogies or like casually memorizing lines from Paradise Lost. Unfortunately, this material isn’t used to create more worlds–but it might help save this one. Vertically aligned carbon nanotubes (VACNT), the darkest material known, were developed by researchers at Rensselaer Polytechnic Institute (RPI) in 2007. With the ability to absorb 99.970 percent of light, VACNT has significant implications for solar energy research. For instance, it can be used to improve the efficiency of solar panels. Rensselaer’s researchers aren’t the only ones attempting to produce ultra-dark materials. They’ve been in a quasi-competition with NASA, which developed a material also made of carbon nanotubes and created using the same process. But at only 99.5 percent absorption, it is not quite as dark as Rensselaer’s VACNT. Into The Nanoforest Why a race for dark materials at all? Isn’t regular black paint dark enough to absorb all the colors of light? Conventional black paint and graphite absorb most visible light but reflect a significant amount at the dielectric interface with air–a moderate reflection of 5 to 10 percent. Researchers found that they could create a super-black object by growing long, low-density nanostructures with deep pores, ordered in arrays. In the static electron micrograph, the material looks almost like a forest. While scientists have not yet achieved near-zero reflection, RPI’s dark material–and future, better versions of it–can be used for solar energy conversion and pyroelectric detectors. Since the material absorbs light, it could also be used in cooling applications. No wonder there’s a race to perfect it–in a warming world, it could be pretty darn useful. Have a burning science question you’d like to see answered in our FYI section? Email it to [email protected].
What is lactose? Lactose is a natural sugar that is present in cow’s milk and breast milk. Lactose is obtained by a technical process (ultra-filtration, crystallisation and drying) from the whey of cow’s milk. This technology enables the production of a highly pure product. As a sugar, lactose belongs to the carbohydrates group, as do fructose and glucose, for example. Lactose consists of two different sugar molecules, one molecule being galactose and the other being glucose. Compared with conventional household sugar, lactose is broken down relatively slowly and can therefore reach deeper areas of the intestines, where it has a positive effect. Lactose gently regulates digestion. Lactose is partially fermented in the intestines. The resulting lactic acid ensures healthy intestinal flora and simultaneously stimulates intestinal movement. Furthermore, lactose also promotes the uptake of calcium and zinc from the intestine. Lactose and BSE According to today’s knowledge, transmission of the BSE pathogen through milk has never been demonstrated. There is therefore no need to worry about consuming milk or milk products.
What does it do?

The Math program demonstrates literal (constant) and register-based (variable) math instructions. During this activity, you will learn about these microcontroller instructions:

- addlw ('add literal to W') - adds a constant specified in the program to the W register.
- sublw ('subtract W from literal') - subtracts the contents of the W register from a constant supplied by the program.
- addwf ('add W to file register') - adds the contents of the W register to the contents of a file register. The result of this addition can be left in the W register or in the file (RAM) register by appending W or F to the instruction, respectively.
- subwf ('subtract W from file register') - subtracts the contents of the W register from the contents of a file register. As in addwf, above, the result can be stored in W or the file register by appending W or F to the instruction.

Math programming activity

The Math program demonstrates the three primary ways of performing addition and subtraction operations in mid-range PIC microcontrollers, as well as introducing the functions of the status register bits Z, DC and C. This program also demonstrates the use of the equate directive to reference a RAM register.

What you should know before starting

Microcontroller related information

The only two types of mathematical operations directly supported in mid-range PIC microcontrollers are addition and subtraction. Each math operation can be performed using constant or variable data, resulting in a total of four available math instructions: addlw, addwf, sublw and subwf. The internal W register (see the simplified PIC16F886 diagram) is used during all math operations in the mid-range PICmicro microcontrollers. One of the numbers being added or subtracted needs to be loaded into W before the math operation can proceed. This first number can be a constant supplied by the program, or a variable quantity stored in the RAM file registers, and is simply moved into W before the math operation. Then, the math instructions perform their operations on the number already in W and a second number that can be supplied as a literal (constant) or the contents of a file register. Finally, math register operations can either store their result back in the register (thus over-writing it), or in W.

Status register bits

The Count program used the Z bit in the Status register to check for a zero result in the TMR0 timer register. The Z (Zero) bit, as well as the DC (Digit Carry) and C (Carry) bits of the Status register, are all affected by all math operations. During simulation, the state of the Z, DC, and C bits is shown in the bottom status bar of the MPLAB IDE. Their state is represented by letter case: an upper-case letter means the bit is set, and a lower-case letter means the bit is clear. For example, seeing Z dc C indicates that the result of the operation is zero, no digit carry occurred, and a carry has taken place.
Each Status register bit corresponds to specific states:

- a zero result of any addition or subtraction will set the Z bit; a non-zero result leaves it cleared;
- an addition overflow (adding two 8-bit numbers to produce a 9-bit result) will set the C bit, effectively making it the 9th bit of the result;
- an addition overflow from the low nybble (the least significant 4 bits) to the high nybble (the most significant 4 bits) in a byte results in the DC flag being set;
- a subtraction operation will set the C bit before subtracting, and borrow from it (if necessary) during the subtraction: 1 means no borrow occurred, 0 indicates a borrow took place.

Using registers as variables

The programs we have created up to this point have used some of the pre-defined registers in the PIC16F886 microcontroller. Looking at the simplified PIC16F886 diagram, we can see that there are a total of 368 RAM addresses available for use in the PIC16F886, with the first free memory register residing at address 20h in bank 0. Memory registers can be labelled with variable names, which are assigned to memory addresses using equate directives. As with earlier equate examples, the equate statement has no inherent knowledge of what is being equated: bits, bytes, or addresses. Equates simply allow a numeric value to be substituted for the label name later in the program.

To use this program you will need: an assembled CHRP 3 board, an optional power supply, a programmer and/or programming cable, and a computer with the MPLAB IDE or MPLAB X software as described in the Output activity.

Create the program

The entire MATH.ASM program is shown below. Create a Math project in MPLAB, copy this code into it, and build the program.

;Math v3.1  January 14, 2013
;===============================================================================
;Description: Demonstrates math between constants, constants and registers,
;             and registers.

;Configure MPLAB and the microcontroller.

        include "p16f886.inc"   ;Include processor definitions
        __config _CONFIG1, _DEBUG_OFF & _LVP_OFF & _FCMEN_OFF & _IESO_OFF & _BOR_OFF & _CPD_OFF & _CP_OFF & _MCLRE_ON & _PWRTE_ON & _WDT_OFF & _INTOSCIO
        __config _CONFIG2, _WRT_OFF & _BOR40V

;Set RAM register equates.

num1    equ     20h             ;RAM storage register for the first number
num2    equ     21h             ;RAM storage register for the second number

;Start the program at the reset vector.

        org     00h             ;Reset vector - start of program memory
        clrf    PORTA           ;Clear all port outputs before configuring
        clrf    PORTB           ;port TRIS registers. Clearing RA4 turns on
        clrf    PORTC           ;the Run LED when TRISA is initialized.
        goto    initPorts       ;Jump to initialize routine

        org     05h             ;Continue program after the interrupt vector

initPorts                       ;Configures PORTA and PORTB for digital I/O.
        banksel ANSEL           ;Switch register banks
        clrf    ANSEL           ;Set all PORTA pins to digital
        clrf    ANSELH          ;Set all PORTB pins to digital
        movlw   01010111b       ;Enable Port B pull-ups, TMR0 internal
        movwf   OPTION_REG      ;clock, and 256 prescaler
        banksel TRISA           ;Switch register banks
        movlw   00101111b       ;Set piezo and LED pins as outputs and
        movwf   TRISA           ;all other PORTA pins as inputs
        clrf    TRISB           ;Set all PORTB pins as outputs for LEDs
        banksel PORTA           ;Return to register bank 0

main                            ;Preload the number registers for later use
        movlw   6               ;Save these constants in the RAM registers
        movwf   num1
        movlw   4
        movwf   num2

conCon                          ;Demonstrate math operations using two constants.
        movlw   7               ;Load W with the first constant and
        addlw   5               ;add the second constant to W

        movlw   4               ;Load W with the first constant and
        sublw   13              ;subtract it from the second constant

regCon                          ;Demonstrate math operations between a constant
                                ;and the contents of a file register.
        movf    num1,W          ;Copy the contents of num1 into W and
        addlw   8               ;add a constant to W

        movlw   2               ;Load W with a constant and
        addwf   num1,W          ;add the contents of num1, keeping the
                                ;result in W
        addwf   num1,f          ;Now, add contents of num1 to W again,
                                ;overwriting the contents of num1

regReg                          ;Demonstrate math operations between the
                                ;contents of two file registers.
        movf    num1,W          ;Copy the contents of num1 into W and
        addwf   num2,W          ;add contents of num2 to W

        movf    num1,W          ;Copy the contents of num1 to W again
        addwf   num2,f          ;and add to num2, overwriting num2

        sleep                   ;Stop at end of program

        end

You won't need to download this program to your CHRP board—just run it in the MPLAB Sim debugger. Set up a watch window to watch the contents of the W register and file registers 20h and 21h. Step through the program and observe the contents of the registers after each step. Also observe the contents of the Status register bits: Z, DC and C.

How the program works

Really? Hopefully you got to this section by simulating and reading along, and didn't jump here to find out how the program really works. After having built and simulated the program, you should have been able to follow the code to figure out its exact sequence of operations. There really are not any complex operations taking place. The only really important thing to keep in mind when doing register math is determining where you would like the answer to be. Beware that specifying a file register destination (as in addwf num1,f) over-writes the original number in the register. Using ,W as the destination leaves the contents of the file register untouched.

Test your knowledge

- Add two numbers that produce a sum equal to or less than 255. What is the state of the Z, DC and C bits in the Status register?
- Add two numbers that produce a sum equal to 256. What is the state of the Z, DC and C bits this time?
- Add two numbers that produce a sum greater than 256. What is the state of the Z, DC and C bits after this addition?

Apply your skills

The Math program demonstrates the use of the sublw operation, but not subwf. Modify the regCon and regReg subroutines to perform subtraction instead of addition.

- Sublw performs the subtraction l-W (literal - W). Does subwf do the subtraction W-f, or f-W? How do you know?
- Set up your program so that one subtraction produces a zero result, and a second subtraction produces a negative result. What is the state of the Status register's Z, DC and C bits after each subtraction?
- Bonus: Sometimes there is a need to add 16-bit numbers, in which two 8-bit registers store the high byte and low byte of the 16-bit word. Adding two low bytes may produce a carry, as might adding two high bytes. Create a program that adds two 16-bit numbers and stores its answer in a 24-bit result register.
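As an aside (a sketch, not part of the original activity), the Status bits introduced above can steer program flow with the bit-test instructions. The snippet below assumes the same p16f886.inc definitions as MATH.ASM; the overflow and noOverflow labels are hypothetical placeholders for routines defined elsewhere:

        movf    num1,W          ;Copy the first number into W
        addwf   num2,W          ;Add the second number, leaving the sum in W
        btfsc   STATUS,C        ;Bit test C: skip the next instruction if C is clear
        goto    overflow        ;C is set - the sum exceeded 255
        goto    noOverflow      ;C is clear - the sum fits in 8 bits

btfsc ('bit test file, skip if clear') and its counterpart btfss ('bit test file, skip if set') are the usual way to act on the Z, DC and C flags after a math instruction.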
What is Montessori? Montessori education builds a child’s capability to become a fulfilled and productive adult able to contribute to the world—at home, at work, and in their community. Maria Montessori’s observation of human development from birth to adulthood led to an education approach that supports children’s natural development, providing the skills and support to reach their full potential in life. With a strong emotional, behavioural, and moral foundation, children become motivated, active, and independent learners who are prepared for the real world. The Montessori approach provides children with enduring intellectual capabilities, achieved through the framework of social and emotional learning. Academics and knowledge-building are key qualities of Montessori, as is the ability to think creatively and understand the needs of others. When these fundamental skills are fostered early in life, children gain the capability to problem-solve, persevere, and interact well with others in any circumstance. Unlike traditional classrooms, Montessori learning environments are designed to fit the specific needs of each child’s stage of development. Learning is all about the activity and independence of the child in finding out what they need at each particular moment. Authentic Montessori environments encompass the following principles: - Mixed age groups, which not only offer a wide range of activities to spark children’s interest but also enable children to learn from others and learn by helping others. - Freedom for children to work at their own pace, without interruption, choosing from a range of activities that are developmentally challenging and appropriate. - Exploration is encouraged so that children find things out for themselves, make mistakes and correct them independently. - Respect for each child as an individual personality with unique talents. - Respect for others and the community. - Montessori teachers who are experts in child development, guiding children to learn independently and reach their unique potential. Educateurs sans Frontières Educateurs sans Frontières (EsF), a division of AMI, functions as a social movement that strives to promote the rights of the child throughout the world, irrespective of race, religion, and political and social beliefs. EsF is committed to transcending borders in order to serve children through innovative educational initiatives using Montessori principles and practices. EsF initiatives in Kenya include the Corner of Hope schools and teacher programme in Nakuru and the Montessori Samburu nomadic schooling programme for the pastoral Samburu community in the Namunyak Conservancy. Find out more at www.montessori-esf.org Aid to Life This initiative is founded on the idea that children develop optimally when they are brought up in an environment that supports their natural development, with an adult who understands how to connect them to positive activity and then allows them enough time to grow and develop according to their own pace and rhythm. It aims to give parents clear, simple, straightforward advice in a format that is easy to understand and apply. Find out more at www.aidtolife.org.
Con·duc·tiv·i·ty /kän dək ˈtivədē/ The measure of a material’s ability to transfer heat. Thermal image of heat loss at concrete balcony connection What is the effectiveness of structural thermal breaks at reducing heat loss? The thermal conductivity of a material is an important variable in determining the rate at which heat flows through that material. Heat flow also depends on area and temperature. Given the same boundary conditions of temperature difference across two materials, and the same area and thickness, the material with the higher thermal conductivity will transfer heat at a higher rate. Thermal breaks are characterized either by their thermal conductivity (k) or thermal resistance (R). The two values are related, so either value can be calculated from the other. To determine the effectiveness of a thermal break at reducing heat loss, a thermal model should be created of the detail within the building’s wall or roof assembly. The k or R values of all the materials in the assembly are required in the model. Why is modeling necessary? Two reasons: First, heat does not flow in parallel paths when highly conductive construction materials are combined in an assembly. If it did, we could use simple math and area-weighted averaging to determine heat flow through an assembly. Second, many interface and transition details are complex and involve corners or other features that make it difficult at best to calculate heat flow. Steel Z girts may occupy perhaps 10% of a building’s exterior wall surface, yet they can reduce the clear-field R value of a wall by as much as 50%. Balconies on a building may occupy 3% of the exterior wall surface, yet it has been shown that balconies can be responsible for as much as 30% of the heat loss in a wall assembly. Area weighting and parallel heat path assumptions will lead you down an inaccurate path. Thermal model of a typical masonry facade assembly using a shelf angle at the floor slab. Note the temperature differences at the angle and within the floor slab. The results of a thermal model will tell you: • The actual R value and U value of the assembly without a thermal break • The adjusted or effective R and U values with a thermal break applied • How much heat loss is due to the thermal bridging detail • How much the thermal break improves that heat loss • The surface temperatures of the materials in the assembly, which will indicate whether condensation is likely To be effective, a thermal break has to have a much, much lower thermal conductivity than the material it is “breaking.” Does thickness matter? In short, yes. For all materials, conductance is a function of thickness. Recall that heat flow is a function of area, temperature and thermal conductivity. Since the conductance of a material is a function of its thickness, both thickness and area are important in heat flow calculations for a thermal break. In some instances, using a thermal break that is too thin can have adverse results! Let’s look at a steel beam supporting a balcony. If we “break” the beam where it passes through the thermal envelope to incorporate a thermal break, we probably need to add an end plate on either side of the break in the beam to connect the thermal break. When we do this, we increase the contact surface area of the steel. Without a thermal break in place, these added plates would make the heat flow through the beam even worse.
With a thermal break, we need to acknowledge that the conductance of the thermal break material is a function of its thickness, and that the heat flow through the now thermally broken connection is a function of that conductance plus the area of the connection. Too thin, and the impact of the thermal break is lost due to the increase in area of the highly conductive steel. Modeling of several thermal break solutions has shown that the thickness should be at least 1” to achieve any significant reduction in heat loss. This of course does vary by application and assembly. In any connection design using a thermal break, the goal is to find the appropriate thickness/area combination that helps the wall or roof assembly meet the U value requirement based on climate zone and energy code. Image Credit: Morrison Hershfield BETBG, Appendix B, Catalogue Material Data Sheets (Version 1.3) Heat flow associated with steel purlins in a metal building roof. Note the cooler temperatures of the purlin interior surfaces due to thermal bridging. While a properly designed thermal break will considerably reduce heat flow, thermal breaks are also effective at keeping connection surfaces above the dew point. This is particularly important for connection details in buildings where higher than normal relative humidity values exist (hospitals or natatoriums) or in the Southern part of the United States, where humidity levels are higher. A secondary benefit of using a thermal break is to control material surface temperatures. Thermal bridging allows heat from the interior conditioned space to flow out toward the exterior of the enclosure. When the temperature difference is large, interior material surfaces cool. To prevent condensation forming in the thermal envelope, the surface temperatures of the materials within the envelope must be kept above the dew point temperature. A risk of condensation is possible if thermal bridges that pierce the envelope are not addressed by using a thermal break.
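To make the relationships above concrete (generic symbols only, not values from any particular assembly), steady-state conductive heat flow and the conductivity/resistance link can be written as

        Q = k · A · ΔT / L        and        R = L / k

where Q is the heat flow, k the thermal conductivity, A the contact area, ΔT the temperature difference, and L the material thickness. A material's conductance is k/L, so halving the thickness of a thermal break doubles its conductance; combined with the enlarged steel end-plate area A described earlier, this is why a break that is too thin can forfeit its benefit.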
Linear regression is useful when we want to predict the values of a variable from its relationship with other variables. There are two different types of linear regression models (simple linear regression and multiple linear regression). In predicting the price of a home, one factor to consider is the size of the home. The relationship between those two variables, price and size, is important, but there are other variables that factor into pricing a home: location, air quality, demographics, parking, and more. When making predictions for price, our dependent variable, we’ll want to use multiple independent variables. To do this, we’ll use Multiple Linear Regression. Multiple Linear Regression uses two or more independent variables to predict the values of the dependent variable. It is based on the following equation that we’ll explore later on:

y = b + m1·x1 + m2·x2 + ... + mn·xn

You’ll learn multiple linear regression by performing it on this dataset. It contains information about apartments in New York. Before we start digging into the StreetEasy data, add this line at the end of script.py: And then press run to see the graph! In this example, we used size (ft²) and building age (years) as independent variables to predict the rent ($). When we have two independent variables, we can create a linear regression plane. We can now guess what the rent is by plugging in the independent variables and finding where they lie on the plane.
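The StreetEasy dataset and the contents of script.py aren't reproduced in this excerpt, but a minimal sketch of the same idea using scikit-learn, with made-up apartment rows (size in ft², building age in years, rent in $), looks like this:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Made-up training data: each row is [size_sqft, building_age_yrs]
        X = np.array([[480, 15], [750, 5], [900, 30], [1200, 10]])
        y = np.array([2200, 3500, 3700, 5100])  # rent in dollars (illustrative)

        model = LinearRegression()
        model.fit(X, y)  # fits the plane rent = b + m1*size + m2*age

        print(model.intercept_, model.coef_)  # b, then [m1, m2]
        print(model.predict([[850, 12]]))     # predicted rent for a new apartment

With two independent variables, the fitted model is exactly the plane described above: plugging a size and an age into the equation returns the corresponding point on the plane, which is the predicted rent.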
International organisations such as the World Bank prefer to measure development using economic indicators. There are three main economic indicators which are used to give an indication of the overall economic health of a country. This post has primarily been written for students studying the Global Development option for A-level Sociology. Three Economic Indicators of Development - Gross Domestic Product (GDP) is the total economic value of goods and services (expressed in US dollars) produced within the borders of a country in the course of a year and available for consumption in the market place. - Gross National Product (GNP) is the same but includes the value of all goods and services produced at home and abroad. A country such as Ghana will have a relatively similar GDP to GNP because it doesn’t have many companies which produce things abroad: most production takes place within Ghana. America, on the other hand, which is where many Transnational Corporations are based, has a much higher GNP than GDP. Think about McDonald’s, for example: all of those Big Macs sold outside of the USA won’t appear in the GDP of the USA but will appear in the GNP. - Gross National Income (GNI): a hideous oversimplification of this is that it’s ‘Gross Domestic Product + the additional income that self-employed people pay themselves + income received from abroad’. This matters to a lot of developing countries which don’t produce much but have large diasporas, or populations living permanently abroad. Take Gambia for example (the country Paul Mendy takes your old toys to at Christmas): 1/6th of its GNI is money sent home by relatives living abroad, which would not be included in either GDP or GNP. You get slightly different country rankings if you use GNP or GDP rather than GNI. Don’t worry too much about the differences between the above – with a few exceptions* most developing countries tend to have similar GDPs, GNPs and GNIs. GDP and GNI per capita in India: *If you look at India’s Gross Domestic Product, it is the 6th richest country in the world, but if you look at its Gross National Income per capita, it falls to the mid-100s, due to its enormous population, but also due to the fact that it consumes a lot of the goods it produces itself, so it doesn’t export much and there’s not a lot of income coming into the country. ‘Per Capita’ and ‘Purchasing Power Parity’ - Gross National Product Per Capita – GDP/GNP figures are often divided by the total population of a country in order to provide a figure per head of population, known as GDP/GNP per capita. - The cost of living varies in different countries – so one dollar will buy you a lot more rice in India than it would in America. Purchasing Power Parity figures for GNI per capita factor in the cost of living, which is useful as it gives you more of an idea of the actual standard of living for the average person in that country. Gross National Income Per Capita This section provides a closer look at different levels of ‘development’ according to this particular economic indicator. Remember, global rankings will vary depending on whether you use GNI, GNP, or GDP. One measurement of development the World Bank uses is Gross National Income (GNI), which can be crudely defined as the total value of goods and services produced in a country in a year plus any income from abroad. If you divide GNI by the number of people in the country, you get the average amount of income per person, or GNI per capita.
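As a quick worked example with deliberately rounded, illustrative figures: a country with a GNI of $2,900 billion and a population of 67 million would have

        GNI per capita = GNI / population = $2,900 billion / 67 million ≈ $43,000

which is roughly the UK figure discussed below.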
GNI per capita is widely regarded as a good indicator of the general standard of living in a country, and it is a good starting point for giving us an idea of the extent of global inequalities between countries. For example, the United Kingdom has a GNI per capita of about $43,000, while India has a GNI per capita of about $1,600, making the UK figure more than 25 times greater. The World Bank’s map of countries by Gross National Income per capita is a useful, interactive resource for easily finding out how most countries fare by this indicator of development. The World Bank’s Four Income Categories The World Bank categorises countries into one of four categories based on Gross National Income per capita (per head): high, upper middle, lower middle and low income countries. - High income = $12,536 or more – about 60 countries, including most of Europe - Upper middle income = $4,046 – $12,535 – about 60 countries, includes South Africa and China - Lower middle income = $1,036 – $4,045 – about 50 countries, mostly in Africa, includes India - Low income = $1,035 or less – about 30 countries, mostly in Sub-Saharan Africa Comparing countries by GNI per Capita and total GDP Top ten countries – GNI per capita (source): - Isle of Man (UK) – 75,340 (2017) - Channel Islands (UK) – 66,230 (2007) Top ten countries by total Gross Domestic Product (source): - 2. China – 14,860,775 Question to consider: Why do you think the top ten countries are so different when judged by total GDP compared to GNI per capita? Evaluating the Usefulness of Economic Indicators of Development Three Advantages of using GDP/GNP/GNI as an indicator of development - GNI figures provide a snapshot indication of the huge difference between the more developed and less developed countries. In 2016, the GNP per capita in the UK was $43,000 while in India it was only $1,600. This means that there is over 25 times as much money per person in the UK compared to in India. - Gross National Income figures are also closely correlated with social development – generally speaking, the higher the GNI per capita, the better the education and health indicators in a country. - Total GDP figures give us an indication of who the most powerful nations on earth are in terms of military power. It’s not a perfect correlation, but the USA, China, Russia and the UK are all in the top ten for GDP and they are among the biggest arms producers and consumers in the world too. Four limitations of using GDP/GNP/GNI as indicators of development - Quality of life (social development) may be higher or lower than suggested by GNP per capita. - They don’t tell us about inequalities within countries. The USA has one of the highest GNPs in the world but some extreme poverty. - A lot of production in developing countries may not be included. For example, subsistence-based production is consumed locally in the community and not sold in the market place. Similarly, goods obtained illegally on the black market are not included in GNP measurement. - They are very western concepts, equating production and economic growth with development. Some countries may not want economic growth and have different goals (Bhutan, for example). The United States – economically developed but socially retarded? The USA is a good example of a country that demonstrates why we can’t rely on economic indicators alone to give us a valid indication of how developed a country is.
Despite ranking number 1 for total GDP, the USA does a lot worse on many social indicators of development – see this post, ‘The USA – an undeveloped country?’, for more details. - Define Gross National Income per capita and be able to identify some high income and lower income countries. - Explain the difference between GNI, GDP and GNP, and understand the significance of Purchasing Power Parity. - Outline three strengths of using economic indicators of development. - Outline at least three reasons why GNP may not be a valid measurement of ‘development’.
- Nothing is ever good enough - I feel so alone and lonely - I am a complete failure These are some examples of the words of shame. Shame is one of the most toxic and intense emotions we can experience. Shame is universal – we all have it. However, few people are willing or able to talk about it – and the less we talk about it, the worse it becomes. Shame is a silent epidemic which is psychological, social, and cultural. In order to deal with shame, we need to develop the language to talk about it before we can process our experiences in a meaningful way. This is why understanding shame and developing shame resilience is so important if we hope to live in a wholehearted way. Brown (2007) defined shame resilience as a person’s ability to recognize and understand shame, move through it constructively while maintaining a basic level of authenticity, and increase his or her level of courage, compassion, and connection, which are antidotes to shame and the basic elements of wholehearted living. At Midwest Center for Human Services, Shame Resilience Treatment is offered in both individual and group formats. Shame Resilience Treatment includes learning about and processing shame as a gradual, active process in order to foster the development of effective shame resilience skills. Shame is a universal, intensely painful feeling of unworthiness. However, in this treatment program, discussing shaming experiences, developing a vocabulary and definitions around topics of shame, and ultimately building shame resilience can have profound effects on your life. Many participants have expressed that their experiences within a shame resilience program have added to their relationships, reinforced healthy choices, given them insight to connect with others, and opened their awareness to authentic living.
From March, lockdowns across the globe saw streets become ghost-like relics of familiar, bustling centres of commerce. Travel restrictions grounded airplanes, train travel screeched to a halt and the sound of car engines was largely silenced. The peculiarities of this time are manifold and run deep. Seismologists have discovered that lockdowns across the world resulted in the longest and most pronounced quiet period of ‘seismic noise’ in recorded history. To detect earthquakes, geoscientists use seismometers to eavesdrop on the seismic noise emitted from tectonic shifts that ripple through the Earth. However, human activity on the surface also causes vibrations that propagate into the ground as high-frequency seismic waves. ‘Whether we drive our car, catch the train, or touch down on an airport runway, each of us contributes to anthropogenic seismic noise,’ explains Stephen Hicks, a seismologist at Imperial College London. The Royal Observatory of Belgium, along with five academic institutions including Imperial, has gathered seismic noise data from a global network of 268 seismic stations in 117 countries. During lockdowns, they recorded a global median reduction in seismic noise of up to 50 per cent – leading some researchers to coin the term ‘anthropause’. Predictably, the strongest reductions in seismic noise occurred in populated environments: a 50 per cent reduction was recorded as tourism was halted in Barbados and the Sri Lankan city of Kandy, while a 33 per cent reduction was seen in Brussels, where lockdowns were enforced from 18 March. The pronounced period of quiet has its uses. The muting of anthropogenic noise allows scientists to focus on natural tectonic signals. ‘Anthropogenic noise has always been an unwanted artefact on seismographs. Smaller signals from natural tectonic sources can get lost in noise resulting from the anthropogenic activity occurring at the surface,’ says Hicks. Large-magnitude earthquakes are generally accompanied by smaller signatures of tectonic sounds that are often obscured by the hubbub of anthropogenic activity. Lockdowns present the best opportunity to date for seismologists to pinpoint these small signatures. ‘If we can detect the signatures of smaller earthquakes that occurred during lockdown, we might be able to go back through the archives of seismograph data and find similar signatures that might, for example, accompany larger earthquakes,’ says Hicks. Discoveries are already being made. During lockdown, a magnitude-five earthquake occurred southwest of Petatlán, Mexico. Due to the reduction in anthropogenic noise, seismologists were able to hear its tectonic sound signatures more clearly. The newly identified signals could be used as templates to monitor tectonic unrest in the future, potentially improving earthquake-prediction methods. Hicks explains that lockdowns have provided a rare opportunity for his field. ‘With growing urban populations in tectonically active areas, such as Tokyo, San Francisco, or Santiago, anthropogenic noise is going to increase,’ he says. ‘With urbanisation, it’s becoming more important that we understand the small tectonic sound signatures, so that we can better forecast large-magnitude earthquakes. We’ve never really been able to quieten anthropogenic noise because we’ve never had a coherent shutdown – now, new avenues of research are opening up.’
In the past, when we were in school, we were taught that Pluto was the smallest planet in the solar system, and the ninth planet from the sun. However, as human beings continue to search for knowledge of the unknown, this theory is certainly outdated. Today, Pluto is called a “dwarf planet.” A dwarf planet orbits the sun just like other planets, but it is smaller. A dwarf planet is so small that it cannot clear other objects out of its path. Apart from this revelation, scientists have recently made public some stunning secrets about Pluto. In July 2015, the National Aeronautics and Space Administration’s (NASA) New Horizons spacecraft was able to fly past Pluto. With its super-powerful cameras, the spacecraft captured a heart-shaped region of Pluto. This region has now been named Tombaugh Regio, in honor of Clyde Tombaugh, the discoverer of Pluto. After successfully retrieving all the data that New Horizons captured during the mission, scientists have made public some amazing revelations about the planet, which are completely new to us on Earth. Scientists involved in analyzing the data reported their findings in five articles published last week in the journal Science. It has been discovered that Pluto is largely an ice planet. The left half of it is covered mostly by nitrogen snow, while the right side is more methane ice. For us on planet Earth, ice is most commonly frozen water. But on Pluto, nitrogen, methane and carbon monoxide freeze solid. Below is a color image of Sputnik Planum, the region known as Pluto’s “heart,” which is rich in nitrogen, carbon monoxide and methane ices. Scientists have also discovered blocks of water-ice jammed together to form mountains, which stand starkly adjacent to the flat plains of nitrogen-rich ices. The darker blocks in the plains region are likely icebergs of water-ice floating on top of denser nitrogen ice. Scientists said the varying mix of ices could form different alloys with very different properties, similar to how adding carbon transforms iron into steel, and that it could help explain the wide range of topography. The picture below was released by scientists to support their analysis. Scientists had earlier observed from Earth that Pluto was blotchy. Therefore, the flight trajectory of New Horizons was designed so that it would capture both dark and light blotches during its flyby. Still, they wouldn’t have been surprised if the landscape had turned out to be geologically bland. That’s because the sun, three billion miles away, provides little energy, and Pluto is so small – smaller than the Earth’s moon – that its interior could have cooled down long ago, according to scientists. Scientists also discovered a volcano-like structure on Pluto. It is predicted that nitrogen might flow deep below the surface, be warmed by the interior, and then erupt back at the surface, producing what scientists surmise might be an ice volcano. It was also discovered that a mountain about two miles high and spanning 90 miles across has a hole at its center. The mountain has been named “Wright Mons.” Another finding concerns Pluto’s upper atmosphere, which turns out to be much colder than expected, meaning that nitrogen escapes at a rate of about a hundredth of what had been predicted. Some scientists had expected Pluto to look somewhat like Triton, a Pluto-size moon captured into orbit around Neptune. But New Horizons photographed a dazzling variety of landscapes, from soaring mountains to flat plains.
The images proved that Pluto is far more diverse than, and quite different from, Triton. “It’s not like any feature we’ve seen anywhere else in the solar system,” said John R. Spencer, a planetary scientist at the Southwest Research Institute. Jeffrey M. Moore, the head of Geophysics and Imaging at NASA’s Ames Research Center in California, was also quoted as saying that “the big surprise is that Pluto turned out so surprising.”
The Cause of Earth's 23 Degree Tilt The Primary Topics: The Process Leading up to Ejection; The Process; Planetary Tilt-Rotational Stabilization In this solar system, where the rotational axis of a magnetic planet is in close proximity to the Sun, the natural equilibrium dictates a parallel north-south relationship between the cosmic objects. So what would cause the planetary rotational tilt of the Earth to be 23 degrees off center from the Sun's axis, the dominant force in this solar system? Would not this anomaly, over time, migrate back towards the equilibrium point represented by the Sun? It is stated that an object in motion remains in that motion unless acted upon by an outside force. Is this not one of your most revered laws of physics? If this planetary system was initiated by the homogeneous motion of the collapse of an accretion disk of stellar matter, then Earth's tilt is the result of an outside force, one so recent and periodic that the seasons are a permanent part of Earth's climate. Consider that the Sun's magnetic field has far greater intensity, yet has seemingly had little time to correct the tilt. Conclusion: there is a cause, and while this force is not part of present theories or recognized events within the solar system, its effects are recorded in the tilt and in unexplained geological events. The force comes and goes, so how, and what is the cause? The cause is a planet that passes through this solar system periodically, causing a disruption in civilizations, where little of its history is recorded except as legends passed down from the survivors. Upon public discovery, almost all theories in the field of astrophysics developed by mankind will reluctantly have to be discarded. How was a planet created outside of an accretion disk? How does the object accelerate to cover a distance many times the distance of Pluto to the Sun in a few years? What force causes it to slow to a crawl when approaching the Sun, when it should accelerate due to gravity? How can it supply its own light and heat, like a brown dwarf, while its surface has liquid water suitable for life? How is it a rocky planet, yet with a diameter comparable only to the gas planets? How do you account for an opposing gravitational mass to the Sun, dark though it is, that acts as the counter for this sling orbit of the 12th? Now that the many hows have been addressed, let's apply the new concepts in gravitation, repulsion and particle crowding to explain the complex motion of the 12th Planet within the inner solar system and what to expect before it is ejected. What scientists will miss in studying its motion is that the application of the present-day laws of gravity will not provide all the answers. So what factors need to be considered to approximate the basic parameters of its planetary motion and its effects on Earth? The three primary factors are magnetism, the repulsion force, and the return particle flow towards the Sun. Secondary factors are oscillation above and below the ecliptic plane, plus the variation of the distance from the Sun to the rear, and the overall distance and compression of the planetary bodies blocking its forward path. Finally, there is the buildup of repulsion particles that is responsible for orbital motion. Currently the 12th is oscillating above, below or at the ecliptic due to momentum conserved from its rapid entry into this solar system; slowed by the repulsion force, it presently maintains an almost static position.
So what relationship does the 12th planet have with Earth, its orbital twin, Venus, the Sun and the outlying planets? In this example we will minimize the effects of the twin, only because few on Earth can confirm its existence and any proof will not come from the establishment, due to national security, although its presence shall be revealed in dramatic fashion for all to see. As the 12th pierces the ecliptic plane of this solar system, there are many sub-processes of motion that occur. Primarily, as the 12th rises above the magnetic neutral zone, the Sun's north magnetic pole attracts the southern pole of the 12th planet. The pull on the magnetic axis of the 12th planet is a function of the translational velocity from the neutral zone into an area controlled by subatomic particles related to the magnetic outflow of the Sun's north pole, and of the mean distance between both magnetic axes. This is what controls the intensity and suddenness of the hose of magnetic subatomic particles directed towards Earth. These sudden extreme lurches will be responsible for the rogue tsunamis where evidence of an earthquake as the event trigger will be proven to be absent. At the beginning of this process, as the 12th rises and its south pole turns towards the Sun's north pole, the distance closes; at first there are only small fluctuations. As the 12th is drawn towards the Sun above the ecliptic, the pressure of the return particle flow towards the Sun increases in density and velocity, pushing the 12th back down towards the ecliptic, where the particle flow along the equator of the Sun is outward. As the poles of the 12th reenter the magnetic neutral zone, they realign with the Sun. The repulsion force, once compromised by the overwhelming magnetic attraction, now becomes the dominant force as the 12th's magnetic attraction is reduced while its south pole turns away. The repulsion force pushes the 12th away from the Sun to a point where the repulsion force coming from the Earth blocks its forward progress, reducing its velocity until it hovers in what seems a static position, with little movement. The distance between the 12th and Earth then closes, first as a function of the stored repulsion force and momentum pushing outward, countered by magnetic attraction as the north pole of the 12th turns towards Earth and attracts the south pole of Earth. Second, the density of the repulsion field emanating from the 12th increases as the distance between the two closes, eventually pushing the trapped planets closer to each other. The zone between the planets is reduced as the lateral repulsion field from the 12th increases in density and force, creating an imbalance with the weaker repulsion force responsible for separation between the trapped planets and resulting in a compression of that zone; thus the phrase, "the shrinking cup". This is aided because the force responsible for orbital velocity, emitted from the Sun to the rear of the planets, has built up and exhibits little give. This push-pull oscillation above and back to the ecliptic between the planets, the 12th and the Sun continues for a time. Months, years: no one born on Earth has that long-term knowledge of a definitive time frame, or ever will. There will come a moment in time where the planets can no longer back up. This will be determined by the repulsion force emanating from the planets trapped in the cup.
No longer shall the opposing repulsion force from the Sun, which drives orbital motion, diffuse through; a concentrated cup of planets forced together creates a barrier that stops and backs up no more. This provides the only path of least resistance for the 12th, which is up, away from the ecliptic plane into a thinner return particle flow towards the Sun. This time the 12th rises higher above the ecliptic and its north pole turns fully towards Earth as its south pole aligns with the north pole of the Sun. With the magnetic flow closer in proximity and the ability to retreat closed off, the north pole slowly turns away only due to distance. Momentum carries it past a 180-degree rotation, but a damping oscillation settles with the south pole of Earth pointed at the north pole of the 12th. At this point the magnetic attraction closes the distance between the 12th and the Sun, dragging along the Earth. As the repulsion force against the 12th builds from the Sun, an equilibrium is established in a temporary static position between the outward-flowing repulsion force and the attraction of the magnetic poles. The Earth continues to close the gap even though the 12th has stopped. The line-up causes an unusual eclipse in the southern hemisphere never recorded in public history. This is where an unusual fuzzy spot, at first mistaken for a very large sunspot, begins to migrate into the center of the Sun. Then the spot expands uniformly, but the edge varies under the principles of particle movement related to the Sun's return particle flow and its interaction with the atmospheric and dust cloud that surrounds the 12th, until the Sun is eclipsed; but it does not stop there. Expansion shunts the Sun's corona, enveloping all of the Earth in darkness. As the Earth closes in net distance, its repulsion force pushes the 12th deeper towards the Sun, where the density of return flow increases. Over a period of 3 days this forces the 12th back towards the ecliptic. As the 12th drops back towards the ecliptic, its poles, now in closer proximity to the Sun, realign: the Sun's north pulls the 12th's south, and the Sun's south pulls the north of the 12th. In the darkness the Earth mimics the alignment. As the 12th enters the zone of magnetic neutrality, it slowly aligns with the Sun in a north-north, south-south pole relationship. Earth, further out from the Sun, is not forced into the neutrality zone at the same rate, as return particle flow and density towards the Sun are a function of distance above the ecliptic and from the Sun. The line-up ceases and sunlight returns. The 12th now no longer has the magnetic force responsible for holding it close to the Sun. The repulsion force, now greater than the gravity flowing towards the Sun and seeking equilibrium, propels the 12th at a great velocity away from the Sun. The 12th's repulsion force in turn pushes the Earth. The surprise is twofold. Earth is still magnetically inverted relative to the Sun's magnetic poles, as the subatomic flow forcing it back into the magnetic neutral zone takes several days; thus the legend of the sunrise in the west is realized. Venus and the twin were never pulled in, but now the Earth is pushed towards them at high velocity. The reflected light has its wavelength shortened by the closing velocity, thus a shift towards the blue end of the spectrum. Again the legend of the appearance of a Blue Star holds true. The surprise: it is Venus, the cloud-covered reflective planet, not the twin, that is the source.
The repulsion force from the cup, aided by the repulsion streams emanating from the Sun that drive orbital velocity, slows and stops Earth and the 12th. The Earth, now experiencing mega-disaster after disaster, cares little about the heavens as the auroras reach extreme levels while the magnetic fields and particle flows of Earth and the 12th are locked together. The final phase now begins with the 12th further away from the Sun, where its incoming particle return flow is again thin and easily penetrated. Again the cup, now closer, moves in as the 12th retreats towards the Sun. Aided by the repulsion force of the Sun, this pushes the 12th higher above the ecliptic as its only path. Again the 12th rolls to align with the Sun in a Sun-north to 12th-south pole relationship while being pulled towards the Sun. The Earth, never out of the magnetic grip, is so close that the force when the 12th's north pole points towards us, as it magnetically aligns its south pole to the north pole of the Sun, is too much for the Earth. The crust separates and shifts about the core. Both planets are drawn closer to the Sun than in the previous encounter. Incoming particle pressure quickly forces the 12th back towards the ecliptic, where it aligns with the Sun and is ejected at great velocity. This time the repulsion force of the planets trapped in the cup only deflects it upwards, allowing the 12th to pierce the ecliptic and be ejected from the solar system. The Earth has little momentum to realign with the Sun, having had its crust separated from the core. So Earth's tilt is a compromise between the magnetic poles aligning and the slip related to crust separation. With each passage in the past, the Earth's pole position has been the result of the 12th's passage and the intensity of the shift. What about the current explanation of the 23 degree tilt presented on the series The Universe on The History Channel? Many questions were left unanswered and explanations were not consistent with current accepted theories. What would cause a planetary-size object to collide with Earth if all large objects that coalesced within the accretion disk are spaced apart by Bode's law and motion in this system was initiated in the same direction? How would the Moon, once in orbit, be responsible for stabilizing the wobble of an Earth whose magnetic pole is aligned to the Sun, by the weak force of gravity emanating from the Moon? In fact it was stated that the magnetic field was even stronger due to the shorter rotational period. Does Venus, with no moon, wobble wildly? Don't Venus, without a moon, and Mars, with two very small moons, have almost the same rotational period as the Earth? Yet it is stated that our Moon helped slow the rotational period of Earth. The theory presented explained that the Moon was once much closer to Earth, but never showed how the Moon reduced orbital velocity as it moved away. Does this explanation adhere to your current planetary orbital theories, or even consider conservation of energy due to the collision? The Moon after the collision would be reflected away as a function of the original [mass*velocity] of the incoming object, with energy loss to [mass Earth/mass Moon * v]. Would the Earth's gravity be able to stop the Moon's escape after the bounce, and if so, how did this momentum transfer to an orbital velocity countering Earth's gravitational pull precisely, thus stabilizing it? In fact, little of these explanations presented to the layman was correct. The program did present one true fact: the Moon did impact the Earth at one time.
The Capture Theory
Come up with questions about a topic and learn new vocabulary words to determine answers using an Ask, Answer, Learn table. Using data from the 1906 San Francisco earthquake, Harry Fielding Reid devised the elastic rebound theory. This theory explains how stresses build up until they rupture in a large earthquake. Reviews how the massive Good Friday earthquake of 1964 in Alaska led to the development of modern seismology. Learn how quakes happen on the Moon and Mars: they are like earthquakes but have different causes. Animation on how earthquake waves move around (and through) the Earth.
Assessments and Resources DIBELS Next (Dynamic Indicators of Basic Early Literacy Skills) is a standardized test that measures a student’s early literacy skills. The test is given individually three times during the year. These are short, one-minute tests that monitor your child’s reading skills. DIBELS Next includes: - First Sound Fluency measures how well your child can identify pictures that begin with a certain sound. - Letter Naming Fluency measures how many letters your child can identify in one minute. - Phoneme Segmentation Fluency measures how well your child can say the individual sounds in a given word. - Nonsense Word Fluency measures how well your child can read nonsense words that follow the phonemic patterns of actual words. - Oral Reading Fluency measures how many words your child can read in a minute. Students are given a paragraph to read. Running records are a form of assessment in which teachers listen to students read and record their reading behaviors. The students read unfamiliar books so that we have a better idea of what strategies each child is using. This helps guide our instruction and allows the classroom teacher to make appropriate book choices. Running records are administered 3-4 times a year. Your child may not be assessed each time, depending on his/her reading level. The Title 1 reading teachers, along with the classroom teachers, are responsible for administering the running records assessments.
What is it? Apraxia of speech is a motor speech disorder caused by a disruption between the planning of muscle coordination in the brain and the body parts needed for speech (e.g., lips, tongue, jaw). It is not due to muscle weakness or paralysis. A child with apraxia of speech knows what he or she wants to say, but the brain has difficulty coordinating the oral movements needed to produce and combine sounds to form syllables and words. What does it look like? Childhood apraxia of speech can look different in each child. Not every child shows all of the signs and symptoms of apraxia. The following is a list of potential indicators that your child may have apraxia of speech: - Little to no cooing or babbling as an infant - Limited imitation of syllables and/or words - First words occurring after 18 months of age - A two-year-old who: o is non-verbal o uses non-speech sounds without any word approximations o uses gestures, rather than words, to communicate o becomes frustrated around communication - A child who is able to produce single words clearly, though becomes unintelligible in phrases or sentences - A child who deletes sounds from words after age three - A child who has previously said a word clearly, though cannot imitate it when asked - Family members often have to interpret for the child How is it diagnosed? An audiologist should complete a comprehensive hearing evaluation to rule out any potential hearing loss. A certified speech-language pathologist will complete a comprehensive speech-language evaluation. This will assess your child’s oral-motor abilities, speech sound development, and language development. Childhood apraxia of speech is a differential diagnosis, or a diagnosis that is made by examining all the possible causes for a set of symptoms in order to arrive at a conclusion. Because of this, an official diagnosis of apraxia may not be made right away. It is important to rule out other potential causes for your child’s speech difficulties, such as phonological disorders, before arriving at the apraxia diagnosis. However, it should be noted that with or without a diagnosis, your child will still receive effective therapy to improve his or her overall communication skills. What treatments are available? Research has shown that frequent (3-5 times per week) and intensive speech-language therapy yields more successful results. Furthermore, individual therapy is more successful than group therapy for children with apraxia. Improvement in the planning, sequencing, and coordination of oral muscle movements is the main focus of intervention. Visual and tactile cues, such as tapping on the arm or looking in the mirror, provide multi-sensory feedback that helps to improve the child’s coordination and production. The most important piece in therapy for apraxia is practice, both in therapy and at home. The treatment of apraxia takes time, patience, and commitment. A supportive environment is crucial so your child can feel successful in his or her communicative interactions.
Production in the Short and Long Run The firm is the organisation within which the factors of production are combined to produce an output, which may be a good or a service. A key distinction to make when describing what is produced is between goods and services: goods are tangible whereas services are intangible. The short run is the period in which at least one factor of production is fixed [usually capital], whereas the long run is the period in which all factors of production can be varied. Diminishing Returns - The Relationship between Inputs and Output Total product (TP) is the total amount produced by all factors of production employed in a given time period. It is the output of a firm. The average product (AP) of labour is calculated using the following formula: AP = Q/L Marginal product (MP) is the change in total product resulting from a one-unit increase or decrease in a variable input. Marginal product of labour (MPL) is the change in total product resulting from the employment of one more or one less unit of labour. Diminishing Returns The law, or hypothesis, of diminishing returns determines the shape of the MPL curve. The MPL curve is upward sloping up to the point of diminishing returns, but after that point, it is downward sloping. The law of diminishing returns states that as increasing amounts of a variable factor are applied to a fixed quantity of another factor, after some point the additions to total product (MPL) will begin to fall. After this point, each additional variable unit adds less to total product than the previous unit (see the sketch below). In the short run, a firm may try to delay this point by, for example: - Creating more space for extra workers - Coming up with a better organisational structure
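To make the relationship concrete, here is a minimal sketch in Python. The production figures are hypothetical (they are not from this text); the point is only to show how AP and MP are computed from TP and where diminishing returns set in.

```python
# Hypothetical total product (TP) for 0..6 workers with capital fixed.
tp = [0, 10, 24, 39, 50, 56, 58]

for labour in range(1, len(tp)):
    ap = tp[labour] / labour             # AP = Q / L
    mp = tp[labour] - tp[labour - 1]     # MP = change in TP from one more worker
    print(f"L={labour}  TP={tp[labour]}  AP={ap:.1f}  MP={mp}")
```

With these numbers MP rises (10, 14, 15) and then falls (11, 6, 2), so diminishing returns set in with the fourth worker: each extra worker still adds output, but less than the one before.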
People are what make up a society, just as many bricks make up a wall. If one of these bricks is broken, there is a risk of the entire wall falling down. In the same way, if people living in a society are not given equal rights, privileges, and resources, there is a chance the society will cease to exist after a passage of time. This explanation helps us to define a widely-used term known as “social justice”. In simple terms, social justice means an even distribution of wealth and opportunities in a society, so that there are no such divisions as rich and poor, privileged and under-privileged, educated and uneducated, master and slave, and so on. If social justice is not practiced in a society, it gives rise to a number of social problems, which are not limited to poverty. These include unemployment, disabilities, social immobility, etc. The Social Justice Strategy exists to allow everybody belonging to a society to gain access to the state’s resources, to be able to work and earn even with a disability, and to get an equal chance to climb up the social ladder regardless of background, religion, creed, language, age, and gender. The Scope of Principles There are many areas that need to be addressed in the UK and all over the world when it comes to preventing social problems, providing unconditional support to promote work, and encouraging intervention at an individual level so that problems can be tackled by finding their root cause. It is necessary to define the scale of the challenge by finding out how many people are suffering from one or more of the following social problems: – Unemployment: Even in developed countries like the UK, joblessness prevails. According to Social Justice: Transforming Lives, one in five of all households in the UK is jobless, which means that no member of the household is working or earning. – Family matters: Divorce and separation, as well as children born out of wedlock being raised by a single parent, are problems that can make children fall behind in many aspects of life. – Education: Differentiating between pupils in schools who come from different backgrounds, such as a low-income household, creates gaps that travel with a person no matter how wealthy he becomes in the future. People who were excluded from school are more likely to turn to crime when they grow up. – Drug dependency: Regular use of cocaine, alcohol and heroin makes a person dependent on these substances, and such people are 17 times more likely to die prematurely. – Debt: It has been estimated that around 10% of all households in the UK are unable to meet their monthly financial commitments. This gives rise to another social problem: using illegal money lenders to keep up with financial demands. When is Inequality Unjust? Some may argue that the success of any society lies in variation. Having people who are more skillful than others, more intelligent than their peers, or smarter than their elders is what makes a society function better. No doubt this is true. However, inequality becomes unjust when rights and opportunities are not evenly distributed, because underlying factors create a gap and make an individual fall behind in the race of life. It is good to offer a prize to motivate people to achieve a goal, but it is not good to offer that prize only to a select number of people.
In a society, three different types of hierarchies exist when it comes to unjust inequality: – Inequality of standing – Inequality of power – Inequality of esteem The Relevance of Human Rights Are human rights relevant to the concept of social justice? In many societies around the world, human rights exist only as legal instruments working to protect individuals from becoming victims of certain social crimes. However, the economic inequality that relates directly to social justice goes deeper than simply granting human rights to people living in a society. So human rights are relevant, but they are not the only instruments that can bring complete social justice and eliminate all social problems. Finally, because many people around the world live in poor conditions, justice also demands that they have access to safe medical services and to hospitals that can help in ordinary situations, for example through medical donations and hospital charities.
In general, qualitative research is a method that emphasizes a deeper understanding of a problem rather than a surface-level look at it. Qualitative research is descriptive in nature; it tends to rely on analysis and is more concerned with revealing the process of meaning-making. Difference between Qualitative and Quantitative Research The difference between qualitative and quantitative research can be seen in several aspects. You need to know that the two research methods, or approaches, are not always in conflict with each other; there are also points of similarity. 1. Research Design - Qualitative is general, flexible, and dynamic. Qualitative research can develop during the research process. - Quantitative is specific, detailed, and static. The course of quantitative research is planned from the beginning and cannot be changed afterwards. 2. Data Analysis - Qualitative data can be analyzed throughout the research process. - Quantitative data are analyzed at the final stage, before the report. 3. Research Subject Terms - Qualitative research subjects are commonly referred to as informants. - Quantitative research subjects are commonly referred to as respondents. 4. Ways of Looking at Facts - Qualitative: Qualitative research views “facts/truth” as depending on the way the researcher interprets the data. This is because there are complex things that cannot merely be explained by numbers, such as human feelings. Qualitative research departs from data, which is then explained by theories considered relevant, to produce a theory that reinforces existing theories. - Quantitative: Quantitative research sees “facts/truth” as objects of research out there. Researchers must be neutral and impartial. Whatever is found in the field, that is a fact. Quantitative research departs from theory to data. 5. Data Collection - Qualitative: Qualitative research is focused on things that cannot be measured in black-and-white terms, so in qualitative research the researcher digs deep into the data for specific things. Thus, the quality of qualitative research is not determined so much by the number of sources involved as by how deeply the researcher explores specific information from the selected sources. - Quantitative: Data collection is carried out using a series of research instruments in the form of tests/questionnaires. The collected data are then converted using predefined categories/criteria. The quality of quantitative research is determined by the number of research respondents involved. 6. Data Representation - Qualitative: The results of qualitative research take the form of the researcher’s interpretation of a phenomenon, so research reports will contain more description. - Quantitative: Quantitative research results are presented as the outcomes of mathematical calculations. The calculated results are treated as confirmed facts. The validity of quantitative research is largely determined by the validity and reliability of the instruments used. 7. Implications of Research Results - Qualitative: The results of qualitative research have implications limited to certain situations. Thus, qualitative research results cannot be generalized to different settings. - Quantitative: The results of quantitative research are facts/theories that apply generally (are generalizable), whenever and wherever. 8. Types of Methods - Qualitative: Phenomenology, ethnography, case studies, historical, grounded theory.
- Quantitative: Experiments, surveys, correlations, regression, path analysis, ex post facto.
Looking remarkably similar to an array of solar panels, the device, constructed from inexpensive materials, is tuned to collect microwave signals, used by many household devices and gadgets, which would otherwise disappear into the ether, converting them into usable electricity. The power-harvesting device has been perfected by researchers at the Pratt School of Engineering at Duke University, Durham, North Carolina. Their application has implications for how the likes of cellphones or e-readers are recharged. Ultimately, rolled out on a larger scale, it might be applied to reducing household energy bills and, as a consequence, reducing carbon emissions. In its prototype form, the device wirelessly converts stray microwave signals to a direct current voltage sufficient to recharge a cellphone battery. Looking very similar to solar panels, which work by converting light energy into electricity, the energy harvester, say the Pratt School researchers, could be adapted so that it tunes in and collects a variety of signals, be they satellite, Wi-Fi or even sound waves, converting them into usable power. The researchers highlight the use of metamaterials as the key to the functioning of their energy gatherer. Metamaterials don’t occur naturally. Instead, they’re artificial materials engineered so that their geometrical configuration, with patterns on a microscopic scale, can affect light waves, sound waves and even microwave radiation. It’s by developing these properties that the North Carolina researchers have produced a workable “charger” whose output compares favorably with phone chargers in everyday use. Unlike conventional phone chargers, however, which continue to rack up electricity bills even when plugged in and not connected to a phone, the new device won’t fill power companies’ coffers at precisely zero benefit to the consumer. Allen Hawkes, an undergraduate engineering student, conceived the microwave harvesting device, working with graduate student Alexander Katko and lead investigator Steven Cummer, professor of electrical and computer engineering. Their prototype consists of a series of five fiberglass and copper energy conductors wired together on a circuit board, configured in such a way that it converts unseen electromagnetic radiation, such as microwaves, into a usable 7.3 volts of electricity. By comparison, a typical Universal Serial Bus (USB) charger, used to charge small electronic devices such as e-readers, provides about 5 volts. Through careful design the researchers have achieved an energy efficiency of 37 percent, comparable to now-familiar solar cells. Commenting on the research, Alexander Katko said, “It's possible to use this design for a lot of different frequencies and types of energy, including vibration and sound energy harvesting." "Until now, a lot of work with metamaterials has been theoretical. We are showing that with a little work, these materials can be useful for consumer applications." In its most everyday form, the application could be used to develop cell phones that incorporate metamaterial panels, continually charging on the move and picking up free electromagnetic radiation from the atmosphere. More ambitiously, say the researchers, metamaterials might be used to coat the ceiling of a room and scavenge power from domestic Wi-Fi signals that would otherwise be lost. And it may not take a lot of work to do away with a single phone charger. According to Professor Cummer, the design of the electromagnetic harvester is scalable.
As Cummer put it, "Our work demonstrates a simple and inexpensive approach to electromagnetic power harvesting. The beauty of the design is that the basic building blocks are self-contained and additive. One can simply assemble more blocks to increase the scavenged power." That drawer full of chargers, some old, some new and some of unknown provenance but kept “just in case”, could soon be history. Full details of the Pratt School of Engineering research are published under the title “A microwave metamaterial with integrated power harvesting functionality” in the journal Applied Physics Letters. The research was supported by a Multidisciplinary University Research Initiative grant from the US Army Research Office.
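As a purely illustrative aside (this is not the Duke team's model), the "additive blocks" idea lends itself to a quick back-of-the-envelope sketch. The assumption that output scales linearly with block count is ours, not the researchers':

```python
# Naive linear-scaling sketch of a metamaterial harvester array.
# Assumes each block contributes equally -- an illustrative assumption only.

BLOCKS_IN_PROTOTYPE = 5
PROTOTYPE_VOLTAGE = 7.3   # volts, as reported for the five-cell prototype
EFFICIENCY = 0.37         # reported conversion efficiency

def estimated_voltage(n_blocks):
    """Estimate output voltage if blocks simply add up (hypothetical)."""
    return PROTOTYPE_VOLTAGE * n_blocks / BLOCKS_IN_PROTOTYPE

for n in (5, 10, 20):
    print(f"{n} blocks -> ~{estimated_voltage(n):.1f} V at {EFFICIENCY:.0%} efficiency")
```

In practice the scaling would depend on how the cells are wired and matched to the load, so treat this only as a way to picture the "self-contained and additive" design idea.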
The original problem that Fibonacci investigated (in the year 1202) was about how fast rabbits could breed in ideal circumstances. Suppose a newly-born pair of rabbits, one male, one female, are put in a field. Rabbits are able to mate at the age of one month, so that at the end of its second month a female can produce another pair of rabbits. Suppose that our rabbits never die and that the female always produces one new pair (one male, one female) every month from the second month on. The puzzle that Fibonacci posed was... How many pairs will there be in one year? 1. At the end of the first month, they mate, but there is still only 1 pair. 2. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits in the field. 3. At the end of the third month, the original female produces a second pair, making 3 pairs in all in the field. 4. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months ago produces her first pair also, making 5 pairs. The number of pairs of rabbits in the field at the start of each month is 1, 1, 2, 3, 5, 8, 13, 21, 34, ... [Two diagrams of the rabbits' family tree appeared here.] Both diagrams represent the same information. Rabbits have been numbered to enable comparisons and to count them, as follows: * All the rabbits born in the same month are of the same generation and are on the same level in the tree. * The rabbits have been uniquely numbered so that in the same generation the new rabbits are numbered in the order of their parent's number. Thus 5, 6 and 7 are the children of 0, 1 and 2 respectively. * The rabbits labelled with a Fibonacci number are the children of the original rabbit (0) at the top of the tree. * There are a Fibonacci number of new rabbits in each generation, marked with a dot. * There are a Fibonacci number of rabbits in total from the top down to any single generation. The English puzzlist Henry E Dudeney (1857-1930, pronounced Dude-knee) wrote several excellent books of puzzles (see after this section). In one of them he adapts Fibonacci's rabbits to cows, making the problem more realistic in the way we observed above. He gets round the problems by noticing that really, it is only the females that are interesting - er - I mean the number of females! He changes months into years and rabbits into bulls (male) and cows (female) in problem 175 in his book 536 Puzzles and Curious Problems (1967, Souvenir Press): If a cow produces its first she-calf at age two years and after that produces another single she-calf every year, how many she-calves are there after 12 years, assuming none die? This is a better simplification of the problem and quite realistic now. But Fibonacci does what mathematicians often do at first: simplify the problem and see what happens - and the series bearing his name does have lots of other interesting and practical applications, as we see later. So let's look at another real-life situation that is exactly modelled by Fibonacci's series - honeybees. Honeybees and Family Trees There are over 30,000 species of bees, and in most of them the bees live solitary lives. The one most of us know best is the honeybee and it, unusually, lives in a colony called a hive, and honeybees have an unusual family tree. In fact, there are many unusual features of honeybees, and in this section we will show how the Fibonacci numbers count a honeybee's ancestors (in this section a "bee" will mean a "honeybee").
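The month-by-month count is easy to simulate. Here is a minimal sketch (our own illustration, not from the original text) that tracks adult and newborn pairs separately, which is exactly the structure of the puzzle:

```python
def rabbit_pairs(months):
    """Pairs of rabbits in the field at the start of each month."""
    adults, babies = 0, 1          # begin with one newborn pair
    counts = []
    for _ in range(months):
        counts.append(adults + babies)
        # newborns take a month to mature; every adult pair breeds monthly
        adults, babies = adults + babies, adults
    return counts

print(rabbit_pairs(12))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```

Each count is the sum of the two before it, which is precisely the Fibonacci recurrence; the same loop answers Dudeney's she-calf version if you relabel months as years.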
First, some unusual facts about honeybees, such as: not all of them have two parents! In a colony of honeybees there is one special female called the queen. There are many worker bees, who are female too, but unlike the queen bee they produce no eggs. There are some drone bees, who are male and do no work. Males are produced by the queen's unfertilised eggs, so male bees only have a mother but no...
Question posted by Yarn Xee from Singapore: Grade/Level: Primary 6 Question solved by Model Method: Half the number of people in a boat was female. There were an equal number of children and women in the boat. If there were 100 males and there were 10 more men than children, what percentage of the people were girls? Note: I suspect there was an error in the numbers given in the question, as you will see below that we cannot get nice whole numbers for the quantity of people in the question. Step 1: We can easily solve this question by the model method. Draw 2 equally long bars to represent the number of men and women respectively. Since there were 10 more men than children, and the number of women equals the number of children, there were also 10 more men than women, so we draw another box to the left of the men to represent the 10 extra men. Step 2: Next, draw a box to the left of the men to represent the total number of boys and label it "B" to represent boys. Step 3: Draw another box to the left of the women to represent the total number of girls. Since it was given in the question that half of the people in the boat were female, the other half must be male. So, there should be an equal number of males and females. Hence we draw the "girls" box to coincide with the end of the "boys" box, thereby making the total of men and boys equal to the total of women and girls. Step 4: Since the length of the girls' box coincides with the length of the boys' box plus the box with "10", the total number of girls must be equal to the total number of boys plus 10. Step 5: As the total number of women is equal to the total of all the boys and girls, the total number of women must be equal to 2 units of "B" and 1 unit of "10". Step 6: As the total number of men is equal to the total number of women plus 10, the unknown part of the men must also be equal to 2 units of "B" and 1 unit of "10". And since it was given in the question that there were 100 males and an equal number of males and females, we can label the total of males and the total of females as 100 each. As mentioned earlier, I suspect there is an error in the question. That is why we get 36 2/3 girls and 26 2/3 boys (we would have to cut up some of the girls and boys ;p). If we were to change the total number of males to 110 (or 140 or 170, etc...), we would get nice whole numbers (no need to cut up anyone :) ).
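For readers who prefer to check the bar model with algebra, here is a short sketch using exact fractions (the variable names are ours). It confirms the fractional answer that prompted the note about an error in the question:

```python
from fractions import Fraction

# From the model, with B = boys and G = girls:
#   women = B + G, men = women + 10, men + B = 100, women + G = 100.
# Eliminating men and women leaves: 2B + G = 90 and B + 2G = 100.
G = Fraction(110, 3)          # doubling the 2nd equation and subtracting the 1st gives 3G = 110
B = (Fraction(90) - G) / 2    # back-substitute into 2B + G = 90

assert 2 * B + G == 90 and B + 2 * G == 100
print(B, G)                              # 80/3 boys, 110/3 girls
print(f"girls: {float(G) / 200:.1%}")    # about 18.3% of the 200 people
```

Re-running with 110 males (so the two equations become 2B + G = 100 and B + 2G = 110) gives the whole numbers B = 30 and G = 40, matching the suggested correction.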
A team of astronomers from the UK, Germany and Spain have observed the remnant of a stellar collision and discovered that its brightness varies in a way not seen before on this rare type of star. By analysing the patterns in these brightness variations, astronomers will learn what really happens when stars collide. This discovery will be published in the 27 June 2013 issue of the journal Nature. Stars like our Sun expand and cool to become red giant stars when the hydrogen that fuels the nuclear fusion in their cores starts to run out. Many stars are born in binary systems, so an expanding red giant star will sometimes collide with an orbiting companion star. As much as 90% of the red giant star's mass can be stripped off in a stellar collision, but the details of this process are not well understood. Only a few stars that have recently emerged from a stellar collision are known, so it has been difficult to study the connection between stellar collisions and the various exotic stellar systems they produce. When an eclipsing binary system containing one such star turned up as a by-product of a search for extrasolar planets, Dr Pierre Maxted and his colleagues decided to use the high-speed camera ULTRACAM to study the eclipses of the star in detail. These new high-speed brightness measurements show that the remnant of the stripped red giant is a new type of pulsating star. Many stars, including our own Sun, vary in brightness because of pulsations caused by sound waves bouncing around inside the star. For both the Sun and the new variable star, each pulsation cycle takes about 5 minutes. These pulsations can be used to study the properties of a star below its visible surface. Computer models produced by the discovery team show that the sound waves probe all the way to the centre of the new pulsating star. Further observations of this star are now planned to work out how long it will be before the star starts to cool and fade to produce a stellar corpse ("white dwarf") of abnormally low mass. Dr Pierre Maxted from Keele University, who led the study, said: "We have been able to find out a lot about these stars, such as how much they weigh, because they are in a binary system. This will really help us to interpret the pulsation signal and so figure out how these stars survived the collision and what will become of them over the next few billion years."
OTHER TICK-BORNE DISEASES Relapsing fever, as its name implies, is an illness characterized primarily by recurrent episodes of fever, often accompanied by fatigue, malaise and other constitutional symptoms. It is caused by at least 15 spirochete species belonging to the genus Borrelia, and can be vectored to humans by either lice or ticks. Louse-borne relapsing fever (LBRF) is caused by Borrelia recurrentis and is transmitted from human to human by the body louse, Pediculus humanus humanus. The pathogen multiplies in the gut of the louse and is transmitted when an infected louse is crushed or scratched while feeding on a human host. No human skin wound or scratch is necessary for inoculation to occur. Uninfected lice acquire the bacterium when feeding on infected humans; no other animal serves as a reservoir for B. recurrentis. LBRF tends to occur in epidemic waves, usually in times of human crisis such as war, deep poverty and/or overcrowding. Epidemics of LBRF followed both World War I and World War II, resulting in over a million deaths. It is now primarily a disease of the developing world, with foci in East Africa, South America and parts of China. It is not endemic in the United States. More than a dozen species of Borrelia have been implicated in tick-borne relapsing fever (TBRF), which is transmitted by soft-bodied ticks from the genus Ornithodoros. Each TBRF Borrelia species is adapted to a specific tick vector; in turn, each tick species has a preferred set of hosts, mostly small mammals such as mice, squirrels or chipmunks, which serve as natural reservoirs for the spirochetes. Ornithodoros ticks feed for short periods, on the order of half an hour at most, and tend to take their meals at night. Their bites are painless; thus, most humans are infected while asleep and have no recollection of being bitten. Tick-borne relapsing fever is endemic primarily in Africa, Central Asia, the Mediterranean, and Central and South America, but cases occur in western parts of the United States and southern British Columbia as well. Only 450 cases were reported in the US from 1977 to 2000, but national reporting is passive rather than mandatory and the true incidence of the disease is probably higher. The states with the largest number of reported cases during this period were California, Colorado, Washington, Idaho and Oregon. Most cases occurred in the summer months, from June through September. Signs and Symptoms The incubation period for relapsing fever is usually around a week. Symptom onset is abrupt, and consists primarily of episodic febrile events, commonly lasting a few days, followed by slightly longer periods of resolution. If untreated, the cycle usually reoccurs; more than 10 cycles have been reported in untreated patients. (TBRF generally involves more relapses than the louse-borne variant of the disease.) Additional signs and symptoms are non-specific but numerous, and include headache, myalgias, arthralgias, nausea, vomiting, anorexia, conjunctivitis and dry cough. Fever cycles often conclude in a classic pattern commonly referred to as a “crisis.” First the patient experiences a spike in fever, sometimes up to 106°F or more, and an increased metabolic rate (such as rapid breathing and tachycardia) is seen. Shortly thereafter, body temperature falls dramatically and the patient endures drenching sweats. Severe drops in blood pressure can occur during this second stage.
These cycles are caused by the ability of Borrelia spirochetes to shift their outer surface protein coat in order to evade the human immune response; once a new clone is created and the organism multiplies in sufficient numbers, clinical relapses occur. Liver and spleen involvement are not uncommon in relapsing fever, but seem to occur more frequently in LBRF. Neurologic complications can also occur – again, more commonly in LBRF – and include meningitis, seizures, cranial neuropathies (especially facial palsy) and even coma. Myocarditis can be a fatal complication of either LBRF or TBRF. Relapsing fever can also cause complications in pregnant women, resulting in spontaneous abortion, premature birth or neonatal death. A pattern of recurrent fevers in a patient from an endemic area should prompt an evaluation for relapsing fever. Conventional blood tests may show an increased white blood cell count, low platelets, mildly increased bilirubin, elevated erythrocyte sedimentation rate, and an increase in prothrombin time (PT) and partial thromboplastin time (PTT) coagulation tests, but none of these are diagnostic. Serologic tests (direct and indirect immunofluorescent assays) for relapsing fever can be performed, but they are not standardized across laboratories and not useful for timely diagnosis in any case. Cross-reactions with antibodies to Lyme disease and syphilis have also been reported. PCR tests exist but are not widely available. The gold standard for relapsing fever diagnosis is the visualization of spirochetes in smears of peripheral blood or cerebrospinal fluid. Dark field microscopy is the preferred method, but various stains (such as Wright-Giemsa or acridine orange) are also frequently employed. The number of circulating spirochetes tends to decrease with each febrile episode. Louse-borne relapsing fever is usually treated with a single dose of antibiotic. Antipyretics (aspirin, NSAIDs or acetaminophen) are usually administered concomitantly. First-line antibiotic agents are doxycycline and erythromycin, but chloramphenicol and parenteral penicillin G are also used. Treatment for tick-borne relapsing fever utilizes the same antibiotics, but lasts longer, typically one week. A common regimen is 100 mg of doxycycline every 12 hours, or 500 mg of erythromycin every 6 hours, for one week. Intravenous penicillin is recommended in cases of suspected or proven central nervous system involvement. A common and potentially serious complication of relapsing fever treatment is the Jarisch-Herxheimer reaction, caused by the massive release of cytokines (primarily TNF-alpha, IL-6 and IL-8) during the spirochete die-off. The reaction usually begins 2-4 hours after antibiotic administration and is similar to the crisis stage of the fever cycle. Typical presentations are elevated fever, increased respiration and heart rate, excessive sweating, chills, and sudden changes in blood pressure. Fatalities from the J-H reaction can occur. Research suggests that administration of anti-TNF-alpha antibodies can ameliorate the severity of the J-H reaction, but aspirin, acetaminophen and corticosteroids are ineffective in doing so. The mortality rate for untreated LBRF ranges widely but can approach 70%. In TBRF it is on the order of 5-10%. Treated properly, the death rate is reduced to around 1%, but TBRF patients often report residual symptoms even after treatment. These symptoms are usually associated with delayed diagnosis and initiation of treatment.
Tick Diversity on Wild Rodents To some people ticks may all look the same, but for Vector Ecologists at the San Mateo County Mosquito and Vector Control District, knowing the small differences among species can help us learn things such as the potential for disease transmission and distribution. San Mateo County is home to a variety of tick species, many of which residents are unlikely to encounter because of their highly specific habitats. Lab employees, however, have the opportunity to collect ticks directly off wild animals when they are captured for disease surveys, in addition to routine tick flagging. When identifying ticks, laboratory staff must use microscopes to look for a number of unique features that differ slightly among species. These key features may also vary depending on the life stage of the ticks (larva, nymph or adult). Below is a sample of the ticks of San Mateo County in the nymph stage that have been collected and photographed by laboratory staff. The adult forms of these ticks are often found “questing” on grasses or low shrubs near the sides of trails, while the nymphs are often in leaf litter or on rocks and fallen logs. This species is considered the main vector of a number of diseases including Lyme Disease and Human Granulocytic Anaplasmosis, the two most common tick-borne diseases in our county. Though originally believed to be nidicolous (spending its entire life around its host), some studies have shown that these ticks do exhibit some host-seeking behavior. Though they may not encounter humans often, it is believed they play a role in disease cycles among rodents. The banana shape of their mouthparts, called palps, is the identifying characteristic of the larval and nymphal stages. A true nest-dwelling tick, Ixodes angustus prefers to feed on mice, voles, shrews, and rats. You can often find all life stages feeding simultaneously on the same host. SMCMVCD staff has collected them off of rodents in select coastal regions and on San Bruno Mountain. Larvae and nymphs have unique spurs on their palps. Both Dermacentor occidentalis and Dermacentor variabilis are present in San Mateo County and are active throughout the height of summer. Ticks in this genus can spread diseases such as Rocky Mountain Spotted Fever (RMSF) and Tularemia, although they are rare in this county. Short mouthparts and ridges along the back of their abdomen, called festoons, help identify these species. A widely distributed tick that feeds almost exclusively on rabbits. The rabbit tick does not play a prominent role in human disease transmission, but some labs have found individuals infected with Rickettsia rickettsii, the causative agent of RMSF. Ticks in this genus have no eyes and must rely solely on their other senses when host seeking.
The western Antarctic Peninsula is one of the fastest warming regions on the planet, and the fastest warming part of the Southern Hemisphere. Scientists have debated the causes of this warming, particularly in light of recent instrumental records of both atmospheric and oceanic warming from the region. As the atmosphere and ocean warm, the ice sheet (holding the equivalent of 5 metres of global sea level rise, locked up in ice) becomes vulnerable to collapse. Now research led by Cardiff University, published in Nature Geoscience, has used a unique 12,000-year-long record from microscopic marine algae fossils to trace glacial ice entering the ocean along the western Antarctic Peninsula. The study has found that the atmosphere had a more significant impact on warming along the western Antarctic Peninsula than oceanic circulation in the late Holocene (from 3500-250 years ago). This was not the case prior to 3500 years ago, and is not the case in the modern environment. The study has also shown that this late Holocene atmospheric warming was cyclic (400-500 year long cycles) and linked to the increasing strength of the El Niño – Southern Oscillation phenomenon (a climate pattern centred in the low latitude Pacific Ocean), demonstrating an equatorial influence on high latitude climate. Dr Jennifer Pike, of the School of Earth and Ocean Sciences, said: "Our research is helping to understand the past dynamic behaviour of the Antarctic Peninsula Ice Sheet. The implications of our findings are that the modern observations of ocean-driven warming along the western Antarctic Peninsula need to be considered as part of a natural centennial timescale cycle of climate variability, and that in order to understand climate change along the Antarctic Peninsula, we need to understand the broader climate connections with the rest of the planet." Ice derived from land has a very distinctive ratio of oxygen isotopes. This research is the highest resolution application in coastal Antarctic marine sediments of a technique to measure the oxygen isotope ratios of microscopic marine algae fossils (diatom silica). When a large amount of glacial ice is discharged into the coastal ocean, this alters the oxygen isotope ratio of the sea water that the marine algae are living in. This creates a clear imprint in the fossils that reveals the environmental conditions of the time. The scientists used the oxygen isotope ratio of the fossils to reconstruct the amount of glacial ice entering the coastal ocean in the past 12,000 years, and to determine whether the variations in the amount of ice being discharged were the result of changes in the ocean or atmospheric environment. Professor Melanie Leng, from the British Geological Survey and Chair of Isotope Geosciences in the Department of Geology, University of Leicester, said: "Technologically the analysis of the oxygen isotope composition of diatom silica is extremely difficult; the British Geological Survey is one of very few research organisations in the world that can undertake this type of analysis. For this research project the methodology has been developed over the last five years with the specific aim of investigating the different amounts of melting in the polar regions. It's fair to say we are world-leading pioneers in this technique." Cardiff University: http://www.cardiff.ac.uk
Raptors are also known as “birds of prey”. They are not a natural animal family, but they are grouped together because they share a similar trait: hunting prey with their talons. There are five groups of raptors: - Accipitridae: Hawks, Eagles, Buzzards, Harriers, Kites and Old World Vultures - Pandionidae: Osprey - Sagittariidae: Secretary-bird - Falconidae: Falcons, Caracaras and Forest Falcons - Strigiformes: Owls
Guest blogger Katie Cunningham is an Assistant Professor at Manhattanville College. Her teaching and scholarship center around children’s literature, critical literacy, and supporting teachers to make their classrooms joyful and purposeful. Katie has presented at numerous national conferences and is the editor of The Language and Literacy Spectrum, New York Reading Association’s literacy journal. John Berger, in his famous documentary and book Ways of Seeing, explained that “Seeing comes before words. The child looks and recognizes before it can speak.” Visual literacies are, perhaps, the primary and first ways young children understand the world. Young children are not only visual readers of the world; they are naturally close readers as well. They closely read people’s facial expressions. They read signs to orient themselves. They read new blades of grass, flakes of snow, and changes in leaves as signs of seasonal change. For young children, close reading and visual literacies are their pathways for understanding. Yet our capacities to closely read what we see should be valued and strengthened beyond early childhood. Society certainly thinks so. Instagram now has more than 100 million active users per month and is increasingly being taken up by teens and tweens as their site of choice over Facebook. Pinterest has more than 48.7 million users. Staggeringly, more than one billion unique users visit YouTube each month. Businesses today certainly recognize the power of visually-driven social media outlets as the primary way to reach potential clients. Yet too often the skill of closely reading what we experience visually is devalued in school in favor of traditional print-based text. Are the Common Core State Standards (CCSS) repositioning the power of the visual as part of the definition of what it means to be an attentive reader today? One hopes so. The CCSS highlight close reading in Reading Anchor Standard 1: “Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text,” but they also set forth expectations for visual literacy through Reading Anchor Standard 7: “Integrate and evaluate content presented in diverse media and formats, including visually and quantitatively, as well as in words.” Specifically, this strand begins in Kindergarten: “With prompting and support, describe the relationship between illustrations and the story in which they appear,” while fifth graders should be able to “analyze how visual and multimedia elements contribute to the meaning, tone, or beauty of a text.” Schools have an opportunity through the standards to support students to integrate, evaluate, and analyze visual texts to understand complex texts and ultimately to be producers, not just consumers, of visual and multimedia texts. So how do we support students to attend to the visual more closely? The Writing Center at Harvard University has its ideas about How to Do a Close Reading, worthy of note for teachers of students at any age. The steps cited can be applied to the close reading of written texts but are particularly helpful for the close reading of visual texts. Step One Annotate the text: while our youngest learners are not annotating with pencil in hand, they are annotating out loud, sharing what strikes them as significant or surprising in both the written and visual components of text Step Two Look for patterns: repetitions, contradictions, and similarities.
Step Three Ask questions: about the patterns you’ve noticed, especially how and why. I’ve added a fourth step. Step Four Put it all together: what do your observations and questions lead you to conclude? To illustrate the power of close reading of visual texts, I used the above guidelines with three diverse, visually striking Lee & Low books: In Daddy’s Arms I Am Tall: African Americans Celebrating Fathers, a collection of poetry with collage artwork; Surfer of the Century, an illustrated biography; and Yummy: The Last Days of a Southside Shorty, a graphic novel. Pages 1 and 2 of In Daddy’s Arms, I Am Tall: African Americans Celebrating Fathers, illustrated by Javaka Steptoe - Annotate: What strikes me is the size of the father’s feet, how they are lightly shaded white making them seem like traces left in the dust, and how the boy’s arms are spread out wide trying to follow in his father’s footsteps. - Notice Patterns: The footprints zigzag, forming a repetitive pattern for the boy to follow. - Ask Questions: I wonder why the boy’s eyes are closed? What is he thinking? I’m wondering where the father’s steps will lead. - Meaning: Following in your father’s footsteps can feel like the natural path you’re supposed to take, but it’s not always easy. The footsteps are sometimes too big to fill. Find ways to steady yourself. Page 14 of Surfer of the Century by Ellie Crowe, illustrated by Richard Waldrep - Annotate: What strikes me is how small Duke seems compared to the open water against the backdrop of the Honolulu wharf, and how his swimming looks effortless. - Notice Patterns: The artist chose shades of blue and white to define the path of Duke’s wake in contrast to the glass-like surface of the water. - Ask Questions: I wonder what Duke thought as he reached the finish line and whether he knew he had won. I wonder why the spectators came out that day and what they thought of Duke’s record-breaking swim. - Meaning: Duke’s victory was a sight to behold because of the beauty of the place, the seemingly effortless beauty of his strokes, and the significance of his unexpected record-breaking time. Page 7 of Yummy: The Last Days of a Southside Shorty by G. Neri, illustrated by Randy DuBurke - Notice Patterns: The choice to illustrate the story in black and white leads to visual patterns across the page. The sequences of zooming in and pulling back help the reader follow Yummy’s story across the page but also allow us to tap into the complex ways he interacted with the neighborhood. - Ask Questions: I wonder what caused Yummy to follow in the gang members’ footsteps and how he felt when he projected hardened power. Did he wish he could simply be a young boy? What did he want? - Meaning: Yummy’s life was complex and wrapped up in the ways masculinity was defined for him. He liked the normal things young boys do, such as candy, but he was positioned to follow models of what it meant to be a man in his neighborhood. The models he had equated power with projected toughness and violence. Of course, there are other ways of seeing, but this framework provides a roadmap for supporting students in an ongoing way to attend more closely to the visual. And remember to listen to what your students are saying and consider adding steps based on the ways they see. After all, they are the experts.
The Power Rule Lesson 5 of 13 Objective: SWBAT use the Power Rule to find the derivative function. My students always experience a feeling of great relief during today's lesson. Instead of always using the limit method for finding the derivative, they find a shortcut that will allow them to quickly figure out the derivative. I typically preface the lesson by saying that the work they put in today will save them tons of time in the future (this helps motivate students who may be feeling a little weary after the work of the last two days). I give students this worksheet and have them work on questions #1 and #2 with their table groups. Then we will share out with the entire class. Here are some things I am looking for as we discuss. - Question #1 - The derivative is the slope of the tangent line at a given point. It is found by choosing an arbitrary second point and then moving that point closer to the given point by using the limit as the distance goes to zero. It can get abstract at times, but students need to remember that a derivative is a slope. This becomes our mantra for this unit and I will make them repeat "a derivative is a slope" over and over. - Question #2 - The instantaneous slope of any point on the graph of f(x) = 4x is going to be 4, so the slope will be 4 for any x-value. Thus the derivative function is f'(x) = 4. Now that we have reviewed a little about derivatives, I want students to see if they can figure out the Power Rule for derivatives by themselves. I have them work on questions #3-5 of this worksheet. I tell students that they can use any method to fill out these charts - they may want to use the limit method for finding the derivative or they may want to just think it through like they did for question #1. Either way, I want them to communicate their method with the others in their table group. In my experience, students can easily reason to find the derivative of f(x) = x. Then they use the limit definition to find derivatives for f(x) = x^2 and f(x) = x^3. At this point they will notice a pattern and will fill in the rest of the chart without verifying the results. As I notice students doing this, I want them to focus on why their answers are correct and how they could justify the claims without going through the whole algebraic process. Once students have completed questions #3-5, I will randomly select table groups to share their results with the class. I really want to make sure that we think about why this rule works. I will usually select a few students to share their reasons and justify their claims to the rest of the class. Thinking about the definition of the derivative and considering the algebraic steps is usually very convincing to students. In the video below I outline a way to prove to your students that the Power Rule works. If this method does not come up in your class discussion, I would definitely have a conversation about it. For the cases of multiplying by a constant, i.e. f(x) = ax^n, you could use the same argument as in the video above but just factor the constant out of every single term. Next we fill in the chart underneath question #5 to summarize the Power Rule. At this point, I think that it is imperative to list examples of when you can and cannot use the Power Rule. My students have a tendency to want to use it for any function that contains an exponent (such as g(x) = (2x + 4)^3), even though we did not prove that it could be done.
I stress that we can only use the power rule for functions written in the form f(x) = ax^n + bx^m + cx^p +..., where each term has its own coefficient and exponent. For g(x) above, we would have to expand (2x + 4)^3 before taking the derivative. For homework, students will work on questions #6-9 to finish up the worksheet. This will give them some more practice with the shortcut for finding the derivative and get them thinking about the equation of a tangent line.
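If you want students (or yourself) to sanity-check the shortcut against the limit definition, a few lines of code will do it. This sketch is our own illustration, not part of the lesson materials; it compares a symmetric difference quotient with n*x^(n-1):

```python
def numeric_derivative(f, x, h=1e-6):
    """Approximate f'(x) with a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

for n in range(1, 6):
    x = 2.0
    approx = numeric_derivative(lambda t: t ** n, x)   # limit-style estimate
    exact = n * x ** (n - 1)                           # Power Rule
    print(f"n={n}: difference quotient = {approx:.4f}, power rule = {exact:.4f}")
```

The two columns agree to several decimal places, which is a nice numerical echo of the algebraic argument discussed above.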
Vermont was the first state to define an elevated blood lead level as 5 µg/dL or more. About 70% of homes in Vermont were built before 1978, the year lead in house paint was banned. Explore Vermont Data Lead is a highly toxic metal that has been commonly used in many household, industrial and automobile products, such as paint, solder, batteries, brass, car radiators, bullets, pottery, etc. Lead poisoning is a serious but preventable health problem. Too much lead in the body can cause serious and permanent health problems. Children and pregnant women are at special risk. The only way to find out if a child has been exposed to too much lead is by a blood test. A blood test measures the amount of lead in blood. Blood tests are commonly used to screen children for lead poisoning and can be easily done at a child’s regular checkup. Vermont law requires that all children be tested at ages 1 and 2. Vermont Tracking provides blood lead level data for young children in two overall categories: - Birth Cohort Data - Annual Data A birth cohort is a group of individuals born during the same period or year. For blood lead data in Tracking, the birth cohort is the number of children born in a particular calendar year who are then followed until they reach their third birthday. The 2000 birth cohort (children born in 2000) is the earliest lead data in Vermont Tracking. Data for this 2000 birth cohort are shown under the year 2003, which is the year these children turned 3. Tracking presents data for: - Number of Children Tested Before Age 3 - Percent of Children Tested Before Age 3 - Number of Tested Children with Elevated Blood Lead Levels - Percent of Tested Children with Elevated Blood Lead Levels - Number of Tested Children by Category of Blood Lead Test Results - Percent of Tested Children by Category of Blood Lead Test Results If a child has had more than one blood test before age 3, a process defined by the national tracking program determines which blood lead result is used for that child’s blood lead level. Elevated blood lead levels are shown by both Vermont’s definition (any test at 5 micrograms per deciliter (µg/dL) of blood or greater) and CDC’s definition (confirmed tests at 10 µg/dL of blood or greater). The most common way that children become lead poisoned in Vermont is from lead-based paint and dust in older homes. Vermont Tracking provides the number and percentage of homes built before 1950, between 1950 and 1979, and after 1979. Children who live in poverty are considered to be a population at higher risk for lead poisoning. Vermont Tracking provides data on the number and percentage of children younger than 5 years who are living in poverty.
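To show how the cohort measures above fit together, here is a minimal sketch with made-up counts (these are illustrative numbers, not Vermont data):

```python
# Hypothetical birth-cohort counts: children born in one year, followed to age 3.
cohort = {
    "born": 6000,             # children born in the cohort year (made up)
    "tested_before_3": 4800,  # received at least one blood lead test
    "elevated_5": 360,        # >= 5 µg/dL (Vermont definition)
    "elevated_10": 40,        # >= 10 µg/dL confirmed (CDC definition)
}

pct_tested = cohort["tested_before_3"] / cohort["born"] * 100
pct_elev_vt = cohort["elevated_5"] / cohort["tested_before_3"] * 100
pct_elev_cdc = cohort["elevated_10"] / cohort["tested_before_3"] * 100
print(f"tested: {pct_tested:.0f}%  elevated (VT): {pct_elev_vt:.1f}%  elevated (CDC): {pct_elev_cdc:.1f}%")
```

Note that the elevated percentages are computed against children *tested*, not children born, which mirrors the "Percent of Tested Children" measures in the list above.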
Bilirubin is a yellowish compound that is produced in the catabolic pathway of the breakdown of heme in vertebrates. Catabolism of heme is needed to clear the body of the waste products generated by the death of aging red blood cells. The structure of bilirubin consists of an open chain of four pyrrole-like rings. Bilirubin is produced by the breakdown of the red blood cells in the body. It travels to the liver and is secreted into the bile duct. It is eventually eliminated from the body in the stool. A typical red blood cell has a lifespan of about 120 days. The hemoglobin present in the RBCs gets broken down into bilirubin and other substances. The immediate product of biliverdin reductase is unconjugated bilirubin, which accumulates in circulation. Due to its ability to cross the blood-brain barrier, it can lead to conditions that impair normal brain functioning. These conditions can be fatal if left untreated. Excessive levels of unconjugated bilirubin in the bloodstream can cause a condition called kernicterus. Accumulation of this substance in circulation gives a yellow discoloration of mucous membranes and skin, which is known as jaundice. This condition is a common symptom of liver disease. Due to the toxicity of unconjugated bilirubin, the body must convert it to the less toxic conjugated bilirubin, a process that takes place in the liver. Cellular Heme Metabolism and Bilirubin Production Bilirubin is produced by a two-stage reaction that occurs in the reticuloendothelial system of cells. These cells include phagocytes, Kupffer cells of the liver, and cells in the bone marrow and spleen. The catabolism of heme proteins occurs in the microsomal fraction of the cell by heme oxygenase. The action of this enzyme is substrate-inducible, meaning the heme serves both as the substrate and the cofactor for the reaction. The process begins with the breakdown of red blood cells, which contain the heme pigment and globin chains in association with iron. After the cell has remained in circulation for approximately 120 days, each of the individual components is broken down into various products. The amino acids from the protein component of hemoglobin are released from the structure and are catabolized or re-used for protein synthesis. The heme portion, on the other hand, undergoes degradation, starting with a mixed-function oxidase reaction that causes an opening of the ring and the conversion of one of the methine bridge carbons to carbon monoxide. The next step in the process is the release of the iron from the resulting linear tetrapyrrole. The iron is then transported to storage pools in the bone marrow to be reused in erythrocyte production. The iron atom that reaches the heme oxygenase is usually oxidized to its hemin, or ferric, form. After the release of the iron atom and the ring-opening of the heme group, biliverdin is produced. Biliverdin is a green tetrapyrrolic bile pigment that is responsible for the greenish color seen in bruises. The biliverdin is reduced by biliverdin reductase. Biliverdin reductase is an enzyme found in all tissues, especially in the reticulo-macrophages of the liver and spleen. This enzyme is responsible for converting biliverdin to bilirubin by reducing the double bond between the second and third pyrrole rings to form a single bond. The reduction of biliverdin requires NADH or NADPH as a cofactor. Biliverdin reductase catalyzes the reaction through binding sites in which Lys18, Lys22, Lys179, Arg183, and Arg185 are key residues.
These binding sites attach to the biliverdin molecule, causing it to dissociate from heme oxygenase and eventually leading to the reduction of the green biliverdin to the yellow bilirubin.

In the uptake to the liver, bilirubin is taken up at the sinusoidal surface of liver cells by a facilitated transport system. Once the bilirubin molecule enters the cell, it is bound to cytosolic proteins such as glutathione S-transferase, also called ligandin, to prevent its re-entry into the bloodstream. In the liver, bilirubin is solubilized by reaction with two glucuronic acid molecules. This step is necessary because bilirubin is highly non-polar and would otherwise persist in cells; conjugation with glucuronic acid converts it into a more polar molecule. The enzyme UDP-glucuronosyltransferase, a bilirubin-specific enzyme in the endoplasmic reticulum, catalyzes the step-wise transfer of two glucuronosyl moieties from UDP-glucuronate to the bilirubin molecule. The solubilized bilirubin, bilirubin diglucuronide, is then secreted into the bile and finally excreted via the intestine.

Enterohepatic Circulation and Health

Most of the soluble (conjugated) bilirubin is excreted via the bile into the intestines. Most of the bile acids are reabsorbed into circulation at the terminal ileum, whereas the conjugated bilirubin passes into the large intestine, where colonic bacteria deconjugate it and convert it into urobilinogen. Urobilinogen is later oxidized into stercobilin, which gives feces its characteristic color; urobilinogen also appears in the urine. A trace of urobilinogen is reabsorbed into the hepatic circulation and returned to the liver, where it is excreted again via the intestines, forming the enterohepatic circulation.

- Jaundice arises from excess bilirubin in the body (hyperbilirubinemia)
- Yellowish skin, sclera of eye
- Bilirubin degraded by light

Many possible jaundice causes:
Evolution

Definition of evolution: the process by which species arise and change over time; the idea that all organisms have descended from common ancestors.

Charles Darwin, Origin of Species, 1859: descent with modification, natural selection, sexual selection.

Evidence of Evolution
- Fossil record: tells the story of evolution
- Taxonomy: classifying organisms into groups
- Comparative anatomy: homologous structures, intermediate forms, vestigial structures
- Comparative embryology: early embryos of vertebrates are alike
- Molecular biology: DNA, proteins, mtDNA

Fossils
An organism becomes a fossil only if it dies under the right conditions. The majority of dead plants and animals are consumed by other organisms and leave no trace. Dead organisms only leave a trace or imprint if they are quickly buried in a bog or at the bottom of a lake or ocean.

Clues of Human Evolution
Paleontologists find fossils; remains of humans have been preserved as fossils. A bog is a wetland low in nutrients, with slightly acidic, mossy soil. Example: Tollund Man (Denmark), European Iron Age, 2,400 years ago.

Buried in mud or at the bottom of a body of water, the remains of the dead are protected from scavengers, erosion and decay. They become buried deeply in successive layers of mud. Over time, pressure from all these layers of sediment turns the deepest layers into sedimentary rock. After millions of years, geologic forces raise the rock into mountains and canyons and reveal fossils.

Sedimentary rock: the Grand Canyon shows layers of rock.

Taxonomy
All organisms are grouped into hierarchies based on their relationships. Organisms of the same species can breed and produce viable offspring. Major taxonomic levels: KPCOFGS (Kingdom, Phylum, Class, Order, Family, Genus, Species).

Common Name   Human       Chimpanzee    Lion
Kingdom       Animalia    Animalia      Animalia
Phylum        Chordata    Chordata      Chordata
Class         Mammalia    Mammalia      Mammalia
Order         Primates    Primates      Carnivora
Family        Hominidae   Hominidae     Felidae
Genus         Homo        Pan           Panthera
Species       sapiens     troglodytes   leo

Binomial nomenclature uses genus and species: Homo sapiens. Homo is Latin for "man" or "human"; sapiens means "wise". Homo sapiens therefore means "wise human".

Comparative Anatomy
Species descended from a common ancestor may evolve in different directions and still keep some of the same characteristics. Evolutionary scientists compare the body structures of organisms to find clues; for example, comparing the legs of a human and an ape.
- Homologous structures: similar structures in two or more species that give evidence of a common ancestor. Similar in structure, same in origin, different in function.
- Intermediate forms: successive changes in homologous bone structures provide evidence of evolution. Also called transitional forms.
- Vestigial structures: parts of an organism with little or no function that reflect evolutionary history.

Comparative Embryology
Compare embryo development across vertebrates.

Molecular Biology
DNA, mtDNA and proteins. The human genome contains roughly 20,000–25,000 genes. Molecular geneticists have compared the DNA sequences of humans and chimpanzees and found them 98.8% identical (a toy example of this kind of comparison appears after this outline). Immunological protein analysis shows that humans are more closely related to African apes than to Asian apes.

Mitochondrial DNA contains 37 genes. Thirteen of these genes provide instructions for making enzymes involved in oxidative phosphorylation.
Oxidative phosphorylation is a process that uses oxygen and simple sugars to create adenosine triphosphate (ATP), the cell's main energy source. The remaining genes provide instructions for making molecules called transfer RNA (tRNA) and ribosomal RNA (rRNA).

Mitochondrial DNA is passed from a mother to all of her children, and points of mutation in it provide a clear record for tracing maternal lineages.

Homework: look up the Rift Valley. Explain where it is and why it is important to the study of human evolution.

Becoming Human
As humans have evolved, they have developed larger brains, bipedal walking, sparse body hair, nonopposable toes and longer feet, a grasping and flexible thumb, unspecialized teeth, and arching backs.

Australopithecus afarensis ("Lucy") was found in Ethiopia in 1974. This is the earliest species of Australopithecus; it lived between about 4 million and 3 million years ago. Its brain was about the same size as a chimp's, and it stood 3 feet tall and weighed about 70 lbs.

Researchers also found that the model of locomotion produced in their simulations closely matched a set of fossilized footprints thought to have been left by A. afarensis in Laetoli, Tanzania, some 3.6 million years ago. They constructed the computer model using the fossilized A. afarensis skeleton known as "Lucy", recovered from Ethiopia in 1974. The researchers then added virtual muscle to their simulation and used genetic algorithms to "evolve" the optimal walking movement for the creature.

Australopithecus africanus lived perhaps from 3 million to 1 million years ago and probably evolved from A. afarensis. It had a rounder skull and a slightly larger brain. Tooth and jaw design suggest it chewed plant foods but might also have scavenged meat from the remains of carnivores' kills.

Homo neanderthalensis (Neanderthal man): fossils were first found in Germany. More Neandertal skeletons have been found than of any other ancient human species. Neanderthals lived in Europe and Southwest Asia from at least 130,000 years ago, and may have evolved from Homo heidelbergensis in Southern Europe. They had a protruding jaw, receding forehead, and weak chin. The average Neanderthal brain was slightly larger than that of modern humans, but this is probably correlated with larger body size in general. They may have been a different species than Homo sapiens.

Homo heidelbergensis (600,000 to 100,000 years ago): the skulls of this species share features with modern Homo sapiens, but the brain was smaller than in most modern humans.

Homo sapiens sapiens (Cro-Magnon man) is the earliest modern human, living from about 35,000 years ago. The body was heavy, solid and muscular, with a straight forehead and slight browridges. Cro-Magnons were the first humans to have a prominent chin.

An evolutionary comparison (from left to right): Homo erectus, 1 million years old; Australopithecus afarensis, 2.5 million years old; Homo neanderthalensis, 100,000–32,000 years old.
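The 98.8% human–chimpanzee figure above comes from genome-scale alignments, but the underlying idea is simple enough to show in a few lines. This is a toy sketch in Python: the two "sequences" are invented fragments that are already aligned, and real comparisons must also handle insertions, deletions and vastly longer sequences.

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Share of positions with the same base in two pre-aligned sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("Sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100 * matches / len(seq1)

# Toy aligned fragments, not real human/chimp data.
human = "ATGGCCCTGTGGATGCGC"
chimp = "ATGGCCCTCTGGATGCGC"
print(f"{percent_identity(human, chimp):.1f}% identical")  # 94.4% for these fragments
```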
Cat Scratch Disease Fact Sheet
Minnesota Department of Health
Revised August 2010

What is Cat Scratch Disease?
Cat Scratch Disease (CSD) is an uncommon infection caused by the bacterium Bartonella henselae. Generally, people who get CSD are either bitten or scratched by a cat before they get sick. Most healthy people do not develop any symptoms, and those with a mild infection usually get better without any treatment.

What are the symptoms of CSD?
- swollen lymph nodes near the site of the bite or scratch
- poor appetite
- skin pustule at site of the bite or scratch; usually develops 1 to 2 weeks before lymph nodes begin to swell

Symptoms usually begin 3 to 14 days after being bitten or scratched by an infected cat. People with a weakened immune system due to disease or medication are more likely to have complications from CSD. These complications are rare and include Parinaud's oculoglandular syndrome, an eye infection that causes inflammation of the optic nerve and can lead to blindness, and bacillary angiomatosis, a systemic illness characterized by lesions on the skin, mucosal surfaces, liver, spleen and other organs.

How is CSD treated?
Antibiotics may be used to speed recovery in cases of acute or severe illness, but most people do not require treatment; recovery occurs spontaneously within 2 to 4 months.

How is CSD diagnosed?
A diagnosis is made based on appropriate exposure history, symptoms, and a blood test that can detect antibodies to B. henselae.

Can I get CSD from my cat?
Yes, it is possible to get CSD from your cat. Most people get CSD from cat bites or scratches. Kittens are more likely than adult cats to be infected and therefore able to pass the bacteria to humans. Cats are the natural reservoir for the bacterium that causes CSD and generally do not show any signs of illness, so it is impossible to know which cats can spread CSD to you. Fleas are responsible for transmitting B. henselae between cats, and it is believed that transmission to humans occurs through contamination of bites or scratches with flea excrement. There is no human-to-human transmission of CSD.

How can I reduce my chances of getting CSD from my cat?
- Maintain excellent flea and tick control
- Avoid rough play with cats
- If you have an open wound, do not allow a cat to lick it
- Thoroughly wash the site of a bite or scratch with soap and water
- Adopt or buy cats that are in good health and without fleas
I posted a free lesson activity to commemorate the 9/11 anniversary on the TCI Blog. The activity has students gather the memories of people who remember the day and turn them into word art. Students then immerse themselves in a discussion of the essential question: How do people remember 9/11? You can download the lesson activity free here: http://bit.ly/qAOS9N
Many adverbs and adverbial expressions can go at the beginning of a clause.
- Once upon a time there lived three little kittens.
- One day they decided that they should explore the world.
- Then they realized that they had made a mistake.

Adverb particles are often fronted when giving instructions to small children.
- Off we go!
- Down you come!
- In you go!
- Out you come!

Adverbs are also fronted for emphasis.
- Now you ask me! (= Why didn't you ask me before?)

After some emphatic fronted adverbs and adverbial expressions, we use the inverted word order. That means the auxiliary verb comes before the subject.
- Under no circumstances can we tolerate this. (NOT Under no circumstances we can tolerate this.)

Structures with as or though

Adjectives and adverbs are often fronted in expressions with as and though. In this structure, as means though.
- Clever as he was, he could not solve the problem. = Though he was clever, he could not solve the problem.
- Strong as he was, he could not beat his opponent. = Though he was strong, he could not beat his opponent.
- Tired though she was, she went on working.
- Fast though she drove, she could not catch them.
- Much as I respect him, I cannot agree with him.
Still Stalking Its Victims

Tuberculosis, the disease you thought was history, continues to be a scourge across much of the world.

When we hear the word "tuberculosis," we're likely to think of a plague from the Middle Ages, or a poet dying of consumption in 19th-century Paris. Despite the perception that TB is a thing of the past, however, the contagious bacterial infection continues to have devastating consequences in much of the developing world. Tuberculosis now infects one-third of the entire human population, resulting in as many as 9 million new cases and 1.7 million deaths each year, according to the latest figures from the World Health Organization.

Thankfully, 90 percent of people infected with the tuberculosis bacteria never contract the disease. Still, doctors and biomedical researchers are working to understand why some humans develop TB while others do not. "That's the magic-bullet question," says Gillian Beamer, an assistant professor of biomedical sciences at the Cummings School of Veterinary Medicine at Tufts. She is leading a research program to study the disease, focusing on two genetically different strains of mice—one that contracts TB when infected, the other that doesn't. "I focus on how the [animals'] immune response fights the bacteria," as well as the differences in TB-resistant and TB-susceptible mice, she says. "Someday, hopefully, the information can translate to humans."

The bacterium that causes the disease, Mycobacterium tuberculosis, spreads through the air and can infect any organ in the body, though it is most commonly found in the lungs. Once inside the body, it burrows deep into cells, including the infection-fighting white blood cells. When TB infection occurs, nodules called tubercles or granulomas form. Patients lose weight and energy and develop a persistent cough—eventually coughing up bright red blood before succumbing to the disease.

Tuberculosis has plagued humans since the Stone Age, and evidence of infection has been found in Egyptian mummies. In the Middle Ages it was known as the "white plague," and in the 18th and 19th centuries it was the cause of a quarter of all deaths in the Western world. In the 1940s and '50s, however, the availability of antibiotics and a new vaccine, effective in children, sharply curtailed TB outbreaks.

"It was brought under control to a great extent and almost vanished from the population," says Saul Tzipori, director of the infectious diseases program at the Cummings School. Assuming that TB was history, doctors stopped administering vaccines and declared victory. But when HIV/AIDS emerged in Africa in the 1980s, TB re-emerged right alongside it, preying on the weakened immune systems of those with the human immunodeficiency virus and acquired immune deficiency syndrome. "HIV/AIDS created a whole new pool of susceptible individuals," says Tzipori, holder of the Agnes Varis University Chair in Science and Society at Tufts. "The microorganism stays dormant in most people, but flares up mostly because of a change in the immune status of the host."

Even that connection is unclear, however, because the majority of TB patients don't have HIV. "Quite often there is no clear indication" for who develops TB, says Tzipori.

Finding the Markers

When the New England Regional Biosafety Laboratory opened on the Cummings School campus in 2009, studying TB became a priority.
Tufts recruited Beamer, who did her doctoral research on the disease at Ohio State University, where she looked at the amounts of certain cell-produced proteins that seem to control whether mice are more susceptible to TB. One of them, called interferon gamma (IFNg), appears to help the body fight TB by “turning on” white blood cells to make them more powerful combatants. Another protein, called IL-10, seems to have the opposite effect, shutting down the immune system instead of stimulating it. The research may help identify who in a population is susceptible to TB and who isn’t. “A blood test could be developed to identify some kind of marker for protective immunity, and these are active areas of research by many investigators,” says Beamer. That could help physicians determine which patients would benefit from the antibiotics used to treat TB. The antibiotic course currently prescribed is extremely complex and expensive, consisting of multiple pills taken multiple times a day for six to nine months. Targeting their use to those most susceptible could help save resources, reduce side effects for patients and cut down on antibiotic-resistant strains of TB that recently have begun to emerge. Though humans are much more complex than the mice Beamer studies in her lab, it makes sense for a veterinary school to be conducting this research. “To me, humans are just one other animal,” says Beamer, who earned her V.M.D. at the University of Pennsylvania. “The specific protein reactions that happen in a cell can be different, but in a broad way, what happens on an animal level is very similar.” As a veterinarian with expertise in animal anatomy, physiology and cell biology, Beamer says she has “been trained to evaluate the entire animal, so I have a slightly different perspective than people who don’t have that training.” Beamer wants to expand her research to look more closely at what happens to individuals when they progress from a controlled infection to an active stage of TB. Her efforts could help in the global fight against the disease that includes attempts to develop a more effective vaccine, new antibiotics to target specific strains of TB and less-invasive diagnostic tests such as using spit instead of drawing blood. “Despite the many labs around the globe that are working on TB, there is still some basic information that is missing,” says Tzipori. “If we can generate a better understanding about what happens between the host and the bug, then we can devise better measures for control.” While those measures may be a ways off, the Tufts researchers are contributing to a multipronged attack designed to make TB history once again. Michael Blanding is a freelance writer based in Boston.
Students will learn how to “read” and understand editorial cartoons. Time Required: 20-minute class period with additional time for extension activity Setup: Explain the concept and purpose of an editorial cartoon to the class. Hand out “Think It Through” Activity 3 Printables (PDF) and ask students to study the editorial cartoon. Have a class discussion about the difference between the obvious and hidden messages of the cartoon. Activity: Separate students into groups of four. Have them work together to complete the activity sheet as a group. Each student should also provide their own response to the questions on their activity sheet. Wrap-up: Have students discuss how important it is to be able to identify both hidden and obvious media-related messages, whether in cartoons, on TV, or in music. Extension: Have students draw their own editorial cartoons about understanding media messages. Evaluation: Were students able to understand the difference between hidden and obvious messaging? Were they able to determine what ideas the cartoon was portraying? Did they understand the overall message of the cartoon?
Before New York City became the shining metropolis that we know today, with its glass and concrete skyscrapers and wide, shop-filled avenues, much of it was quiet farmland. In fact, prior to the mid-1800s, most of the area that would become New York City was all but undeveloped.

Before European colonization, the area we now know as New York was inhabited by a number of Algonquian tribes living in small communities. Then, after the Dutch invaded in 1624 and drove out the Native Americans, the region was known as New Amsterdam and grew to around 8,000 inhabitants. New Amsterdam was then seized by the English in 1664 and, after briefly returning to Dutch hands, was permanently ceded at the end of the Third Anglo-Dutch War in 1674 and rechristened the colony of New York, after the Duke of York. Due to its role as a major trading port in the region, the colony of New York began to grow in this period.

After the Revolutionary War, New York only grew in prominence in what was now the fledgling United States. Nevertheless, the city still remained a largely undeveloped collection of farms, houses, and businesses. It wasn't until the 1830s and 1840s that New York truly began to build the recognizable foundation of the city that we know today. At that time, wealthy landowners began to move into the city and lobbied for the development of public works like parks and roads.

At the same time, vast numbers of immigrants were flooding into the area. This wave included a vast number of Irish immigrants fleeing the Great Famine in their country, and many Germans fleeing revolutions in theirs. Furthermore, New York became a free state in 1827, drawing African-Americans from across the country. This mass of both laborers and wealthy elites laid the groundwork for the increased development of the city.

Thus, throughout the latter half of the 19th century, many people lived in farms and shanty towns as, slowly but surely, a major city formed around them. But this city was not yet a single community. In fact, up until 1898, Brooklyn, Queens, and the Bronx were all separate from New York. The images above show how New York looked before it became one city, before it was developed, before it grew into the city we now know. From a conglomerate of pastoral towns to the gleaming city on the hill, New York's development is a sight to behold.
Korean Society Celebrations

In Korea, on the 100th day after a child's birth, a small feast is prepared to celebrate the child's having survived this difficult period. If the child is sick at this time, the family passes the day with neither announcement nor party, for to do otherwise is considered bad luck for the infant. At this time the samshin halmoni is honored with offerings of rice and soup in gratitude for having cared for the infant and the mother, and for having helped them live through a difficult period. The family, relatives and friends then celebrate with rice cakes, wine, and other delicacies such as red and black bean cakes sweetened with sugar or honey. To prevent potential harm to the child and to bring him or her good luck and happiness, red bean cakes are customarily placed at the four compass points within the house. If the steamed rice cakes are shared with 100 people, it is believed that the child will have a long life. Therefore, rice cakes are usually sent to as many people as possible to help celebrate the happiness of the occasion. Those who receive rice cakes return the vessels with skeins of thread, expressing the hope of longevity, and rice and money, symbolizing future wealth.

Such customs are also part of the tol, or first birthday, celebration. Because of the high infant mortality rates in the past, this celebration is considered to be even more important. Like the 100th day celebration, it begins with offerings of rice and soup to the samshin halmoni. However, the highlight of this celebration is when the child symbolically foretells his or her own future. For this ritual, the child is dressed in new traditional Korean clothes. A male child wears the traditional hood worn by unmarried youths, and a female child wears make-up. The child is seated before a table of various foods and objects such as thread, books, notebooks, brushes, ink and money, which have all been given to the family by friends and relatives. The child is urged to pick up an object from the table, as it is believed the one selected first will foretell the child's future. If the child picks up a writing brush or book, for example, he is destined to be a scholar. If he picks up money or rice, he will be wealthy; cakes or other food, a government official; a sword or bow, a military commander. If the child picks up the thread, it is believed he will live a long life. This is followed by feasting, singing and playing with the toddler. Most often guests will present gifts of money, clothes, or gold rings to the parents for the child at this time. Upon departure, guests are given packages of rice cakes and other foods to take with them. This sharing of rice cakes is thought to bring the child long life and happiness.

The hwan-gap, or 60th birthday, has also been considered an especially important birthday celebration, for this is the day when one has completed the zodiacal cycle. Even more important is the fact that, in the past, before the advent of modern medicine, few people lived to be 60 years old. The hwan-gap therefore became a time of great celebration, when children honor their parents with a large feast and much merrymaking. With the parents seated at the main banquet table, sons and daughters, in order of age, bow and offer wine to their parents. After the direct descendants have performed this ritual, the father's younger brothers and their sons, and then younger friends, pay their respects in the same manner.
While these rituals are being carried out, traditional music is usually played and professional entertainers sing songs, urging people to drink. Family members and relatives indulge in various activities to make the parents feel young, often dressing like small children and dancing and singing songs. In the old days, guests would compete in composing poetry or songs in celebration of the occasion.

In the past, years after the 60th birthday were regarded as extra years, and although subsequent birthdays called for a celebration, they were not celebrated as lavishly as the hwan-gap. Upon the 70th birthday, or kohui, meaning "old and rare," another celebration equal in scale to the hwan-gap was held.

Although smaller in size and scope than the hwan-gap and tol celebrations, the birthday of each member of the family calls for ample food, wine and specially prepared delicacies.

Information provided by the Korean Embassy
What the heck is that giant exoplanet doing so far away from its star? Astronomers are still trying to figure out the curious case of HD 106906 b, a newly found gas giant that orbits at an astounding 650 astronomical units, or Earth–sun distances, from its host star. For comparison, that's more than 20 times farther from its star than Neptune is from the sun.

"This system is especially fascinating because no model of either planet or star formation fully explains what we see," stated Vanessa Bailey, a graduate astronomy student at the University of Arizona who led the research.

HD 106906 b is 11 times the mass of Jupiter, throwing conventional planetary formation theory for a loop. Astronomers believe that planets gradually form from clumps of gas and dust that circle around young stars, but that process would take too long for this exoplanet to form: the system is just 13 million years old. (Our own planetary system is about 4.5 billion years old, by comparison.) Another theory is that if the disc collapses quickly, perhaps it could spawn a huge planet, but it's improbable that there is enough mass in the system for that to happen.

Perhaps, the team says, this system is like a "mini binary star system," with HD 106906 b being more or less a failed star of some sort. Yet there is at least one problem with that theory as well: the mass ratio of the planet and star is something like 1 to 100, and usually these scenarios occur at ratios of 1 to 10 or less.

"A binary star system can be formed when two adjacent clumps of gas collapse more or less independently to form stars, and these stars are close enough to each other to exert a mutual gravitational attraction and bind them together in an orbit," Bailey stated. "It is possible that in the case of the HD 106906 system the star and planet collapsed independently from clumps of gas, but for some reason the planet's progenitor clump was starved for material and never grew large enough to ignite and become a star."

Besides puzzling out how HD 106906 b came to be, astronomers are also interested in the system because they can clearly see leftovers, or a debris disk, from the system's formation. By studying this system further, astronomers hope to figure out more about how young planets evolve.

At 2,700 degrees Fahrenheit (about 1,500 degrees Celsius), the planet is most easily visible in infrared; the heat is left over from the planet's formation, astronomers said.

The astronomers spotted the planet using the Magellan telescope at Las Campanas Observatory in Chile's Atacama Desert. It was visible with both the Magellan Adaptive Optics (MagAO) system and the Clio2 thermal infrared camera on the telescope. The planet was confirmed using Hubble Space Telescope images from eight years ago, as well as the FIRE spectrograph on Magellan, which revealed more about the planet's "nature and composition," a press release stated.

The research paper is now available on the preprint site arXiv and will be published in a future issue of Astrophysical Journal Letters.

Source: University of Arizona
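To get a feel for how extreme a 650 AU orbit is, Kepler's third law gives the period directly. A minimal sketch, assuming the 650 AU projected separation is the semi-major axis and taking a rough stellar mass of 1.5 suns (the article does not state the star's mass):

```python
import math

a_au = 650      # separation from the article, treated here as the semi-major axis
m_star = 1.5    # assumed stellar mass in solar masses (not given in the article)

# Kepler's third law in solar-system units: P[years]^2 = a[AU]^3 / M[solar masses]
period_years = math.sqrt(a_au ** 3 / m_star)
print(f"One orbit takes roughly {period_years:,.0f} years")  # ~13,500 years
```

Even with generous error bars on the assumed mass, a single orbit takes on the order of ten thousand years, which is part of why such a wide companion is hard to explain with ordinary in-disc formation.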
Think global warming might change things a little? You haven't seen anything compared to 50 million years ago.

Antarctica is one of the coldest places on Earth year-round, and the continent's interior is the coldest place of all, with annual average land temperatures far below zero degrees Fahrenheit. During the Eocene epoch, 40–50 million years ago, however, there was a period with high concentrations of atmospheric CO2 and consequently a greenhouse climate. The CO2 was not the only culprit; there have been periods where CO2 was 10 times what it is now without spiking the heat, but it sure didn't help things. It meant that some parts of ancient Antarctica were as warm as today's California coast, and polar regions of the southern Pacific Ocean registered 21st-century Florida heat.

Writing in the Proceedings of the National Academy of Sciences, the authors say their new measurements, using a technique called carbonate clumped isotope thermometry, can help improve climate models used for predicting future climate. "Quantifying past temperatures helps us understand the sensitivity of the climate system to greenhouse gases, and especially the amplification of global warming in polar regions," says co-author Hagit Affek, associate professor of geology and geophysics at Yale.

Peter M.J. Douglas, lead author and postdoctoral scholar at the California Institute of Technology, says that by measuring concentrations of rare isotopes in ancient fossil shells, they found that temperatures in parts of Antarctica reached as high as 17 degrees Celsius (63°F) during the Eocene, with an average of 14 degrees Celsius (57°F)—similar to the average annual temperature off the coast of California today.

Eocene temperatures in parts of the southern Pacific Ocean measured 22 degrees Celsius (about 72°F), researchers said—similar to seawater temperatures near Florida today. Today the average annual South Pacific sea temperature near Antarctica is about 0 degrees Celsius.

These ancient ocean temperatures were not uniformly distributed throughout the Antarctic ocean regions—they were higher on the South Pacific side of Antarctica—and researchers say this finding suggests that ocean currents led to a temperature difference.

"By measuring past temperatures in different parts of Antarctica, this study gives us a clearer perspective of just how warm Antarctica was when the Earth's atmosphere contained much more CO2 than it does today," said Douglas. "We now know that it was warm across the continent, but also that some parts were considerably warmer than others. This provides strong evidence that global warming is especially pronounced close to the Earth's poles. Warming in these regions has significant consequences for climate well beyond the high latitudes due to ocean circulation and melting of polar ice that leads to sea level rise."

To determine the ancient temperatures, the scientists measured the abundance of two rare isotopes bound to each other in fossil bivalve shells collected by co-author Linda Ivany of Syracuse University at Seymour Island, a small island off the northeast side of the Antarctic Peninsula. The concentration of bonds between carbon-13 and oxygen-18 reflects the temperature at which the shells grew, the researchers said. They combined these results with other geo-thermometers and model simulations.
"We managed to combine data from a variety of geochemical techniques on past environmental conditions with climate model simulations to learn something new about how the Earth's climate system works under conditions different from its current state," Affek said. "This combined result provides a fuller picture than either approach could on its own."
We’ve covered a number of studies on the slower rise of atmospheric warming in recent years. Those studies have mainly focused on the oceans, the Pacific Ocean in particular, where cool La Niña conditions have dominated due to stronger trade winds, resulting in greater heat uptake by deeper ocean water. But that’s not the only source of short-term climate variability to account for.

Volcanic eruptions also affect temperatures, as they add tiny particles of sulfate (referred to as “aerosols”) to the atmosphere. These particles increase the amount of sunlight that gets reflected back into space, “shading” Earth’s surface. Big eruptions, like Mount Pinatubo in 1991, can lower average global temperatures for a couple of years. Huge eruptions, of course, can have a more dramatic impact—like the “Year Without a Summer” in 1816.

We haven’t had any major eruptions since 1991, but there have been plenty of smaller ones. Since 2000, there have been about 17 eruptions large enough to warrant consideration in the context of the global climate. Recently, a group led by Benjamin Santer at the Lawrence Livermore National Laboratory examined satellite measurements of atmospheric aerosols in order to work out how much of an impact those eruptions have had on the climate.

First, the researchers compared the aerosol measurements to satellite records of temperature and reflected solar radiation. After removing the influence of El Niño/La Niña conditions (which can influence reflected sunlight via cloud cover), they found (as expected) a good correlation with reflected solar radiation. The correlation with temperature was more complicated—lots of things affect the temperature—but a link was also apparent during periods of higher volcanic activity.

That’s doubly interesting because recent volcanic activity wasn’t accounted for in the climate model simulations compiled for the latest Intergovernmental Panel on Climate Change report. The models used estimates of observed “forcings” (factors that push the climate toward warming or cooling) up to 2005, after which the projections of future scenarios took over. The inputs used for the pre-2005 period included Pinatubo but dropped its sulfate contributions to negligible levels by 2000—and they didn't account for any eruptions after that. (There is a reason to ignore small eruptions. The model simulations are averaged together to represent long-term trends rather than short-term variability. Since the timing of volcanic eruptions is unpredictable and doesn’t change the long-term trends, no attempt is made to anticipate volcanic activity.)

Most of the 17 eruptions took place after 2005, meaning that the real world experienced a source of cooling that the simulated worlds in the models did not. The difference made by accounting for this isn’t trivial, but it isn’t huge, either. Several newer studies showed that adding the recent volcanic activity to the model simulations lowered global temperature by about 0.02 to 0.07 °C by 2010.

While this is a point of interest for climate scientists more than the average person, the researchers also point out that it illustrates one problem with simplistic claims that recent temperatures show CO2 doesn’t cause much warming. There are a number of sources of short-term variability to account for, both in the real world and in climate model simulations. In addition, the researchers say that volcanic aerosols are unlikely to be the only model inputs that could be improved for recent years—and our observations aren’t perfect to boot.
A meaningful analysis takes much more than eyeballing a couple graphs. Things like small eruptions are the fine details that keep researchers hard at work and help climate modelers interpret their results. The public argument about climate change, on the other hand, often glosses over those details.
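To see why a handful of small eruptions moves the needle by only a few hundredths of a degree, a back-of-the-envelope energy-balance sketch helps. All of the numbers below are rough, commonly quoted approximations rather than anything from the study: the forcing per unit of stratospheric aerosol optical depth (around -25 W/m² per unit AOD), the equilibrium sensitivity (around 0.8°C per W/m²), and the fraction of the response realized over a few years.

```python
aod = 0.01               # assumed extra stratospheric aerosol optical depth from small eruptions
forcing = -25.0 * aod    # W/m^2, using the rough -25 W/m^2 per unit AOD rule of thumb

equilibrium_dt = 0.8 * forcing        # deg C if the forcing were held indefinitely
realized_dt = 0.3 * equilibrium_dt    # only part of the response appears within a few years

print(f"Forcing: {forcing:.2f} W/m^2")
print(f"Short-term cooling: ~{realized_dt:.3f} deg C")  # ~ -0.06, within the 0.02-0.07 range cited
```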
6 Students conduct cost-benefit analyses of historical and current events.

1 Students use map and globe skills to determine the absolute locations (latitude and longitude) of places, and they interpret information available through a map or globe's legend, scale, and symbolic representations.

2 Students define common map and globe terms, including continent, country, mountain, valley, ocean, sea, lake, river; cardinal directions, latitude, longitude, North Pole, South Pole, tropics of Cancer and Capricorn, equator, 360-degree divisions, time zones; elevation, depth, approximate distances in miles, isthmus, strait, peninsula, island, archipelago, 23-and-a-half-degree global tilt, fall line; and compass rose, scale, and legend.

3 Students judge the significance of the relative location of a place (e.g., proximity to a harbor, on trade routes), and they analyze how relative advantages or disadvantages can change over time.

4 Students identify the human and physical characteristics of the places they are studying, and they explain how those features form the unique character of those places.

4.1 Students describe the different peoples, with different languages and ways of life, that eventually spread out over the North and South American continents and the Caribbean Basin, from Asia to North America (the Bering Strait) (e.g., Inuits, Anasazi, Mound Builders, and the Caribs).

4.2 Students describe the legacy and cultures of the major indigenous settlements, including the cliff dwellers and pueblo people of the desert Southwest, the triple alliance empire of the Yucatan Peninsula, the nomadic nations of the Great Plains, and the woodland peoples east of the Mississippi.

1 Identify how geography and climate influenced the way various nations lived and adjusted to the natural environment, including locations of villages, the distinct structures that they built, and how they obtained food, clothing, tools, and utensils.

2 Describe systems of government, particularly those with tribal constitutions, and their relationship to federal and state governments.

3 Describe religious beliefs, customs, and various folklore traditions.

4 Explain their varied economies and trade networks.

Age of Exploration (15th–16th Centuries)

4.3 Students trace the routes of early explorers and describe the early explorations of the Americas.

1 Compare maps of the modern world with historical maps of the world before the Age of Exploration.

2 Locate and explain the routes of the major land explorers of the United States, the distances traveled by explorers, and the Atlantic trade routes that linked Africa, the West Indies, the British colonies, and Europe.

3 Locate the North, Central, Caribbean, and South American land claimed by European countries.

4 Describe the aims, obstacles, and accomplishments of the explorers, sponsors, and leaders of key European expeditions and the reasons Europeans chose to explore and colonize the world (e.g., the Spanish Reconquista, the Protestant Reformation, and the Counter-Reformation).

5 Identify the entrepreneurial characteristics of early explorers (e.g., Christopher Columbus, Francisco Vásquez de Coronado) and the technological developments that made sea exploration by latitude and longitude possible, including the exchange of technology and ideas with Asia and Africa.

6 Analyze the impact of exploration and settlement on the indigenous peoples and the environment (e.g., military campaigns, spread of disease, and European agricultural practices).
4.4 Students identify the six different countries (France, Spain, Portugal, England, Russia, and the Netherlands) that influenced different regions of the present United States at the time the New World was being explored, and describe how their influence can be traced to place names, architectural features, and language.

4.5 Students describe the productive resources and market relationships that existed in early America.

1 Describe the economic activities within and among Native American cultures prior to contact with Europeans.

2 Identify how the colonial and early American economy exhibited these characteristics.

3 Identify major leaders and groups responsible for the founding of the original colonies in North America and the reasons for their founding (e.g., Lord Baltimore, Maryland; John Smith, Virginia; Roger Williams, Rhode Island; and John Winthrop, Massachusetts).

5 Contrast these democratic ideals and practices with the presence of enslavement in all colonies and the attempts by Africans in the Virginia, Pennsylvania, and New England colonies to petition for freedom.

6 Outline the religious aspects of the earliest colonies (e.g., Puritanism in Massachusetts, Anglicanism in Virginia, Catholicism in Maryland, and Quakerism in Pennsylvania).

7 Explain various reasons why people came to the colonies, including how both whites from Europe and blacks from Africa came to America as indentured servants who were released at the end of their indentures.

4.8 Students explain the causes of the American Revolution.

1 Explain the effects of transportation and communication on American independence (e.g., long travel time to England fostered local economic independence, and regional identities developed in the colonies through regular communication).

2 Explain how political, religious, and economic ideas and interests brought about the Revolution (e.g., resistance to imperial policy, the Stamp Act, the Townshend Acts, taxes on tea, and the Coercive Acts).

4 Identify the people and events associated with the drafting and signing of the Declaration of Independence and the document's significance, including the key political concepts it embodies, the origins of those concepts, and its role in severing ties with Great Britain.

5 Identify the views, lives, and influences of key leaders during this period (e.g., King George III, Patrick Henry, Alexander Hamilton, Thomas Jefferson, George Washington, Benjamin Franklin, and John Adams).

4 Identify the contributions of France, Spain, the Netherlands, and Russia, as well as certain individuals, to the outcome of the Revolution (e.g., the Marquis Marie Joseph de Lafayette, Tadeusz Kościuszko, and Baron Friedrich Wilhelm von Steuben).

5 Describe the significance of land policies developed under the Continental Congress (e.g., the sale of western lands and the Northwest Ordinance of 1787) and those policies' impact on American Indians' land.

6 Explain how the ideals set forth in the Declaration of Independence changed the way people viewed slavery.

7 Describe the different roles women played during the Revolution (e.g., Abigail Adams, Martha Washington, Phillis Wheatley, and Mercy Otis Warren).

4 Understand the meaning of the American creed that calls on citizens to safeguard the liberty of individual Americans within a unified nation, to respect the rule of law, and to preserve the Constitution.

5 List and interpret the songs that express American ideals (e.g., "America the Beautiful" and "The Star-Spangled Banner").
4.11 Students compare and contrast 15th-through-18th-century America and the United States of the 21st century with respect to population, settlement patterns, resource use, transportation systems, human livelihoods, and economic activity.
EARTH MATTERS: CLIMATE CHANGE IS REAL

Climatologists are able to explain why rain falls, clouds form, the patterns of atmospheric circulation, the movement of the ocean currents, and much more. Nonetheless, transferring this knowledge to the majority of society, and having it accepted, has taken centuries. Even though empirical evidence indicates that global warming has affected and will continue to affect our society, economy, and environment negatively, skeptics have hindered efforts to reduce the human-induced greenhouse gases that facilitate global warming.

Naturally-induced Global Warming

An examination of the geographic and temporal distribution of proxy indicators such as organisms, atmospheric composition signatures, and sediments in Earth's geological record indicates that changes in the global climate have occurred naturally throughout Earth's history, for example as the result of carbon dioxide (CO2) emitted into the atmosphere by volcanic activity. On top of that, ocean currents facilitate the release of CO2 into the atmosphere, thereby compounding the greenhouse effect that causes global warming. In addition, naturally occurring methane (CH4) and nitrous oxide (N2O) also contribute to the greenhouse effect. Sources of CH4 emissions include decaying organic material that is void of oxygen in wetlands, gas hydrates from permafrost, termite mounds, oceans, freshwater bodies, non-wetland soils, and wildfires. Naturally occurring N2O is produced biologically by microbial action in soil and water. As a consequence of the combination of various greenhouse gases, significant global warming in the past transformed the polar climate of the Canadian High Arctic into a mid-latitude climate zone where boreal forests were able to flourish.

Human-induced Global Warming

In contrast to the past, recent studies of ice core data, ancient pollen, coral reefs, and sediments indicate that CO2 concentrations have risen from around 300 ppm prior to the beginning of the industrial revolution around 1750 to 389 ppm in 2011 due to human activities. Consequently, the last century had the highest average global temperatures within the last 120,000 years.*

The primary source of human-induced CO2 emissions is the combustion of various non-sustainable fuels. According to the International Energy Agency, 43% of the CO2 emissions from fuel combustion were produced from coal, 37% from oil and 20% from gas. Another major source of CO2 emissions is deforestation, which accounts for 20% of human-induced greenhouse gases. In the 1990s, 1.5 billion metric tons of carbon (GtC) per year was released into the atmosphere by tropical deforestation, and current projections indicate that an additional 87 to 130 GtC will be added by 2100. Deforestation also reduces Earth's carbon sink capacity, since less carbon is "fixed" into plant cells during photosynthesis.

Meanwhile, absorption of high concentrations of CO2 emissions by the ocean causes seawater acidification, which reduces its carbon sink capacity and harms marine species that build structures out of calcium carbonate. The ocean absorbs approximately 50% of the CO2 in the atmosphere. In correlation with increases in atmospheric CO2 emissions, acidification of the ocean rose by 30% between approximately 1750 and 2008.

Exacerbating the problem, humanity currently wastes approximately 1.3 billion tons of food per year during the process of food production, transport, and waste disposal.
This is a travesty that equates to about 135 tons of additional greenhouse emissions into the atmosphere, or about 1.5% of the total human-induced greenhouse gases.

Furthermore, human-induced CH4 emissions have increased by 17% since 1750 and are increasing at a faster rate than CO2. Approximately 70% of the excess CH4 comes from bacterial action in the intestinal tracts of domesticated livestock, agricultural activity, and controlled vegetation burns. Currently, human-induced CH4 emissions account for 19% of the total greenhouse effect.

As a consequence of the increasing amount of human-induced greenhouse gases, the observed global surface air temperature has risen more than 0.7°C since 1880, and in recent decades at an average rate of about 0.2°C per decade. If CO2 concentrations in the atmosphere continue to climb and reach 450 ppm, the Earth's temperature is projected to increase by about 2°C.

As a result of global warming, the additional heat energy added to the atmospheric system will generate stronger gradients between regions of high and low pressure. In turn, the stronger pressure gradients will cause prolonged droughts that will foster the desertification of the Southwest. Global warming will also increase the concentration of water vapor in the atmosphere, which can serve to amplify hurricanes and cause severe flooding in other regions.

Thus, unless we want to feel the wrath of nature and spend billions of tax dollars in reaction to impacts caused by global warming, I suggest we take action to reduce human-induced greenhouse gases. When greenhouse gases are emitted, from anywhere, into the atmosphere, they spread everywhere and persist on the order of a whole century. How big is your carbon footprint?

(* Italics added by editor for emphasis)
What Do We Mean by Culture?

Before going any further, let us spend some time discussing what we mean by culture. When you began reading this chapter, what did you think we meant by the word culture? Your answer probably had something to do with people from different countries or of different racial and ethnic backgrounds. You are right—to a point. Culture does include race, nationality, and ethnicity, but it goes beyond those identity markers as well. The following are various aspects of our individual identity that we use to create membership with others to form shared cultural identity: race, ethnicity, nationality, gender, sexual orientation, and class. In addition to explaining the above identities, we will also discuss ethnocentrism, privilege, advantage, disadvantage, power, whiteness, co-culture and political correctness, as these terms are relevant to understanding the interplay between communication and culture.

When we talk about culture we are referring to belief systems, values, and behaviors that support a particular ideology or social arrangement. Culture guides language use, appropriate forms of dress, and views of the world. The concept is broad and encompasses many areas of our lives such as the role of the family, the individual, educational systems, employment, and gender.

Ethnocentrism

One of the first steps to communicating sensitively and productively about cultural identity is to be able to name and recognize one's identity and the relative privilege that it affords. Similarly important is a recognition that one's cultural standpoint is not everyone's standpoint. Our views of the world, what we consider right and wrong, normal or weird, are largely influenced by our cultural position or standpoint: the intersections of all aspects of our identity. One common mistake that people from all cultures are guilty of is ethnocentrism—placing one's own culture and the corresponding beliefs, values, and behaviors in the center, in a position where it is seen as normal and right, and evaluating all other cultural systems against it. Ethnocentrism shows up in small and large ways: the Nazis' elevation of the Aryan race during WWII, and the corresponding killing of Jews, Gypsies, gays and lesbians, and other non-Aryan groups, is one of the most horrific ethnocentric acts in history. However, ethnocentrism shows up in small and seemingly unconscious ways as well. If there is a world map hanging on the wall in your classroom, look at it. Where is the United States? In the center, of course. When one of your authors was teaching in Beijing, China, she noticed that the map in the classroom looked "different" compared to the map with which she was familiar. On closer examination she realized why: China was in the center and the United States was off to the side. Again, "of course," the United States is not the "center of the world" to the Chinese.

Ethnocentrism is likely to show up in literature classes as well, as each culture decides on the "great works" to be read and studied. More often than not these works represent the given culture (i.e., reading French authors in France and Korean authors in Korea). This ethnocentric bias has received some challenge in United States schools as teachers make efforts to create a multicultural classroom by incorporating books, short stories, and traditions from nondominant groups. In the field of geography there has been an ongoing debate about the use of a Mercator map versus a Peters projection map.
The arguments reveal cultural biases toward the Northern, industrialized nations.

Political Correctness

Another claim or label that may be used to discount such difficult discussions is political correctness, or "PC" as it has been dubbed in the popular press. Opponents of multiculturalism and diversity studies try to dismiss such topics with "that's just PC." Luckily, some of the heated debate about PC has quieted in recent years, but the history lingers. In short, political correctness refers to...
Vocabulary for Composition

Students research a particular website for a list of vocabulary words that deal with the writing of a composition. They define each word on the list. Students relate each word back to how it pertains to the writing of a composition.
Two years ago the Climate Commission released its first major report, The Critical Decade: Climate Science, Risks and Responses. The report synthesised the most recent climate change science. The phrase "The Critical Decade" has become the defining mantra for the Climate Commission: the climate needs significant Australian and global action this decade. Two years on, and one quarter of the way through the decade, we have systematically updated the report. One quarter of the way through the Critical Decade, how far have we come?

Our understanding of the climate system has continued to strengthen

Over the past half-century rapid changes have been observed across the world in many features of the climate system. We have seen changes in heating of the ocean and the air; changing rainfall patterns; a decrease in the area of Arctic sea ice; mass loss of polar ice sheets; increasing sea level; and changes in the life cycles and distributions of many plants and animals.

Importantly, the basic principles of the physical science have not changed. For decades there has been a clear and strong global consensus from the scientific community that the climate is changing, human activities are the primary cause, and the consequences for humanity are extremely serious. Scientists are now moving to new challenges, such as improving our understanding of potential abrupt or irreversible changes in major features of the climate system and changing rainfall patterns. The developments over the last two years add a greater richness to our understanding and further reinforce the underlying climate science.

Many of the things scientists warned us about are now happening

There is now a growing appreciation that climate change is already having significant impacts on human health, agriculture, fresh water supplies, property, infrastructure, and the natural ecosystems upon which our economy and society depend. The consequences of climate change were once a matter for the future, but as the climate shifts we are already witnessing them.

The duration and frequency of heatwaves and extremely hot days have increased. The number of record hot days has more than doubled in Australia in the last 50 years, and the number of heatwaves is projected to increase significantly into the future. In Australia, heat kills more people than any other type of extreme weather event. Increasing intensity and frequency of extreme heat poses health risks for Australians and can put additional pressure on health services. Changes in temperature and rainfall may also allow mosquito-borne illnesses like dengue fever to spread south.

In many parts of Australia, including southern NSW, Victoria, Tasmania and parts of South Australia, extreme fire weather has increased over the last 30 years.

Rainfall patterns are shifting. The southwest corner of Western Australia and much of eastern Australia have become drier since 1970. Dry areas of Australia are likely to become drier into the future, threatening food and water security. For instance, sharply declining rainfall in south-west Western Australia has put pressure on farmers and urban water supplies.

It is now clear that the climate system has already shifted, changing conditions for all weather. While extreme weather events have always occurred naturally, the global climate system is hotter and wetter than it was 50 years ago. This has loaded the dice toward more frequent and forceful extreme weather events.
Last summer gave Australians a window on the future: the type of weather we can expect to see more frequently.

Progress is being made to reduce emissions; far more needs to be done

There has been meaningful global progress in the last two years. All major economies, including China and the US, are putting in place policies to drive down emissions and grow renewable energy. It will take some time to see the full impact of these policies. Greenhouse gas concentrations are still increasing at the fastest rate in the recent geological record.

The nations of the world, including both sides of Australian politics, have agreed that the consequences of a 2°C rise in global temperature are unacceptably severe. The best chance for staying below the 2°C limit requires global emissions to begin declining as soon as possible, and by 2020 at the latest. Emissions need to be reduced to nearly zero by 2050. Stabilising the climate within the 2°C limit remains possible, provided that we intensify our efforts this decade and beyond.

Burning fossil fuels is the most significant contributor to climate change. From today until 2050 we can emit no more than 600 billion tonnes of carbon dioxide to have a good chance of staying within the 2°C limit. Based on estimates by the International Energy Agency, emissions from using all the world's fossil fuel reserves would be around five times this budget. Burning all fossil fuel reserves would lead to unprecedented changes in climate so severe that they would challenge the existence of our society as we know it today. It is clear that most fossil fuels must be left in the ground and cannot be burned.

It is the Critical Decade to get on with the job of tackling climate change.
How can you tell if a person is male or female just by their voice? In general, men have deeper voices than women. However, according to a study conducted by Lal Zimman, a doctoral student at the University of Colorado-Boulder at the time of his research, the style of speech can impact perceptions of a person's gender as well, not simply the pitch of his or her voice. In fact, the letter "S" can, on its own, impact people's perception of the speaker's gender.

Zimman studied 15 transgender individuals in the San Francisco Bay Area who were in the process of transitioning from female to male. As part of the transition, Zimman's participants received the hormone testosterone in order to lower the pitches of their voices. Zimman recorded the participants, taking care to measure the frequency of the letter "S". To determine the effect that the letter had on perception, he digitally manipulated the frequency of each speaker's voice, sliding the pitch higher and lower, and played the recordings to 10 listeners. The listeners then had to assess whether the speaker was male or female.

Zimman found that a speaker could talk with a higher-pitched voice and still be perceived as male if the person pronounced the letter "S" at a lower frequency, which is achieved by moving the tongue away from the teeth as the letter is pronounced.

"A high-frequency 's' has long been stereotypically associated with women's speech, as well as gay men's speech, yet there is no biological correlate to this association," Kira Hall, Zimman's doctoral advisor and an associate professor in linguistics and anthropology at the University of Colorado-Boulder, said in a statement. "The project illustrates the socio-biological complexity of pitch: the designation of a voice as more masculine or more feminine is importantly influenced by other ideologically charged speech traits that are socially, not biologically, driven."

Vocal resonance, or whether it appears that the voice comes from the head or the chest, also impacted the perception of gender. It is the result of both practice and biology. For example, people with deeper resonance are born with their larynxes sitting lower in their throats, but children learn to manipulate their placement from early ages. In general, boys learn to push them down when they are young, while girls learn to push them up.
[Note: This tutorial is an excerpt (Section 13.3) of Chapter 13, Exception Handling, from our textbook Java How to Program, 6/e. This tutorial may refer to other chapters or sections of the book that are not included here. Permission Information: Deitel, Harvey M. and Paul J., JAVA HOW TO PROGRAM, ©2005, pp. 641-643. Electronically reproduced by permission of Pearson Education, Inc., Upper Saddle River, New Jersey.]

13.3 Divide By Zero Without Exception Handling

First we demonstrate what happens when errors arise in an application that does not use exception handling. Fig. 13.1 prompts the user for two integers and passes them to method quotient, which calculates the quotient and returns an int result. In this example, we will see that exceptions are thrown (i.e., the exception occurs) when a method detects a problem and is unable to handle it.

1 // Fig. 13.1: DivideByZeroNoExceptionHandling.java

The first of the three sample executions in Fig. 13.1 shows a successful division. In the second sample execution, the user enters the value 0 as the denominator. Notice that several lines of information are displayed in response to this invalid input. This information is known as the stack trace, which includes the name of the exception (java.lang.ArithmeticException) in a descriptive message that indicates the problem that occurred and the complete method-call stack (i.e., the call chain) at the time the exception occurred. The stack trace includes the path of execution that led to the exception, method by method. This information helps in debugging a program. The first line specifies that an ArithmeticException has occurred. The text after the name of the exception, "/ by zero", indicates that this exception occurred as a result of an attempt to divide by zero. Java does not allow division by zero in integer arithmetic. [Note: Java does allow division by zero with floating-point values. Such a calculation results in the value infinity, which is represented in Java as a floating-point value (but actually displays as the string Infinity).] When division by zero in integer arithmetic occurs, Java throws an ArithmeticException. ArithmeticExceptions can arise from a number of different problems in arithmetic, so the extra data ("/ by zero") gives us more information about this specific exception.

Starting from the last line of the stack trace, we see that the exception was detected in line 22 of method main. Each line of the stack trace contains the class name and method (DivideByZeroNoExceptionHandling.main) followed by the file name and line number (DivideByZeroNoExceptionHandling.java:22). Moving up the stack trace, we see that the exception occurs in line 10, in method quotient. The top row of the call chain indicates the throw point, the initial point at which the exception occurs. The throw point of this exception is in line 10 of method quotient.

In the third execution, the user enters the string "hello" as the denominator. Notice again that a stack trace is displayed. This informs us that an InputMismatchException has occurred (package java.util). Our prior examples that read numeric values from the user assumed that the user would input a proper integer value. However, users sometimes make mistakes and input noninteger values. An InputMismatchException occurs when Scanner method nextInt receives a string that does not represent a valid integer. Starting from the end of the stack trace, we see that the exception was detected in line 20 of method main.
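The listing itself is truncated in this excerpt (only its first line appears above). As a stand-in, here is a minimal reconstruction consistent with the surrounding description: the class name, the quotient method, and the roles of lines 10, 20 and 22 come from the text, while the prompts and output formatting are assumptions and may differ from the book's actual Fig. 13.1.

```java
// Reconstruction of Fig. 13.1 (assumed details; not the book's verbatim listing).
import java.util.Scanner;

public class DivideByZeroNoExceptionHandling
{
   // Integer division; throws an ArithmeticException if denominator is 0.
   public static int quotient( int numerator, int denominator )
   {
      return numerator / denominator; // the throw point (line 10 in the book)
   }

   public static void main( String[] args )
   {
      Scanner scanner = new Scanner( System.in );

      System.out.print( "Please enter an integer numerator: " );
      int numerator = scanner.nextInt();
      System.out.print( "Please enter an integer denominator: " );
      int denominator = scanner.nextInt(); // InputMismatchException for noninteger input (line 20 in the book)

      int result = quotient( numerator, denominator ); // the call that leads to the exception (line 22 in the book)
      System.out.printf( "%nResult: %d / %d = %d%n", numerator, denominator, result );
   }
}
```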
Moving up the stack trace, we see that the exception occurs in method nextInt. Notice that in place of the file name and line number, we are provided with the text Unknown Source. This means that the JVM does not have access to the source code for where the exception occurred.

Notice that in the sample executions of Fig. 13.1, when exceptions occur and stack traces are displayed, the program also exits. This does not always occur in Java; sometimes a program may continue even though an exception has occurred and a stack trace has been printed. In such cases, the application may produce unexpected results. The next section demonstrates how to handle these exceptions and keep the program running successfully.

In Fig. 13.1 both types of exceptions were detected in method main. In the next example, we will see how to handle these exceptions to enable the program to run to normal completion.

Handling ArithmeticExceptions and InputMismatchExceptions
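The book develops that handling example in the section that follows, which is not included in this excerpt. As an illustrative sketch of the approach it describes (not the book's actual listing), the statements that can throw are placed in a try block with one catch handler per exception type, and a loop retries until a valid division succeeds:

```java
// Illustrative sketch of exception handling (assumed details, not the book's listing).
import java.util.InputMismatchException;
import java.util.Scanner;

public class DivideByZeroWithExceptionHandling
{
   public static int quotient( int numerator, int denominator )
   {
      return numerator / denominator; // may throw ArithmeticException
   }

   public static void main( String[] args )
   {
      Scanner scanner = new Scanner( System.in );
      boolean continueLoop = true; // repeat until a valid division occurs

      do
      {
         try // read two numbers and calculate the quotient
         {
            System.out.print( "Please enter an integer numerator: " );
            int numerator = scanner.nextInt();
            System.out.print( "Please enter an integer denominator: " );
            int denominator = scanner.nextInt();

            int result = quotient( numerator, denominator );
            System.out.printf( "%nResult: %d / %d = %d%n", numerator, denominator, result );
            continueLoop = false; // input was valid, so end the loop
         }
         catch ( InputMismatchException inputMismatchException )
         {
            scanner.nextLine(); // discard the invalid input so nextInt can be retried
            System.err.println( "You must enter integers. Please try again." );
         }
         catch ( ArithmeticException arithmeticException )
         {
            System.err.println( "Zero is an invalid denominator. Please try again." );
         }
      } while ( continueLoop );
   }
}
```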
Elon Musk wants to send humans to Mars by 2024, a proposition that seemed quite optimistic when it was announced last year. New research from the University of Nevada Las Vegas makes Musk's goal that much less attainable. A new predictive model of radiation effects indicates we could be vastly underestimating the damaging effects of cosmic rays on astronauts. That means a trip to Mars could include a doubling of cancer risk compared with previous estimates.

Most human spaceflight has taken place in low-Earth orbit, where the planet's magnetic field offers some protection. Even the Apollo astronauts only spent a few days at greater distances. Deep space exploration comes with greater risks thanks to high-energy radiation known as cosmic rays. The term is something of a misnomer: they aren't rays, but particles like protons and atomic nuclei moving at extremely high speeds. Their origin is unclear, but scientists suspect they are produced by supernovae and active galactic cores. These particles have sufficient energy to damage your cells when you are exposed, thus the cancer risk.

There's no actual data on human exposure to cosmic rays, simply because no one has been exposed to significant amounts of them. However, simulating exposure with cells in a laboratory shows that cosmic rays could cause cancer, central nervous system dysfunction, circulatory diseases, and more. So, it's definitely bad for you, but how bad?

Conventional risk models assumed that cosmic rays would only be harmful to the cells struck by the high-energy particles. However, the UNLV study suggests the cells adjacent to those cells could also be affected, roughly doubling the risk of developing cancer. The so-called "bystander cells" interact with the damaged cells via signaling pathways, and the signals from cells damaged by cosmic rays can cause mutations in otherwise healthy cells.

This is of great concern for any deep space mission, but especially for Mars. Astronauts would likely have to spend several hundred days on the planet before an efficient Earth return launch window came up. Mars has no magnetic field to protect humans from cosmic rays, and current shielding materials can't stop all of them.

The team sees this research as a sign that NASA and private space firms need to get more serious about studying methods for mitigating cosmic ray exposure. The risks to humans could be so high that plans for the future have to be put on hold. Certainly SpaceX will need to look at its ambitious 2024 landing again.
Tuberculosis (TB) is a very contagious disease that spreads easily in enclosed spaces. TB bacteria are spread through the air by persons talking, coughing, sneezing, or laughing. Breathing in TB bacteria can result in the bacteria growing in the lungs (pulmonary tuberculosis) or other areas (extra-pulmonary tuberculosis). Tuberculosis can be caused by different strains of mycobacteria, but is usually the result of Mycobacterium tuberculosis. TB multiplies in immune cells that take it up in the lungs and present it to the lymph node cells. Tuberculosis most often infects and damages the top part of the lungs, known as the apex.

Most infections in people result in latent infection. One in ten latent tuberculosis infections will eventually result in the active disease, and 50% of active TB cases are fatal. TB is much more prevalent in developing countries.

Symptoms of TB are:
- Cough (lasting 3 or more weeks)
- Weight loss
- Night sweats
- Loss of appetite
- Bloody sputum
- Chest pain accompanying breathing or coughing

People with latent tuberculosis cannot spread the bacteria to other people unless the bacteria become active. Latent TB can be just as dangerous as active TB because latent tuberculosis does not show up on some tests. The only way for the bacteria to become active is for them to multiply in one's body. It is most common to spread TB to family members, friends, and coworkers. Depending on one's body, some people may develop TB disease sooner than others: some may develop it within weeks, and others may get sick years after being infected. Antibiotic therapies are used to treat TB.

Persons at a higher risk of developing TB include:
- People with immune deficiencies, the aged and the very young
- People living in areas with poverty, malnutrition, and overcrowding
- People with diabetes mellitus
- Needle drug users
- People working with patients infected with TB (employees of hospitals and extended care facilities)

Misdiagnosing or failing to diagnose TB not only harms a person's ability to recover from the infection, but could also lead to the infected person infecting many more people. A person with an active infection can infect 10 to 15 people in the course of a year. Facilities with TB patients need to be especially vigilant to make sure other patients are not infected, either through being in too close proximity or through air circulation systems that exchange air carrying infected particles.

If you believe you or a loved one contracted TB from a health care facility, or if you had the condition and it was worsened by not being diagnosed, you may be eligible for damages. Contact the Sweeney Law Firm and let our experts review the facts. You may have a medical malpractice case. If we decide to accept your case, there is no fee for representation unless there is a settlement or recovery of fees for you.
In cryptography, a key (or cryptographic key) is a piece of information that allows control over the encryption or decryption process.

There are two basic types of cryptographic algorithms.
- Symmetric algorithm: If there is just one key for encrypting and decrypting, the algorithm is called symmetric.
- Asymmetric algorithm: If there are two different keys, each of which can be used only to encrypt data or only to decrypt it, the algorithm is called asymmetric.

If an algorithm is asymmetric, one person publishes a key and accepts messages encrypted with that key. Anyone can encrypt a message, but only the person who owns the other key can decrypt it. This is how online stores, banks, etc., work.

Key sizes

For symmetric algorithms, a minimum key size of 128 bits is recommended. For applications that need extreme security, such as top secret documents, 256 bits is recommended. Many older ciphers used 40, 56, or 64-bit keys; these have all been cracked by brute-force attacks because the keys were too short. Asymmetric (public key) algorithms need much longer keys to be secure. For RSA, at least 2048 bits is recommended. The largest publicly known key that has been cracked was a 768-bit key.
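To make the two types concrete, here is a minimal, self-contained Java sketch using the standard javax.crypto and java.security APIs (an illustration added here, not part of the original article). It generates a 128-bit AES key for symmetric encryption and a 2048-bit RSA key pair for asymmetric encryption, the minimum sizes recommended above:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyDemo {
    public static void main(String[] args) throws Exception {
        byte[] message = "attack at dawn".getBytes(StandardCharsets.UTF_8);

        // Symmetric: the same 128-bit AES key both encrypts and decrypts.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey aesKey = keyGen.generateKey();

        // Note: Cipher.getInstance("AES") falls back to a simple default mode,
        // which is fine for a demo; real systems should use an authenticated
        // mode such as AES/GCM/NoPadding with a fresh IV for every message.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] ciphertext = aes.doFinal(message);

        aes.init(Cipher.DECRYPT_MODE, aesKey); // the same key decrypts
        byte[] recovered = aes.doFinal(ciphertext);

        // Asymmetric: a 2048-bit RSA pair; the public key encrypts,
        // and only the matching private key can decrypt.
        KeyPairGenerator pairGen = KeyPairGenerator.getInstance("RSA");
        pairGen.initialize(2048);
        KeyPair pair = pairGen.generateKeyPair();

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] rsaCiphertext = rsa.doFinal(message);

        rsa.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] rsaRecovered = rsa.doFinal(rsaCiphertext);

        System.out.println(new String(recovered, StandardCharsets.UTF_8));
        System.out.println(new String(rsaRecovered, StandardCharsets.UTF_8));
    }
}
```

In practice, asymmetric encryption is usually used only to exchange a symmetric key, and the bulk data is then encrypted with the much faster symmetric cipher.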
"These nanoparticles have many useful properties that are unlike those of bulk silicon, including being a source of stimulated emission," said Munir Nayfeh, a UI professor of physics and a researcher at the university's Beckman Institute for Advanced Science and Technology. Potential uses of the particles, he noted, include single-electron transistors, semiconductor lasers, and markers for biological materials. To create the nanoparticles, Nayfeh and his colleagues begin with a silicon wafer, which they pulverize using a combination of chemistry and electricity. "We use an electrochemical treatment that involves gradually immersing the wafer into an etchant bath while applying an electrical current," Nayfeh explained. "This process erodes the surface layer of the material, leaving behind a delicate network of weakly interconnected nanostructures. The silicon wafer is then removed from the etchant and immersed briefly in an ultrasound bath." Under the ultrasound treatment, the fragile nanostructure network crumbles into individual particles of different size groups, Nayfeh said. The slightly larger, heavier particles precipitate out, while the ultra-small particles remain in suspension, where they can be recovered. Because of their unique characteristics, the nanoparticles could be used in low-power electronics, nonvolatile floating-gate memories, and optical displays and interconnects. "The assembly of ultra-small silicon nanoparticles on device-quality silicon crystals provides a direct method of integrating silicon superlattices into existing or future down-scaled microelectronics architecture," Nayfeh said. "This could lead to the construction of single-electron transistors and electric charge-based memory devices, optimized to work at high temperature." The nanoparticles also could form the basis for novel semiconductor lasers. Nayfeh and his colleagues have demonstrated stimulated, directed emission from within the walls of a microcrystallite reconstructed from the nanoparticles. The emission was dominated by a deep-blue color. "This type of laser could possibly replace the wires used to communicate between components in a circuit," Nayfeh said. "The blue color might also be useful for underwater communications systems." The benign nature of silicon also makes the nanoparticles useful as fluorescent markers for tagging biologically sensitive materials. The light from a single nanoparticle can be readily detected. Nayfeh will describe the new process for making silicon nanoparticles at the March meeting of the American Physical Society. A paper is scheduled to appear in the March issue of Applied Physics Letters. A patent has been applied for. For more information: James E. Kloeppel, Physical Sciences Editor, University of Illinois at Urbana-Champaign. Tel: 217-333-1085. Fax: 217-244-0161. Email: [email protected].
The background to the decline in vulture numbers

Tens of millions of vultures used to be present across the Indian sub-continent. The vast flocks present were due to the very large numbers of livestock reared across South Asia. Government statistics indicate that livestock numbers in India have exceeded 400 million since the 1980s and reached more than 500 million in 2005. In India and Nepal cows have a sacred status for Hindus and are not eaten. As a consequence, livestock carcasses became available for vultures in Asia and became the principal food source for the resident species. Vultures were so abundant that the Parsi religion in India and Buddhist communities on the Tibetan plateau utilised these birds for sky burials in order to cleanly and efficiently dispose of human bodies.

Vulture declines in India were first quantified at Keoladeo National Park, Rajasthan, by Dr Vibhu Prakash, Principal Scientist of the Bombay Natural History Society (BNHS). Between 1985-1986 and 1996-1997 the population of oriental white-backed vulture declined by an estimated 97% at Keoladeo, and in 2003 this colony was extinct. These declines were coupled with high mortality recorded widely across all age classes. Following the initial survey, in 2000 BNHS teams undertook over 11,000 km of road-based surveys, repeating 6,000 km of road-transects previously surveyed for raptors in the early 1990s, and confirmed that declines of >92% had occurred in all regions across northern India (Prakash et al 2003). Repeat surveys by the BNHS, covering the same route and methodology, were undertaken in 2001, 2003 and 2007 in order to monitor trends in numbers. The survey in 2007 indicated that numbers of oriental white-backed vultures had declined by a staggering 99.9% over the preceding 15 years (Prakash et al 2007). Long-billed and slender-billed vultures decreased by 97% over the same period. Surveys across Nepal and Pakistan indicate vultures have declined at similar rates across the whole of south Asia, and within Pakistan both resident species (white-backed and long-billed) are on the edge of extinction.

The continuing rates of population decline were also of great concern, with white-backed vultures in India declining at an average rate of 48% a year for the period from 2001 to 2007 (to 11,000 birds in India). Long-billed (45,000) and slender-billed (1,000) vultures were estimated to be declining at around 22% a year. Populations of red-headed vultures and Egyptian vultures are also declining, at 41% and 35% a year respectively in India (Cuthbert et al 2006).

Solving the mystery

Understanding the problems facing vultures

Research biologists from the Bombay Natural History Society (BNHS), Bird Conservation Nepal (BCN) and the Ornithological Society of Pakistan (OSP) were joined by international partners from the RSPB (UK), Zoological Society of London (UK) and The Peregrine Fund (USA). Because of the rapidity of the decline, simple population modelling established that the declines had to be caused by a major reduction in adult survival, as reduced breeding success could not account for declines of nearly 50% a year. Through collecting carcasses of dead and dying vultures, researchers quickly established that dead birds were often characterised by the presence of extensive visceral gout, and of 284 post-mortems carried out in Pakistan, India and Nepal, gout was found in 84% of birds (Oaks et al 2004; Shultz et al 2004).
Visceral gout is caused by a build-up of uric acid, which at very high levels crystallises in the body, coating all internal organs in a white 'paste'. Uric acid is the white substance found in the guano of all birds, and the characteristic presence of visceral gout in vultures suggested the cause of death was likely to be related to kidney failure. Some birds appeared sick and lethargic for a protracted period before death, with a characteristic drooping head.

For several years, researchers battled to understand what might be the cause of the deaths. Dead birds were tested for pesticides, herbicides, toxic heavy metals and other environmental pollutants. While trace levels of some of these compounds were detected, in the majority of cases they were at levels insufficient to cause physiological damage, and there was no link between these compounds and the gout found in most dead birds. Because of the geographic range and speed of the declines, one initial strong hypothesis was that a novel infectious disease agent was responsible for the mortalities.

The diclofenac breakthrough was made in 2003 by researchers working for the Ornithological Society of Pakistan and The Peregrine Fund, led by Professor Lindsay Oaks from Washington State University, USA. Lindsay recognised that the class of painkillers known as Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) had been linked to kidney failures and cases of visceral gout when some of these drugs were given to birds. Visiting pharmaceutical shops in Pakistan, the team found that a new NSAID, diclofenac, had recently come on sale and was commonly available. Investigating the carcasses revealed that every bird that had visceral gout also had traces of diclofenac, whereas those carcasses with no gout had no diclofenac. The team then gave diclofenac to vultures, either by injecting birds or by feeding them flesh from buffalo and goats injected with diclofenac, and birds that received a high dose of diclofenac died within days of treatment, with extensive visceral gout. In 2004 the results of this work were published in the journal Nature (Oaks et al 2004).

Extensive research established the same correlation between gout and diclofenac in birds from India and Nepal (Shultz et al 2004), modelled the amount of diclofenac required in the environment to cause the observed decline rates (Green et al 2004), measured the prevalence of diclofenac in cattle carcasses available to vultures (10% of carcasses; Taggart et al 2007), and, by modelling that prevalence, determined that diclofenac alone was responsible for the vulture population crash (Green et al 2007). Other hypotheses put forward for the vulture declines include reduced food availability, increased numbers of dogs, and habitat destruction, but none has significant supporting evidence.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
By Tong Geon Lee

Studies in crop plants now routinely use DNA information, thanks to scientists in the 1960s who invented first-generation machines that interpret DNA information. These devices are often called DNA sequencing machines, since DNA is arranged in a particular order. The DNA sequencing approach uses computer analysis to assemble a DNA sequence from short pieces of DNA read by sequencing machines, to align pieces of DNA sequence against references (e.g., known DNA sequence), or to directly deliver DNA sequence information that has been converted into readable formats.

The recent advancement of DNA sequencing technologies that interpret DNA very quickly and inexpensively holds great promise to make DNA sequence-assisted crop improvement achievable. The cost of current DNA sequencing is one hundred-thousandth of the cost of sequencing 10 years ago, and further advanced technologies are currently driving down sequencing costs and increasing capacity at an unprecedented rate. (To put this in perspective, imagine that the newest harvester is now at a much lower cost than the old John Deere model 4020.) Researchers quickly adopted these technologies, and many horticultural crops stand to benefit from applications of such technologies.

TRAIT IDENTIFICATION AND SELECTION

How is the application of sequencing technologies now contributing to crop breeding? Once the DNA sequence information is collected, researchers can locate DNA regions (known as genes) associated with a particular phenotypic trait and collect information about potential functions. This process detects genes for important horticultural traits, such as disease resistance and fruit quality. Researchers then develop molecular markers to select for traits in the laboratory. The integration of advances in sequencing technologies with conventional crop breeding practices is beginning to revolutionize current crop improvement.

A paradigm of conventional crop breeding has been enormously successful in creating cultivars with improved quality and productivity. Such a paradigm relies on a few molecular markers to select for phenotypes of interest, or on scoring plants based on their observed phenotypic characteristics to determine their value to breeding programs. This process is often labor- and space-intensive. Recent advances in sequencing technologies such as Illumina next-generation sequencing machines are enabling researchers to quickly and cost-effectively sequence the entire DNA information of a target plant (the whole genome) on a high-throughput basis. Such technology, combined with better computational tools, has accelerated the genome-scale identification of genes associated with horticultural traits.

Sequencing also influences crop improvement through gene modification using methods such as CRISPR, a contemporary biological tool that can deliver a trait of interest to an organism such as tomato in the laboratory. Conventional breeding is not considered a tool of precision because it essentially involves a reshuffling of the deck of genes that exists between the parents, followed by multiple cycles of selection for genes of interest. In contrast, CRISPR has the proven ability to precisely change the DNA sequence of a gene. This ability is the main advantage of CRISPR over crossing methods of breeding.
Yet a major limitation of the current CRISPR method is that targeting a specific gene location inevitably depends on whole-genome scale sequence information, which is needed to prevent unwanted changes to similar or other DNA sequences besides the one of interest.

It is difficult to overstate the potential of DNA sequence information for improvement in crop breeding. Whole-genome sequence information for major vegetables grown in the southeastern United States (such as tomatoes, cabbage, melons and cucumbers) has been available for some time. The availability of such information has already allowed a wide range of powerful methodologies to be applied to breeding. For example, the development of DNA markers to select for traits and the identification of resistance genes have allowed great advances with broad impacts on the industry. However, to apply the full power of genome-scale sequence information to any given crop plant, the sequencing of whole genomes from multiple individuals (resequencing) of the same species (e.g., beefsteak and cherry tomatoes) is necessary to know the underlying DNA differences that drive phenotypic diversity. A future is now foreseeable where researchers may resequence their plant materials and perform breeding at a whole-genome level.

With the collective efforts of scientists committed to breeding and technology, DNA sequencing technology will be further expanded and applied to crop improvement.

Tong Geon Lee is an assistant professor in the Horticultural Sciences Department at the University of Florida Institute of Food and Agricultural Sciences at the Gulf Coast Research and Education Center in Wimauma.
In physics, a force is any influence that causes an object to undergo a certain change, either concerning its movement, direction, or geometrical construction. It is measured in the SI unit of newtons and represented by the symbol F. In other words, a force is that which can cause an object with mass to change its velocity (which includes beginning to move from a state of rest), i.e., to accelerate, or which can cause a flexible object to deform. Force can also be described by intuitive concepts such as a push or a pull. A force has both magnitude and direction, making it a vector quantity.

The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object. As a formula, this is expressed as:

\[
\vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t} = m\vec{a} \quad \text{(for constant mass } m\text{)}
\]

where the arrows imply a vector quantity possessing both magnitude and direction. For example, accelerating a 2 kg mass at 3 m/s² requires a net force of 6 newtons.

Related concepts to force include: thrust, which increases the velocity of an object; drag, which decreases the velocity of an object; and torque, which produces changes in the rotational speed of an object. Forces which do not act uniformly on all parts of a body will also cause mechanical stresses, a technical term for influences which cause deformation of matter. While mechanical stress can remain embedded in a solid object, gradually deforming it, mechanical stress in a fluid determines changes in its pressure and volume.

Read more about Force: Development of the Concept, Pre-Newtonian Concepts, Newtonian Mechanics, Descriptions, Fundamental Models, Non-fundamental Forces, Rotations and Torque, Kinematic Integrals, Potential Energy, Units of Measurement
Designing Robot Activity Booklet | 2nd – 3rd Grade

This activity booklet was designed to fulfill the requirements for the Brownie Designing Robots badge, and it can also be used by frontier girl leaders to earn similar badges. It is great for teachers and home-schooled kids as well. The booklet has interactive activities and design challenges that teach girls how to design robots. These activities include brainstorming how to use different robot parts, learning about bio-mimicry, and sketching their own robot design.

- Learning how robots resemble nature: Match the purposes of robots that use bio-mimicry with the robots themselves.
- Using the different parts of a robot: Learn how the different parts of a robot work and brainstorm how to use them.
- Turning a simple device into a robot: Use imagination to transform everyday devices into autonomous robots.
- Sketching out your robot: Complete a rough draft of the robot, labeling at least 3 key parts, including sensors.
- Revising your robot: Practice getting feedback from peers and incorporate that feedback into version 2.0 of the robot.

Do you need more ideas on this topic? To find even more free ideas for learning about designing robots, click here.

ACCESS BOOKLET INSTANTLY AND PRINT AT HOME

Leader Connecting Leaders is not connected with, affiliated with, approved by, endorsed by, or otherwise sponsored by The Girl Scouts of the USA or Frontier Girls.
Many teachers use a combination of white lies and habits to keep students' writing focused, but many of these are not actually based on rules of the English language. Here are five of the top grammar myths that people often believe about the language they love.

Starting a sentence with a conjunction

This is the perfect example of taking a lesson from school and sticking to it. My daughter was reading me a book recently and came to a sentence starting with 'and'. She gasped and told me that you can't start a sentence with 'and'. Teachers give this instruction to stop students writing in fragments, but it is not actually a rule of grammar. It's much better to start a sentence with a conjunction than to write a long sentence full of connected independent clauses.

Ending a sentence with a preposition

This "pseudo-rule" is entirely based on a 17th-century quibble between English poet John Dryden and rival poet Ben Jonson, in which Dryden mistakenly transferred a Latin rule to English. In Latin, prepositions are attached to nouns and cannot be separated from them. There are perfectly acceptable instances in English where it is appropriate to end a sentence with a preposition, for example, 'what are you looking at?' or 'it's you he is thinking of'.

Splitting infinitives

This is another pseudo-rule that owes much to falsely equating rules for Latin to the English language. The rule suggests that you cannot split the word 'to' from its verb. Present style and usage manuals deem simple split infinitives unobjectionable. For example, Curme's Grammar of the English Language (1931) says that not only is the split infinitive correct, but it "should be furthered rather than censured, for it makes for clearer expression". The Oxford Dictionaries do not regard the split infinitive as ungrammatical, but on balance consider it likely to produce a weak style and advise against its use in formal correspondence.

'That' and 'which' are not interchangeable

According to the rule, non-restrictive clauses (those set off by commas, dashes, or parentheses) must be introduced by 'which', and restrictive clauses (those that are essential to the sentence) must be introduced by 'that'. While this is generally a good rule to follow, Harvard linguist Steven Pinker has said the rule is simply a recent invention rather than a hard and fast rule.

Mixing active and passive voices

Many people have been taught never to mix active and passive voice in the same sentence. This is ridiculous. There are many occasions when it is necessary to mix the two. For example, 'It is recommended (passive) that the committee vote (active) for the proposal'.

What grammar rules are you a stickler for? (Yes, I ended a sentence with a preposition.)
MAIN IDEAS
Geography: The land between the Tigris and Euphrates rivers was a good region for agriculture.
Geography: The environment of Mesopotamia presented several challenges to the people who lived there.
Geography: Mesopotamians changed their environment to improve life.

TAKING NOTES
Reading Skill: Summarizing. To summarize is to restate a passage in fewer words. After you read Lesson 1, write a sentence or two summarizing each of the three main sections. Use a chart like this one to record your summaries. (Skillbuilder Handbook, page R3)

Geography of Mesopotamia
- The rivers of Mesopotamia were important because . . .
- Mesopotamians watered their crops by . . .
- Because of a lack of resources, . . .

Standards: 6.2.1 Locate and describe the major river systems and discuss the physical settings that supported permanent settlement and early civilizations. 6.2.2 Trace the development of agricultural techniques that permitted the production of economic surplus and the emergence of cities as centers of culture and power. HI 2 Students understand and distinguish cause, effect, sequence, and correlation in historical events, including the long- and short-term causal relations.

▲ Ram: This figurine shows a ram caught in a thicket. It is made of gold, shell, and a blue stone called lapis.

Build on What You Know: Think of a time when you have seen pictures of a flood on television or in newspapers. Floods cause destruction by washing away objects in their path. Do you think a flood can also have good consequences?

The Land Between Two Rivers

ESSENTIAL QUESTION: How did the land between the Tigris and Euphrates rivers support agriculture?

The Tigris (TY•grihs) and Euphrates (yoo•FRAY•teez) rivers are in Southwest Asia. They start in the mountains of what are now Turkey and Kurdistan. From there they flow through what is now Iraq southeast to the Persian Gulf. (See the map on pages 78–79.)

Mesopotamia: The region where these two rivers flow is called Mesopotamia (MEHS•uh•puh•TAY•mee•uh). The name means "land between the rivers." This land was mostly flat with small, scrubby plants. The rivers provided water and means of travel. In ancient times, it was easier to travel by boat than over land. Boats can carry heavy loads. River currents helped move boats that were traveling downriver. Also, few roads existed.

Connect to Today: ▼ Euphrates River. Even today, people of Mesopotamia farm the land next to the Euphrates River. The flat land by a river is a floodplain.

Fertile Soil: Almost every year, rain and melting snow in the mountains caused the rivers to swell. As the water flowed down the mountains, it picked up soil. When the rivers reached the plains, water overflowed onto the floodplain, the flat land bordering the banks. As the water spread over the floodplain, the soil it carried settled on the land. The fine soil deposited by rivers is called silt. The silt was fertile, which means it was good for growing crops.

A Semiarid Climate: Usually, less than 10 inches of rain fell in southern Mesopotamia a year. Summers were hot. This type of climate is called semiarid. Although the region was dry, ancient people could still grow crops because of the rivers and the fertile soil. Farming villages were widespread across southern Mesopotamia by 4000 B.C.

What made Mesopotamia a good region for farming?

Vocabulary Strategy: The prefix semi- means "half." The word arid means "dry." A semiarid region has some rain, but remains fairly dry.
Controlling Water by Irrigation

ESSENTIAL QUESTION: How did the climate affect farmers?

Being a farmer is difficult. Crops need the right amount of water to thrive. The floods and the semiarid climate in Mesopotamia meant that farmers often had either too much water or too little.

Ancient Irrigation: The model below shows how an ancient irrigation system worked.
1. Gates controlled how much water flowed from the river.
2. Main canals led from the river. They sloped gently downward to keep the water flowing.
3. Medium-sized branch canals led away from the main canals.
4. Small feeder canals led water directly to the fields.

GEOGRAPHY SKILLBUILDER: Interpreting Visuals. Human-Environment Interaction: Why do you think it was important to control how much water flowed from the river?

Floods and Droughts: The yearly flood was unpredictable. No one knew when the flood would occur. It might come in April or as late as June. Farmers could not predict when to plant. Also, the flood's size depended on how much snow melted in the mountains in spring and how much rain fell. If there was too much, the flood might be violent and wash everything away. If there was too little rain and melting snow, the flood would not come. A drought is a period when not enough rain and snow fall. In a semiarid region, drought is a constant danger. During a drought, the river level would drop, making it hard to water crops. If crops failed, people starved.

Irrigation: By about 6000 B.C., farmers built canals to carry water from the rivers to their fields. Such a system is called irrigation. Often, the silt in the water clogged the canals. Workers had to clean out the silt to keep the water flowing. They also built dams to hold back excess water during floods.

How did Mesopotamians water their crops during droughts?

Finding Resources

ESSENTIAL QUESTION: How did Mesopotamians cope with a lack of resources?

Since the beginning of time, humans have had to solve problems in the environment. For example, Mesopotamia had no forests to provide wood. The region also lacked stone and minerals, such as metals.

Mud Houses and Walls: Because of that lack of resources, Mesopotamians had few building materials. Since they could not build with wood or stone, they used mud for bricks and plaster. However, mud buildings crumbled easily and had to be repaired often. Also, Mesopotamia was easy to invade because it had few mountains or other natural barriers. As a result, people from other regions often came to steal from the Mesopotamians or conquer them. The ancient Mesopotamians wanted to protect themselves, but they had no trees or stone to build barriers. So people built mud walls around their villages.

Connect to Today: ▲ Building of Mud and Reeds. This style of building has been used in the region for at least 5,000 years and is still used today.

Finding Resources: Mesopotamians obtained some stone, wood, and metal outside their own land. They were able to trade for these things because they grew a surplus of grain. Surplus means more than they needed for themselves. Jobs such as digging canals, building walls, and trading had to be done over and over. Community leaders began to organize groups of people to do the work at the right time. Lesson 2 explains more about the organization of society.

Why was trade important in Mesopotamia?

Lesson Summary
• The Tigris and Euphrates rivers made the soil of Mesopotamia good for growing crops.
• The people of Mesopotamia developed an irrigation system to bring water to crops.
• Mesopotamia had few resources. People traded surplus crops to get what they needed.

Why It Matters Now . . . The Mesopotamians had to overcome a lack of resources. Today people still work to solve shortages of water, food, and resources.

Terms & Names
1. Explain the importance of: Mesopotamia, silt, floodplain, semiarid

Using Your Notes
Summarizing: Use your completed chart to answer the following question:
2. How did the Mesopotamians change the environment to deal with geographic challenges? (HI 1)

Main Ideas
3. What did the Tigris and Euphrates rivers provide for ancient Mesopotamians? (6.2.1)
4. How did the lack of natural resources affect Mesopotamians? (HI 2)
5. How did Mesopotamian farmers obtain the right amount of water for their crops? (6.2.2)

Critical Thinking
6. Understanding Causes: How was irrigation connected to trade? (6.2.2)
7. Drawing Conclusions: How did Mesopotamians create a successful society? (HI 2)

Writing Job Descriptions: Create a job description for a worker in Mesopotamia. Some possible jobs include irrigation system planner, canal digger, wall builder, trader, and project scheduler. Form a small group, and share your job descriptions. (Writing 2.2)
Alberta is no stranger to severe weather. Albertans are well acquainted with flood, drought, fire, hail, and snow. This section explores causes, impacts, and mitigation methods for severe weather events.

What Is Flooding?

An extremely simple definition of flooding is "too much water in a new place", but a more technical description is when water has overflowed into an area that is normally dry. In Alberta there exists a potential for flooding along all rivers and streams, and there is also potential for flooding from rising groundwater levels or an abundance of stormwater. Learn more about flooding.

Flood mitigation has long been an integral part of Alberta's river management practices. Infrastructure and policy have worked together to provide the province with measures to respond to and rebuild from flooding events. The June 2013 flooding in southern Alberta, however, set a new precedent in our province and initiated discussions of new mitigation methods capable of responding to intensified flooding and weather events. Learn more about flood mitigation.

Understanding Flood Insurance Options

In the fictional Town of Creekshore, the Flash River is a major waterway that flows through the Town and provides citizens with drinking water and utilities. Creekshore is located near the headwaters of the Flash River, so the Town can be subject to both flood and drought conditions. In times of significant rainfall, such as spring, the Flash River often rises, causing localized flooding. Around the world, the problem Mr. Watersedge experienced is not unique, including in Alberta. But how does his experience getting coverage for his flooded property change depending on where he lives? Learn more about flood insurance.

The June 2013 flood in southern Alberta will be remembered by all Albertans as the most damaging flood in our province's history. The combination of melted snowpack and days of torrential rain resulted in extremely high and swollen rivers in the southern region of Alberta. Approximately one hundred thousand people were evacuated, four people were killed, and numerous homes and businesses were negatively impacted by the flood waters. Emerging from this natural disaster, however, was a greater sense of community and an ambition to better prepare for and mitigate the effects of future floods and severe weather events. Learn more about the 2013 flood.
The U.S. Food and Drug Administration has authorized the use of the Johnson & Johnson coronavirus vaccine in adults. Maureen Ferran, a virologist at the Rochester Institute of Technology, explains how this third authorized vaccine works and explores the differences between it and the Moderna and Pfizer–BioNTech vaccines that are already in use.

1. How does the Johnson & Johnson vaccine work?

The Johnson & Johnson vaccine is what's called a viral vector vaccine. To create this vaccine, the Johnson & Johnson team took a harmless adenovirus – the viral vector – and replaced a small piece of its genetic instructions with coronavirus genes for the SARS-CoV-2 spike protein. After this modified adenovirus is injected into someone's arm, it enters the person's cells. The cells then read the genetic instructions needed to make the spike protein, and the vaccinated cells make and present the spike protein on their own surface. The person's immune system then notices these foreign proteins and makes antibodies against them that will protect the person if they are ever exposed to SARS-CoV-2 in the future.

The adenovirus vector vaccine is safe because the adenovirus can't replicate in human cells or cause disease, and the SARS-CoV-2 spike protein can't cause COVID-19 without the rest of the coronavirus.

2. How effective is it?

The FDA's analysis found that, in the U.S., the Johnson & Johnson COVID-19 vaccine was 72% effective at preventing all COVID-19 and 86% effective at preventing severe cases of the disease. While there is still a chance a vaccinated person could get sick, this suggests they would be much less likely to need hospitalization or to die from COVID-19.

A similar trial in South Africa, where a new, more contagious variant is dominant, produced similar results. Researchers found the Johnson & Johnson vaccine to be slightly less effective at preventing all illness there – 64% overall – but it was still 82% effective at preventing severe disease. The FDA report also indicates that the vaccine protects against other variants from Britain and Brazil too.

3. How is it different from other vaccines?

The most basic difference is that the Johnson & Johnson vaccine is an adenovirus vector vaccine, while the Moderna and Pfizer vaccines are both mRNA vaccines. Messenger RNA vaccines use genetic instructions from the coronavirus to tell a person's cells to make the spike protein, but these don't use another virus as a vector.

There are many practical differences, too. Both of the mRNA-based vaccines require two shots. The Johnson & Johnson vaccine requires only a single dose. This is key when vaccines are in short supply.

The Johnson & Johnson vaccine can also be stored at much warmer temperatures than the mRNA vaccines. The mRNA vaccines must be shipped and stored at below-freezing or subzero temperatures and require a complicated cold chain to safely distribute them. The Johnson & Johnson vaccine can be stored for at least three months in a regular refrigerator, making it much easier to use and distribute.

As for efficacy, it is difficult to directly compare the Johnson & Johnson vaccine with the mRNA vaccines due to differences in how the clinical trials were designed. While the Moderna and Pfizer vaccines are reported to be approximately 95% effective at preventing illness from COVID-19, the trials were done over the summer and fall of 2020, before newer, more contagious variants were circulating widely.
The Moderna and Pfizer vaccines might not be as effective against the new variants, and Johnson & Johnson trials were done more recently and take into account the vaccine’s efficacy against these new variants. 4. Should I choose one vaccine over another? Although the overall efficacy of the Moderna and Pfizer vaccines is higher than the Johnson & Johnson vaccine, you should not wait until you have your choice of vaccine – which is likely a long way off anyway. The Johnson & Johnson vaccine is nearly as good as the mRNA-based vaccines at preventing serious disease, and that’s what really matters. The Johnson & Johnson vaccine and other viral-vector vaccines like the one from AstraZeneca are particularly important for the global vaccination effort. From a public health perspective, it’s important to have multiple COVID-19 vaccines, and the Johnson & Johnson vaccine is a very welcome addition to the vaccine arsenal. It doesn’t require a freezer, making it much easier to ship and store. It’s a one-shot vaccine, making logistics much easier compared with organizing two doses per person. As many people as possible need to be vaccinated as quickly as possible to limit the development of new coronavirus variants. Johnson & Johnson is expected to ship out nearly four million doses as soon as the FDA grants emergency use authorization. Having a third authorized vaccine in the U.S. will be a big step towards meeting vaccination demand and stopping this pandemic.
Have paper and pencil and basic shape templates. Work with children in small groups if possible. Ask children to put their names on their papers. The child holds the pencil between fingers and thumb and controls the movement of the pencil with enough control to draw lines and circles, trace simple shapes, and attempt to write letters in her name. Early childhood educators and parents can encourage children to write and draw in all areas of the classroom and during activities at home. Have paper and pencil available for children in the dramatic play area by the phone, in the block area to draw the structures they made, etc. Help adults understand the relationship between writing and drawing and the importance of giving children the encouragement, time, and materials to practice. North Carolina Department of Public Instruction, 2015 ©2015 by the North Carolina Department of Public Instruction. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/.
NCERT Solutions Class 11 Maths Chapter 2 Exercise 2.3: Relations and Functions

NCERT Solutions for Class 11 Maths Chapter 2 Exercise 2.3, Relations and Functions, includes questions related to types of functions and their graphs. A relation from a set A to a set B is said to be a function if every element of set A has one and only one image in set B. This exercise involves problems based on the algebra of real functions, including applying different formulas to derive results. Identity, polynomial, rational, modulus, greatest integer, and constant functions are studied in detail along with their respective graphs. There are multiple examples and sums incorporated in these solutions to deliver knowledge of the basic concepts related to functions.

NCERT Solutions Class 11 Maths Chapter 2 Exercise 2.3 offers a wide range of problems to cover this topic efficiently. These solutions are proficient in enhancing the problem-solving skills of students. The elaborated format of the Class 11 Maths NCERT Solutions for Chapter 2 Exercise 2.3 explains each and every concept accurately to impart explicit knowledge. There are 5 sums in this exercise that are fairly easy to solve if kids understand the theory. Students can download the scrollable PDF solutions by clicking on the links below.

☛ Download NCERT Solutions Class 11 Maths Chapter 2 Exercise 2.3

Exercise 2.3 Class 11 Chapter 2

More Exercises in Class 11 Maths Chapter 2
- NCERT Solutions Class 11 Maths Chapter 2 Ex 2.1
- NCERT Solutions Class 11 Maths Chapter 2 Ex 2.2
- NCERT Solutions Class 11 Maths Chapter 2 Miscellaneous Exercise

NCERT Solutions Class 11 Maths Chapter 2 Exercise 2.3 Tips

NCERT Solutions for Class 11 Maths Chapter 2 Exercise 2.3 is a valuable guide that delivers deep conceptual knowledge of functions, their types, and their formulas. These solutions help students understand which formula applies to a particular type of problem. Constant practice with these resources will also make them proficient in identifying the various functions and their graphs.

Students should clear all their doubts about the basic terms and forms of functions before solving the questions present in this exercise. This will enable them to advance their knowledge quickly and accurately, as well as apply these concepts to challenging problems. The notes provided in the Class 11 Maths NCERT Solutions Chapter 2 will also help students master this topic in minimal time. Download the Cuemath NCERT Solutions PDF for free and start learning!
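As a quick reference, these are the standard textbook formulas for the algebra of real functions that this exercise applies (stated here for convenience; they are general definitions rather than material quoted from the solutions PDF). For real functions f, g : X → ℝ and a scalar k:

\[
\begin{aligned}
(f+g)(x) &= f(x) + g(x)\\
(f-g)(x) &= f(x) - g(x)\\
(fg)(x) &= f(x)\,g(x)\\
(kf)(x) &= k\,f(x)\\
\left(\frac{f}{g}\right)(x) &= \frac{f(x)}{g(x)}, \qquad g(x) \neq 0
\end{aligned}
\]

For instance, with f(x) = x² and g(x) = 2x + 1, (f + g)(x) = x² + 2x + 1, while (f/g)(x) = x²/(2x + 1) is defined only for x ≠ −1/2.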
ASPECTS OF MIGILI MORPHOLOGY

TABLE OF CONTENTS
Title Page
Table of Contents
1.0 General Introduction
1.1 Historical Background
Socio-Linguistic Profile
Mode of Dressing
Genetic Classification
Scope of Study
Organization of Study
Research Methodology
Literature Review
Basic Phonological Concepts
Sound Inventory
Migili Vowel Sounds
Migili Consonant Sounds
Sound Distribution
Distribution of Migili Vowel Sounds
Distribution of Migili Consonant Sounds
Tone Inventory
Tone Combination
Syllable Inventory
Basic Morphological Concepts
Types of Morphemes
Free Morphemes
Bound Morphemes
Structural Functions of Morphemes
Structural Positions of Morphemes
Syntactic Functions of Morphemes
Language Typologies
Fusion Language
Agglutinating Language
Isolating Language
Types of Morphemes
Free Morphemes
Bound Morphemes
Functions of Morphemes
Derivational Function of Morphemes
Inflectional Function of Morphemes
Morphological Processes

Language is the universal fabric that holds every individual of a community together: an instrument used by man for communication within his environment, without which there would be no meaningful relationship in the human world. Language can also be referred to as the medium through which ideas, thoughts, and other forms of human communication are expressed or carried out.

In the mental compartment where all possible, meaningful and acceptable words are formed, there are certain rules that must be followed or certain conditions met before any word can be viewed as acceptable in any language. The branch of linguistics that studies the compatibility of such combinations and proposes the rules for their formation is called MORPHOLOGY. The basic concept of this branch is the morpheme, the smallest meaningful unit in grammar, which may constitute a word or part of a word. Every language has its own set of morphological rules which are strictly adhered to by members of its community. Such members (native speakers) share a great deal of unconscious knowledge about their language, which helps in the acquisition of their first language with little or no formal instruction.

In connection with morphology, the Migili language has been duly investigated with a view to revealing the aspects of its morphological set-up. The Migili people are a tribal group found in Agyaragu local government, Lafia, Nasarawa State. The first chapter of this research centers on areas such as the historical background of the Migili people, their socio-cultural profile, occupation, religion, festivals, mode of dressing, marriage, and genetic classification. Several other aspects will be reviewed in the later chapters of the project work.

HISTORICAL BACKGROUND OF MIGILI

In an interview with the town chief (ZHE Migili), who is the traditional ruler and an autocrat, two major facts were revealed. One of them is the fact that the name of the language popularly known as Mijili is incorrect; rather, it is formally known as Migili. The second fact duly noted by him is that the Migili people are not part of the Hausa tribe, as they have been mistakenly identified by many.

The Migili tribe has a long history which dates back to the old Kwararafa Kingdom in Taraba State. The Kwararafa kingdom comprised different ethnic groups such as the Eggon, Algo, Idoma, and Gomai. Each tribe took turns in occupying the leadership position of the kingdom, and an heir was selected from the royal home of each ethnic group.
But things changed when it was time for Akuka, a Migili descendant who was next in line to ascend the throne. Akuka was plotted against; hence, he could not become the next leader. This sparked a lot of negative reactions from the Migili people, as well as from some other tribes who viewed such an action as unjust, a way through which the Migili were deprived because of their small population. Together with all members of the tribe, Akuka moved down to a place called Ukari, where they settled for a while, and later to Agyaragu in Lafia, Nasarawa State, where they reside presently. Today, the Migili people are known as settlers in Obi, Agyaragu local government, Lafia, Nasarawa State, Nigeria. They can still be found in other places such as Minna, Abuja, Kubadha in Kaduna, Zuba, etc. The major population of about 18,000 people constitutes about 96% of Obi Agyaragu local government area.

SOCIO – CULTURAL PROFILE
The Migili community is rich in both its social and cultural aspects. Some of these aspects are their festivals, religion, marriage, occupation, etc. The Migili people are predominantly farmers. This occupation cuts across young and old, male and female. They produce a lot of crops, but their major cash product is yam. Yams are produced for transportation to different parts of the country, and they also engage in inter-village sales with their neighbours who do not produce the types of crops that they do. Migili people also grow crops such as melon, beans, guinea corn, rice and millet. There are two major festivals celebrated by the Migili. These festivals are very important aspects of their culture as they expose their heritage and ancestral endowments. First is the farming season, in which every farmer within the village premises is involved. During this farming season, they move from one indigene's farm to another in large groups, cultivating, clearing and planting different types of crops for one another. After this has been done, a date is set to celebrate the harvest of these crops, and this leads to the second festival, which is the Odu festival. The Odu festival is celebrated village-wide in Migili. This is a period of harvesting of crops, celebration of the harvest, exchange of pleasantries and entertainment in the village square. During the festival, the Odu masquerade, which represents their ancestral values, is dressed in a colourful attire with which it displays great dancing steps to the amusement and applause of the villagers. The demise of an elderly indigene is also marked in the village, with a type of dance called Abeni. Before the arrival of the missionaries, the Migili people were ardent traditionalists. They worshipped their ancestors, among them Odu and Aleku. They had separate seasons at which sacrifices were made, and they worshipped with dancing and entertainment. But things gradually began to change after the missionaries arrived; most of them were converted to Christianity, though a small population remains strictly traditional worshippers, while some are Muslims. Marriage as an institution was approached from the early stages of childhood amongst the Migili people. Before the missionaries arrived, intercultural marriage was forbidden amongst them, with serious consequences or punishment allotted to the violation of such a law. Marriage between indigenes was formally approached by the father of the suitor, who informs the mother of the admired girl of his intention.
Once an agreement has been reached, the first payment is made to confirm the betrothal of the female child, who continues to live with her parents until the due age has been reached. The male child (suitor) then pays the first installment of her dowry and engages in farming activities for his in-laws once every year. But today the order of things has changed, and marriage within and outside the tribe is now by choice, hence enhancing inter-cultural relationships.

MODE OF DRESSING
The Migili dressing mode displays their cultural heritage, though their dressing is quite similar to that of the Hausa. Women wear short vests that expose their belly and long skirts that cover their legs; they then adorn their hands, forehead, lips and ankles with beads and bracelets. An interesting feature of their dressing is the plaiting of hair by both male and female indigenes. Though a measure of modern influence has entered their culture and affected their dressing, a typical Migili indigene would still appear in colourful beads and bracelets.

GENETIC CLASSIFICATION
This is the arrangement of languages into their different categories according to their relationship with other members of their category.
[Genetic classification tree, adapted from Roger Blench (2006): Koro Zuba, Koro Ija, Jijilic, Koro-Makamei, Koro Migili, Koro Lafiya.]

SCOPE OF STUDY
As earlier mentioned, the purpose of the research project is to closely and carefully examine the Migili language and, hence, expose its morphological aspects. Investigation would be carried out on the various morphological processes attested by the language. Various steps, theories and methods would be used and considered in the analysis and exemplification of the morphemes and their processes. Also, an accurate compilation of the alphabet of the language has been carried out in order to support the compilation of the data and its analysis.

ORGANISATION OF THE STUDY
This long essay has been divided into five different chapters, each containing a certain aspect of the research work. Below is a highlight of the chapters and their contents: (i) Chapter one deals with a general introduction to the background of the study, the historical background and socio-cultural profile; (ii) Chapter two deals with a literature review on the chosen aspect of the research work; (iii) Chapter three deals with the presentation and analysis of data on the chosen work; (iv) Chapter four centers on the processes involved in the branch of study; (v) Chapter five deals with the summary of the work done, observations, conclusion, recommendations and references.

RESEARCH METHODOLOGY
In the execution of this research, both the informant and introspective methods have been adopted for data collection. Two native speakers were approached, providing the researcher with complete and accurate data on the Migili language. Also, library and internet research has been adopted, serving as a guide on some primary aspects of the research, such as the geographical location of the language and its speakers, its genetic classification, its population size, etc. Below is brief information about the two informants whose help was sought:
(a) Name: Ayuba Osibi Haruna; (b) Age: 40 years; (c) Occupation: Personal assistant to the chairman, local government; (d) Aspect: Data collection (400 wordlist).
(a) Name: Dr Ayuba Agwadu Audu (JP); (b) Age: 62 years; (c) Position: Village chief; (d) Aspect: Historical background and socio-cultural profile.
The Ibadan four hundred (400) wordlist served as the basis for data analysis.
It comprises a list of words in the English language for which equivalent meanings have been provided in the Migili language. A frame technique has also been used in order to find out the use of words in sentence context. A series of sentences has also been translated from the English language into Migili.
- Students will be able to plan for a personal narrative.
- Students will be able to plan individual details of a narrative.
- Display the photograph or picture so that all students can see it.
- Invite the students to participate in finding as many details as possible in the picture.
- Tell the students that they will be learning how to include details in a narrative essay that represents a real personal experience.
Explicit Instruction/Teacher Modeling (15 minutes)
- Read the story Owl Moon, or another trade book of choice.
- Demonstrate the process of writing the sequence of events in the story (including the beginning, middle, and end) on a piece of chart paper.
- Write down examples of important details in the plot on the piece of chart paper or oversized sticky notes.
Guided Practice (20 minutes)
- Tell the students that they will be working in groups to create part of a story, which will be used to create a class story.
- Explain that the topic of the story is the events and experiences of the first day of school.
- Divide students into groups and distribute white paper or oversized sticky notes to each group.
- Assign each group a block of the day and ask the students to create both a visual and a sequence of events for that assigned block of the first day of school.
- After all groups are finished, invite each group to share their part of the writing and post it on the board.
- Invite classmates to give feedback on details that can be added to the story.
Independent Working Time (15 minutes)
- Tell the students that their task is to now plan a narrative about something that was once difficult for them. This could be something new that they learned (such as learning how to ski) or a challenging time in their life.
- Distribute the worksheet Something Difficult and invite students to plan their writing on that graphic organizer.
- Circulate around the room and prompt students as needed.
- If students need extra practice adding details, give them practice thinking of what details might be relevant to certain feelings. Ask the students to complete the worksheet Elaborating on Feelings.
- If students master the planning of a realistic story, teach them to use quotation marks to represent the speech of individuals in the story.
- Use Toon Doo or other comic websites to have students illustrate the components of their personal narratives (either before or after writing).
- Ask the students to plan a narrative about a time that they were surprised, writing their plan on the worksheet So Surprised!
Review and Closing (5 minutes)
- Pair up students into partners and invite them to share their story ideas with one another, giving each other feedback.
- Lead the class in a brief discussion on what makes a great personal narrative.
The Black Hole has remained an area of interest and mystery for astronomers, researchers, and scientists for centuries. The first direct visual evidence of a supermassive Black Hole and its shadow was unveiled, showcasing how the imagination and dedication of science around the world, willing to collaborate to achieve a huge goal, can be a model for large-scale success. Marking a revolution in space science, technical advancement, research, astronomy and human endeavour itself, scientists have managed to obtain the strongest evidence to date for the existence of supermassive Black Holes. The first-ever image of a Black Hole, taken through the Event Horizon Telescope (EHT) observations of the center of the galaxy M87 (Messier 87), has opened a new window onto the study of Black Holes, their surroundings and gravity. The captured image shows a bright ring formed as light bends in the intense gravity around a Black Hole which is 6.5 billion times more massive than the Sun. This Black Hole resides at the heart of the M87 galaxy, 55 million light years from the Earth, and the captured image displays a halo of gas tracing its gigantic outline. The image allows us to have the first direct glimpse of a Black Hole's accretion disk, i.e. a ring of gas falling into the Black Hole. In the obtained picture, the halo appears as a crescent because light from the side of the disk rotating towards the Earth is bent and boosted toward us, and hence appears brighter. The dark shadow in the middle marks the edge of the event horizon, also called the 'point of no return'. Beyond this point, no matter (or even light) can travel fast enough to escape the relentless gravitational pull of the Black Hole. The Black Hole has remained an area of interest and mystery for astronomers, researchers, and scientists for centuries. While there have been simulations and pictures thought to look a certain way, there was never an actual image of a Black Hole available until now. Observations are also being made of the supermassive Black Hole called Sagittarius A*, which has a mass equal to about four million Suns and lies at the center of our galaxy, the Milky Way. "Sagittarius A* is also a very interesting target. We can see the event horizon and we should be able to resolve it. It is complex. M87 was in some sense the first source we imaged as it was easier to do so, because the timescales don't change much during the course of an evening. We are very excited to work on Sagittarius A*; we are doing that very shortly. We are not promising anything, but we hope to get that very soon," said Sheperd Doeleman, the EHT Director at the Center for Astrophysics, Harvard and Smithsonian, during a press conference hosted by the National Science Foundation (NSF), which was one of the key funding agencies for the EHT project.

OBTAINING THE FIRST BLACK HOLE IMAGE
The image is an example of how coming together indeed brings success. Capturing an image these days is a second's work for us with the smartphones in our hands, but it was certainly not so in the case of this cosmic giant. It took over 200 people, several institutes across over 20 countries and regions, eight ground-based radio telescopes, an amalgamation of observations, theories, technology and science, and over a decade's time to capture the image of this Black Hole.
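A rough calculation shows why nothing smaller than an Earth-sized instrument would do. The sketch below is a hedged back-of-envelope estimate using the standard diffraction limit (resolution ~ wavelength / aperture diameter); the numbers are approximate textbook values, not figures taken from the EHT papers:

```python
import math

# Back-of-envelope diffraction limit: theta ~ wavelength / aperture diameter.
# The values below are approximate, illustrative assumptions.
wavelength = 1.3e-3       # metres: EHT observed at about 1.3 mm
baseline = 1.27e7         # metres: roughly the diameter of the Earth

theta_rad = wavelength / baseline                   # resolution in radians
theta_uas = math.degrees(theta_rad) * 3600 * 1e6    # in microarcseconds
print(f"~{theta_uas:.0f} microarcseconds")          # ~21 microarcseconds
# The shadow of M87's Black Hole spans roughly 40 microarcseconds on the sky,
# so only an aperture comparable to the Earth itself can resolve it.
```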
The EHT gets its name from the event horizon of a Black Hole, which is the gravitational boundary beyond which neither light nor matter can escape. To capture an image of the Black Hole, what was needed was an Earth-sized telescope, which was practically impossible; what was made possible instead was a virtual telescope the size of the Earth. EHT is a planet-scale array of eight ground-based radio telescopes deployed at a variety of challenging high-altitude sites, forged through international collaboration and designed to capture images of a Black Hole. The EHT links telescopes around the globe to form an Earth-sized virtual telescope with unprecedented sensitivity and resolution. The locations included volcanoes in Hawaii and Mexico, mountains in Arizona and the Spanish Sierra Nevada, the Chilean Atacama Desert and Antarctica. The breakthrough was announced on April 10, 2019, in a series of six papers published in a special issue of The Astrophysical Journal Letters. Multiple calibrations and imaging methods have revealed a ring-like structure with a dark central region — the shadow of the Black Hole — that persisted over multiple independent EHT observations.

"We use a technique called very long baseline interferometry. Radio waves from the Black Hole hit radio telescopes, where they are recorded against the time of atomic clocks that lose only one second every 10 million years. When you register these radio waves so precisely, you can store them on hard drives and send them to a central facility where they can be combined precisely. It is exactly the same way that the mirror of an optical telescope reflects light in perfect synchronicity to a single focus. When we do this, we can synthesise a telescope that has the resolving power as though we had one the size of the distance between these telescopes, truly turning the Earth into a virtual telescope. However, even this broad global network is not enough by itself to make an image. The key is that the Earth turns. In April 2017, all the dishes swiveled, turned and stared at M87, the galaxy 55 million light-years away," Doeleman added.

This daunting task could only be successful through the imagination and dedication of science around the world, willing to collaborate to achieve a huge goal. "No single telescope on the Earth has the sharpness to create an un-blurred, definitive image of the Black Hole. This team did what all good researchers do: they innovated. This was a huge task, one that involved overcoming numerous technical difficulties. It was an endeavor so remarkable that NSF has invested $28 million over more than a decade, joined in support by many other organisations, as these researchers shaped their idea into reality. The Event Horizon Telescope project shows the power of collaboration, convergence and shared resources, allowing us to tackle the universe's biggest mysteries," said France Cordova, Director of the NSF, who believes the image will leave an imprint on people's memories.

The challenges in the way were not just at the technical and scientific levels, but also at the cosmic level, and these were quite out of the team's hands. "Also, there were some very interesting cosmic coincidences needed. Take for example the hot gas swirling around the Black Hole. A photon has to leave from near the horizon, travel through the hot gas around the Black Hole as light of millimetre wavelength, then propagate 60,000 years through the galaxy and another 55 million years through intergalactic space.
Then it winds up in the Earth's atmosphere, where its greatest enemy lies: the greatest danger is that it will be absorbed by water vapour in our own atmosphere. The telescope allows us to see what has traveled all that way to us. It takes light 55 million years to get here, so when we see M87 in this image you saw, that is what it looked like 55 million years ago," said Doeleman. A light year is a measure of the total distance that a beam of light, moving in a straight line, travels in one year.

"Getting the sites to work isn't the end of the process; we also had to test them all, because you really only get one shot. So, we spent years taking site by site, pairing them up and making sure that the observations would work. The last of these observations was in January 2017. By March 2017, we knew that it worked and we were ready to go. The image shown is from April 2017. But even with all of that in place, we still had to wait for the weather. We have to have good weather in Hawaii and Spain at the same time, and in Arizona and at the South Pole. In 2017, we were very lucky. At the end of that, more than half a tonne of hard drives had been filled. It is equivalent to the entire selfie collection over a lifetime for 40,000 people. The image you saw isn't that size; it is a few hundred kilobytes, so our data analysis has to collapse this data into an image that is more than one billion times smaller," said Dan Marrone, Associate Professor of Astronomy at the University of Arizona.

During the centennial year of the historic experiment that first confirmed Einstein's general relativity, the EHT's accomplishment has allowed scientists a new way to study the most extreme objects in the Universe predicted by this theory.

"M87 was caught at a quiet point, which we can tell from historical multi-wavelength data. I think we just got lucky; had it been flaring, we might have seen something that would have blocked the Hole as well." —Sera Markoff, Professor of Theoretical Astrophysics, University of Amsterdam

"There are a lot of clichés that get thrown around when talking about big scientific discoveries. Words like 'breakthrough' or 'game-changing' are often used. They grab people's attention, but it's fairly rare that they apply. Today's announcement of the first image ever taken of a Black Hole, more precisely, of its shadow, truly rises up to that standard." —NASA's Chandra X-ray Observatory, writing about the Black Hole image capture

A century ago, astronomers proved that light bends around the Sun as Einstein predicted, and around the same time this year, Einstein's general theory of relativity has found further evidence. "Einstein's equations, his description of gravity, form fundamentally one of the most beautiful theories we have, even where mystery abounds around Black Holes," Avery Broderick, Associate Faculty, Perimeter Institute & the University of Waterloo, added. Known for its incredibly graceful, beautiful and accurate description of how the cosmos works, the general theory of relativity predicts that light coming from a strong gravitational field should have its wavelength shifted to larger values, what astronomers call a "redshift". "This is the strongest evidence that we have to date for the existence of Black Holes. It is also consistent, within the precision of our measurements, with Einstein's predictions. This image forges a clear link between supermassive Black Holes and the engines of bright galaxies.
We now know clearly that Black Holes drive large structures in the universe from their homes in galaxies. We now have this entirely new way of studying Black Holes that we have never had before; this is just the beginning," Doeleman added.

"Black Holes might be the most complex objects, but they have a lot of consequences of their own," said Sera Markoff, Professor of Theoretical Astrophysics at the University of Amsterdam. "General relativity itself does not change when we look at different Black Hole masses, but it turns out that the impact of a Black Hole will actually change a lot. So if we want to understand the role of Black Holes in the universe, then we need to have accurate determinations of Black Hole masses," she added.

While to us the image might appear fuzzy, it isn't. Broderick explains, "We have spent considerable time trying to ascertain the particular details of the ring-like feature, and the sharpness falls off within less than ten percent of the radius. That is about the instrumental resolution that we practically have." However, Doeleman said, "We think we can make the image perhaps sharper through algorithms, but we are embarking on a wonderful new series of putting new telescopes in places on the Earth, so if you add more telescopes, you build out that virtual mirror. Even adding two or three more stations in just the right places will increase the fidelity of the image a lot." Asymmetries around the ring and the brightness in the southern part, with a lot of future work to come to sharpen the focus on gravity, are some of the interesting things that the scientists hope to explore, having captured this image.

WHAT IS A BLACK HOLE?
Mysterious, extraordinary, ethereal, fascinating, formidable, bizarre sinkholes: these are some of the descriptions that have become synonymous with Black Holes over the centuries, as these enormous cosmic objects have continued to be a centre of amazement, curiosity and questions for scientists, astrophysicists, astronauts, students and people alike. Black Holes are known to have enormous mass but very compact size, and their presence affects their environment in extreme ways, warping spacetime and super-heating any surrounding material. "Black Holes are the most mysterious objects in the universe; they are cloaked by an event horizon that prevents even light from escaping, and yet the matter that falls toward the event horizon is superheated, so before it passes through, it shines very brightly. The gas that is superheated lights up a ring where photons orbit the Black Hole, and the interior of that is a dark patch that prevents light from escaping," said Sheperd Doeleman. NASA defines a Black Hole as a place in space where gravity pulls so much that even light cannot get out. The gravity is so strong because matter has been squeezed into a tiny space.

SIZE OF A BLACK HOLE
Black Holes can be big or small. Scientists think the smallest Black Holes are as small as just one atom. These Black Holes are very tiny but have the mass (the amount of matter) of a large mountain. Another kind of Black Hole is called "stellar". Its mass can be up to 20 times more than the mass of the Sun. There may be many stellar-mass Black Holes in Earth's galaxy, the Milky Way. The largest Black Holes are called "supermassive"; they have masses of more than one million Suns combined. Scientists have found proof that every large galaxy contains a supermassive Black Hole at its centre.
FORMATION OF A BLACK HOLE
Scientists believe the smallest Black Holes were formed when the universe began, while the stellar ones are made when the centre of a very big star falls in upon itself, or collapses, causing a supernova, which is an exploding star that blasts part of the star into space. The supermassive Black Holes, scientists think, were made at the same time as the galaxies they are in. An interesting fact about a Black Hole is that it cannot be seen, owing to the strong gravity that pulls all of the light into the middle of the Black Hole. However, scientists can see how strong gravity affects the stars and gas around the Black Hole. Scientists can study stars to find out if they are flying around, or orbiting, a Black Hole. When a Black Hole and a star are close together, high-energy light is made. This kind of light cannot be seen with the human eye, but satellites and telescopes in space are used to see the high-energy light. Black Holes continue to intrigue scientists, helping them navigate better through the universe and understand it more closely.

COULD A BLACK HOLE DESTROY EARTH?
This has been a question of bewilderment for the masses for as long as one can remember, and NASA has addressed it well: 'Black Holes do not go around in space devouring stars, moons, and planets. The Earth will not fall into a Black Hole because no Black Hole is close enough to the solar system for the Earth to do that. Even if a Black Hole with the same mass as the Sun were to take the place of the Sun, the Earth would still not fall into it. The Black Hole would have the same gravity as the Sun.'
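A small, hedged calculation makes NASA's point concrete: Newtonian gravity at Earth's distance depends only on the central mass and the distance, so swapping the Sun for an equal-mass Black Hole would leave Earth's orbit untouched. The only thing that changes is how compact the central object is, captured by its Schwarzschild radius. The values below are standard physical constants, used here purely as an illustration:

```python
G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8         # speed of light (m/s)
M_sun = 1.989e30    # mass of the Sun (kg)
r_orbit = 1.496e11  # Earth-Sun distance (m)

# Schwarzschild radius r_s = 2GM/c^2: how compact a solar-mass Black Hole is.
r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius: {r_s / 1000:.1f} km")    # about 3 km

# The Newtonian pull at Earth's orbit depends only on G*M and distance, so it
# is identical whether the central mass is the Sun or a Black Hole.
a = G * M_sun / r_orbit**2
print(f"Acceleration at Earth's orbit: {a:.2e} m/s^2")  # about 5.9e-3 m/s^2
```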
When it comes to contemplating the state of our universe, the question likely most prevalent on people's minds is, "Is anyone else like us out there?" The famous Drake Equation, even when worked out with fairly moderate numbers, seemingly suggests the probable number of intelligent, communicating civilizations could be quite large. But a new paper published by a scientist from the University of East Anglia suggests the odds of finding new life on other Earth-like planets are low, given the time it has taken for beings such as humans to evolve, combined with the remaining life span of Earth. Professor Andrew Watson says that structurally complex and intelligent life evolved relatively late on Earth, and by looking at the probability of the difficult and critical evolutionary steps that occurred in relation to the life span of Earth, he provides an improved mathematical model for the evolution of intelligent life. According to Watson, a limit to evolution is the habitability of Earth, and of any other Earth-like planets, which will end as the sun brightens. Solar models predict that the brightness of the sun is increasing, while temperature models suggest that because of this the future life span of Earth will be "only" about another billion years, a short time compared to the four billion years since life first appeared on the planet. "The Earth's biosphere is now in its old age and this has implications for our understanding of the likelihood of complex life and intelligence arising on any given planet," said Watson. Some scientists believe the extreme age of the universe and its vast number of stars suggest that if the Earth is typical, extraterrestrial life should be common. Watson, however, believes the age of the universe is working against the odds. "At present, Earth is the only example we have of a planet with life," he said. "If we learned the planet would be habitable for a set period and that we had evolved early in this period, then even with a sample of one, we'd suspect that evolution from simple to complex and intelligent life was quite likely to occur. By contrast, we now believe that we evolved late in the habitable period, and this suggests that our evolution is rather unlikely. In fact, the timing of events is consistent with it being very rare indeed." Watson, it seems, takes the Fermi Paradox to heart in his considerations. The Fermi Paradox is the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for, or contact with, such civilizations. Watson suggests the number of evolutionary steps needed to create intelligent life, in the case of humans, is four. These include the emergence of single-celled bacteria, complex cells, specialized cells allowing complex life forms, and intelligent life with an established language. "Complex life is separated from the simplest life forms by several very unlikely steps and therefore will be much less common. Intelligence is one step further, so it is much less common still," said Prof Watson. Watson's model suggests an upper limit for the probability of each step occurring is 10 per cent or less, so the chances of intelligent life emerging are low — less than 0.01 per cent over four billion years. Each step is independent of the others and can only take place after the previous steps in the sequence have occurred, as the sketch below illustrates.
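As a hedged back-of-envelope check of the figures quoted above (this is our arithmetic, not code from Watson's paper):

```python
# Each of the four critical steps is taken to have at most a 10% chance of
# occurring within the available time, and each depends on all previous steps,
# so the probabilities multiply.
p_step = 0.10
n_steps = 4

p_intelligence = p_step ** n_steps
print(f"{p_intelligence:.4f} = {p_intelligence:.2%}")  # 0.0001 = 0.01%
# Matching the "less than 0.01 per cent over four billion years" upper bound.
```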
These steps tend to be evenly spaced through Earth's history, and this is consistent with some of the major transitions identified in the evolution of life on Earth. Here is more about the Drake Equation. Here is more information about the Fermi Paradox. Original News Source: University of East Anglia Press Release
As its name implies, fiberglass consists of fine-spun filaments of glass made into a yarn that is then woven into a rigid sheet or some more pliant textile. The Parisian craftsman Dubus-Bonnel was granted a patent for spinning and weaving glass in 1836, and his process was complex and uncomfortable to execute. It involved working in a hot, humid room so the slender glass threads would not lose their malleability, and the weaving was performed with painstaking care on a jacquard-type fabric loom. So many contemporaries doubted that glass could be woven like cloth that when Dubus-Bonnel submitted his patent application, he included a small square sample of fiberglass. From: Extraordinary Origins of Everyday Things by Charles Panati
What is 2 Mod 40? (2 % 40)

In this article we're going to look at what 2 mod 40 means and how we can calculate it. This type of divisibility operation is called mod, modulo or modulus, and you will often see the symbol % used as well (i.e. 2 % 40).

Modulo is the mathematical operation which lets you find the remainder when dividing two numbers like 2 and 40. When we say "what is 2 % 40?" we are asking "when I divide 2 by 40, what is the remainder?"

In math, we use modulo frequently. It can be used to check if a number is even or odd, clocks use it to tell the time, and you can use it to count something a certain number of times as well.

Before we begin, let's cover the terms you need to know:
- The first number, 2, is called the dividend
- The second number, 40, is called the divisor
- When you divide 2 by 40, the answer is called the quotient
- The quotient is made up of the whole number part (called the whole) and the decimal places part (called the fractional)

To calculate 2 mod 40 we first need to divide 2 by 40 to get the quotient:

2 / 40 = 0.05

Now we have the quotient, we take the whole part of it (0) and multiply it by the divisor (40):

0 x 40 = 0

We then take the result of that calculation and subtract it from the dividend to get the answer:

2 - 0 = 2

Therefore, the final answer is:

2 mod 40 = 2

We could also calculate this using a different method. To use this method, list out the multiples of the divisor (40) and find the largest one that is equal to or less than the dividend (2). The multiples of 40 are: 0, 40, 80, 120, and so on. As we look at those multiples of 40 we can see that the highest number that is less than or equal to the dividend (2) is 0. Once we have that number, we subtract 0 from the dividend to get our answer:

2 - 0 = 2

Therefore, the solution using this method is the same as with the previous modulo method:

2 mod 40 = 2

If you enjoyed this article, I challenge you to calculate some problems yourself, or use the list below to read up on how to calculate this with different numbers and try to work it out for yourself.

Calculate Another Problem

Here are some more random calculations for you:
- What is 26 Mod 71?
- What is 24 Mod 60?
- What is 37 Mod 35?
- What is 58 Mod 90?
- What is 85 Mod 7?
- What is 14 Mod 44?
- What is 23 Mod 66?
- What is 89 Mod 66?
- What is 67 Mod 87?
- What is 87 Mod 38?
- What is 44 Mod 57?
- What is 39 Mod 67?
- What is 50 Mod 6?
- What is 14 Mod 6?
- What is 64 Mod 79?
- What is 2 Mod 75?
- What is 74 Mod 12?
- What is 76 Mod 91?
- What is 30 Mod 78?
- What is 13 Mod 99?
- What is 34 Mod 49?
- What is 93 Mod 10?
- What is 50 Mod 74?
- What is 75 Mod 72?
- What is 79 Mod 7?
- What is 16 Mod 69?
- What is 97 Mod 21?
- What is 77 Mod 53?
- What is 22 Mod 38?
- What is 73 Mod 64?
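If you would rather check your answers programmatically, the quotient method described above translates directly into a few lines of Python. This is a minimal illustrative sketch (not part of the original article), and it assumes non-negative inputs:

```python
def mod(dividend, divisor):
    # Assumes non-negative numbers, as in the article: int() keeps the whole
    # part of the quotient (it truncates toward zero).
    quotient = dividend / divisor       # 2 / 40 = 0.05
    whole = int(quotient)               # whole part of the quotient: 0
    return dividend - whole * divisor   # 2 - (0 x 40) = 2

print(mod(2, 40))            # 2
print(mod(2, 40) == 2 % 40)  # True: matches Python's built-in % operator
```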
The Reconstruction Era: President Johnson versus Congress

President Johnson proved to be an obstacle to the most dramatic events that occurred during the Reconstruction era, in the battle over Reconstruction. Under the administration of President Andrew Johnson in 1865 and 1866, presidential Reconstruction gave the white South a free hand in managing its own affairs; the era ultimately closed with the end of federal support for Reconstruction-era state governments in the South.

The debate regarding the reconstruction of the Union began well before the Civil War ended. When he succeeded Lincoln as president, Johnson took a much softer line toward the South than Congress wanted. Though Congress and Johnson agreed that slavery should be abolished, the Reconstruction era fostered a heated political power struggle between the president and Congress over federal authority versus the rights of individual states, and over who should control Reconstruction: congressional versus presidential Reconstruction, black suffrage, and the Reconstruction Acts, which Johnson vetoed and Congress passed over his veto. Secretary of War Stanton was a staunch supporter of Congress and did not agree with President Johnson's approach to Reconstruction. President Grant, by contrast, was more supportive of Reconstruction; Lost Cause historians later painted a dark picture of the era.

The divisions between President Johnson and Congress eventually led to his impeachment in early 1868, making him the first president of the United States to be impeached. The term Reconstruction era, in the context of the history of the United States, has two senses: the first covers the complete history of the entire country in the years 1865-1877 following the Civil War; the second focuses on the transformation of the Southern United States in the same period.
Lincoln's and Johnson's approaches to Reconstruction both differed from that of Congress, and the fight over the Fourteenth Amendment sharpened the divide between the two branches. Lincoln's Ten Percent Plan contrasted with the stricter Wade-Davis Bill, while President Johnson's Reconstruction plan offered amnesty and the restoration of property to most former Confederates. Clashes between Radical Republicans such as Charles Sumner and the Democrats led to an all-out political war between President Andrew Johnson and Congress.
Antelopes are even-toed ungulates in the family Bovidae. Although the term 'antelope' is familiar, there is in fact no clear definition of an antelope, and a few species conventionally regarded as antelopes are taxonomically closer to wild cattle. Antelopes are herbivores. The males always have a pair of horns (one species has two pairs); females in some species bear horns, others do not. The horns consist of a bony core and an outer sheath of keratin. The horns are permanent structures and are not shed every year (unlike deer antlers). Size ranges from the tiny Royal Antelope at 2.5-3 kg to the Eland and Giant Eland, whose males can exceed 1000 kg. Antelopes are valued for their horns, skins, and meat: in West and Central Africa, antelopes are a major source of bushmeat, providing millions of local people with their main source of protein. Antelopes also have aesthetic, cultural, and spiritual values. ASG currently recognizes 93 antelope species. Its remit also covers five non-antelope species, for practical reasons. New insights, particularly from genetic and genomic research, mean that taxonomy is kept under constant review (see the ASG Taxonomy Policy). A large number of antelope 'subspecies' have been named, mainly on the basis of differences in coat colour or horn shape. Very few subspecies have been verified by genetic data, and the majority are perhaps best regarded as geographical variants. ASG recognises a small number of subspecies and a thorough taxonomic review is currently being planned. Antelopes are found in all countries of mainland Africa plus some islands such as Zanzibar and Bioko, but not Madagascar. Antelopes are also found across the Arabian Peninsula, Middle East, the south Caucasus, through Central Asia to China, Mongolia, and South Asia. Three species are endemic to India, Bangladesh, Nepal, and Pakistan. The range of the saiga antelope even extends into the Caspian steppes of south-eastern Europe, and the pronghorn occurs in western North America from Alberta, Canada, through the USA to northern Mexico. Antelopes occupy a wide range of habitat types – hyperarid desert, semidesert, steppe, savannah, montane grassland, swamps and wetlands, light woodland, tropical rainforest. The Tibetan gazelle and Tibetan antelope are endemic to the cold high-altitude Qinghai-Tibet Plateau, where they occur up to 5000 metres elevation. As with other large mammals, most antelopes have suffered substantial declines in both range and population size over the last 150 years and especially the last 75 years. Three antelope species have become extinct: Bluebuck, Saudi Gazelle, and Yemen Gazelle (though doubts exist over whether the latter was a valid species). Some species are now perilously close to extinction, such as Addax (less than 100 remaining in the wild) and Dama Gazelle. At the other end of the scale, a few species still have population sizes over 1 million – Impala, Blue Duiker, Mongolian Gazelle, Saiga. There are some conservation success stories to report, too: the reintroduction into the wild of Arabian Oryx and Scimitar-horned Oryx, the significant increase in Tibetan Antelope numbers following stringent protection, and the spectacular increase in the Saiga population from around 40,000 in 2005 to more than 1.3 million in 2022. 22 species, distributed across deserts and savannahs of Africa, the Arabian Peninsula, Middle East, and Central Asia. This group includes familiar species such as the Impala and Thomson's Gazelle in East Africa, and the elegant Blackbuck in India.
Formerly widespread in the hills and deserts around the perimeter of the Arabian Peninsula. It is still found at a few sites in the UAE, Oman, Saudi Arabia, and Yemen, mainly protected areas. It has been reintroduced in a few places. (Queen of Sheba's Gazelle) The only evidence of this gazelle consists of a few specimens obtained in the hills near Ta'izz, Yemen, in the 1950s. There have been no further reports. Whether this is really a valid species seems doubtful, and it may have been a distinctively marked variety of Arabian Gazelle. Sometimes also known as the Atlas Gazelle. It occurs along the Atlas mountains of Morocco, Algeria, and Tunisia in Mediterranean forests, scrub, grasslands, and arid hill slopes. Distributed widely across the whole of the Sahara and northern Sahel, from the Atlantic coast to the Red Sea, extending south-east through Djibouti to northern Somalia and into Sinai, Israel, and Jordan in the north-east. Dorcas Gazelle is adaptable and resilient and is the only species of antelope in the region that survives in good numbers. Until recently this was considered conspecific with Arabian Gazelle, but the two species have been separated on genetic evidence. Found in Mediterranean hills and scrub of Israel, south-eastern Turkey, and adjoining parts of Syria and Jordan. Gazella leptoceros (Slender-horned Gazelle) occurs in the northern part of the Sahara. The eastern population in Egypt and the border area with Libya has been heavily hunted and survives in very small numbers, or may be extinct. In the west, there are two populations, one in the Great Western Erg (sand sea) of Algeria and another in the Great Eastern Erg along the Algeria/Tunisia border. Numbers are believed to be very low. Distributed in sandy areas of the Arabian Peninsula. Much of its original range has been lost due to overhunting, but it still occurs in several protected areas, and it has been reintroduced to others. Large numbers are managed in government and private collections. Formerly occurred in gravel and acacia plains of the Arabian Peninsula. Most records came from western Saudi Arabia, with a few from Kuwait and Yemen. The species has not been observed in the wild for several decades, and exhaustive genetic analyses have shown that no animals are held in captivity. It is now regarded as Extinct. Restricted to Somalia, where it lives along the east coast, in Puntland in the north-east, and across Somaliland in the north. It prefers open plains (locally known as bans) and open woodland and avoids dense acacia scrub. Very widely distributed, in northern Iraq, the Caucasus, Iran, Afghanistan and Pakistan, Central Asia, China, and Mongolia. In many countries it is known as the Black-tailed Gazelle. It has recently been reintroduced to Georgia and some former sites in northern Azerbaijan. The largest population is found in Mongolia. A familiar species of the Serengeti, Maasai Mara, and other East African plains. Its current population size is estimated at around 200,000, and it undertakes mass seasonal migrations. Occupies Sahelian grasslands and open bush from Senegal to Sudan. It generally lives in small and scattered populations, and its status is not very well known. Occupies a small range to the east of the Blue Nile in Sudan, Eritrea, and Ethiopia. Currently only a few small populations are known, in the Gash-Setit area of Eritrea, Kafta Sheraro N.P. in Ethiopia, and Dinder N.P. in Sudan. The 'Red Gazelle' has always been an enigmatic species.
The only evidence for its existence consists of three male skins and/or skulls purchased in local markets in northern Algeria in the late 1800s. The precise origin of the specimens is unknown. There are no other records, no live animals have been seen, nor has any local name been reported. The three specimens were obtained in markets at the northern end of trans-Saharan trade routes, and the most likely explanation is that these skins originated from somewhere on the southern side of the Sahara. At least one specimen examined shows a close resemblance to E. rufifrons. It is no longer regarded as a valid species in the Mammals of Africa, and it is not listed on the American Society of Mammalogists' database. ASG agrees that this is not a valid species, and it is no longer assessed for the IUCN Red List. The largest and one of the most handsome gazelles. Dama Gazelle has lost more than 98% of its former range in the Sahel and margins of the Sahara, and now there are only an estimated 100-150 remaining in three populations, all of which are isolated from each other. The species is at very high risk of extinction in the wild. Three subspecies have been described, based on coat colour, but these are not supported by the latest genetic evidence. Occurs in East Africa, from central Tanzania north through Kenya, north-east Uganda, and Ethiopia north to the Awash Valley. Three subspecies are usually recognised, but the boundaries between them require further clarification. Peters's Gazelle N. g. petersi of the Kenyan coastal zone is declining. Endemic to the Horn of Africa, formerly distributed in North-east Africa from south-east Sudan, through Eritrea, Ethiopia, Djibouti, and Somalia. It has lost much of its historic range. The largest population is found on the Dahlak Islands of Eritrea. It is also still present in Djibouti, the Buri Peninsula (Eritrea), the Awash and Alledeghi national parks (Ethiopia) and Somaliland. Occurs in North-east and East Africa in Ethiopia, Djibouti, Somalia, Kenya, and northern Tanzania. It is closely associated with arid bush, scrub, and acacia woodland. It has a long neck (it is sometimes known as the giraffe-antelope) and often stands on its hind legs to feed, allowing it to access vegetation out of the reach of other species. This species is endemic to a small part of the Horn of Africa. It occurs in thick bush and thorn scrub in Somalia and the Ogaden region of south-east Ethiopia. It is very shy and poorly known. The small amount of information from the field indicates that it lives at much lower densities than the gerenuk. Some taxonomists place it in its own tribe, Ammodorcadini. An elegant antelope, still found in large numbers across eastern and southern Africa. Impala prefer light woodland and bush, clearings, and grassland margins. Black-faced Impala A. melampus petersi has a broad black mark on the muzzle and is found only in Namibia, including Etosha N.P. Endemic to South Asia. It is widespread in India, extending into a few parts of the terai zone of Nepal, but now extinct in Bangladesh and Pakistan. There are introduced populations in Argentina and the USA. The handsome males bear distinctive spiral horns. The stronghold of this species is in the Daurian Steppes of eastern Mongolia and adjoining areas of China and Russia, though its range extends westwards into central and western Mongolia. It is nomadic, living in large herds. The population numbers over 1 million.
Its former range in western China has now been reduced to a few fragmented populations in the vicinity of Qinghai Lake, at the north-east corner of the Qinghai-Tibet Plateau. Numbers appear to have stabilised thanks to positive conservation action. Occurs across the whole Qinghai-Tibet Plateau, up to elevations of 4700 metres. 7 species, one in the Arabian Peninsula, 6 in Africa. A highly desert-adapted species that formerly ranged across the Sahara from the Atlantic to the Nile. It has lost over 99% of this range, numbers are likely now less than 100, and it is on the very edge of extinction in the wild. Once occurred in subdesert and Sahelian steppes on the southern and northern sides of the Sahara. It became extinct in the wild before 2000, mainly as a result of uncontrolled hunting. A major effort is under way to reintroduce the species to Chad, led by the Environment Agency – Abu Dhabi, in collaboration with the government of Chad and the NGO Sahara Conservation. Currently over 400 free-ranging animals are present. Small populations have been released in protected areas in Senegal and Tunisia as part of longer-term reintroduction programmes. A reassessment of the Red List status was initiated in 2022 to reflect the latest situation. Once found across the deserts of the Arabian Peninsula, but became extinct in the wild around 1972, mainly due to uncontrolled hunting. A major effort took place within and outside the region to breed animals in captivity. The first reintroduction took place in 1982 in Oman. Subsequently they have been reintroduced to Saudi Arabia (Mahazat as-Sayd Reserve, from 1990; Uruq Bani Ma'arid Reserve, from 1995); Israel (three sites, from 1997); the United Arab Emirates (Arabian Oryx Reserve, Abu Dhabi, from 2007); and Jordan (Wadi Rum, from 2014). These populations are estimated to number more than 1100. There is a small population on Hawar Island, Bahrain, and large managed or semi-managed populations at several sites in Qatar, UAE, and Saudi Arabia. There are several thousand in captivity in private collections in the region and in zoos around the world. In 2011 the Arabian Oryx became the first species extinct in the wild to be reclassified as Vulnerable on the IUCN Red List. Found in arid plains of eastern Africa from Ethiopia south to northern Tanzania. There are two subspecies, O. b. beisa and the Fringe-eared Oryx O. b. callotis. Found in Botswana, Namibia, South Africa, Zimbabwe, and extreme southern Angola. It has been introduced to New Mexico, USA. Numbers in Africa are estimated at more than 300,000 and stable. Found across West, Central, East, and southern Africa in scrub and dry woodland. It seems to be more numerous in West and Central Africa than in East and Southern Africa. It is one of the tallest species of antelope. This species was endemic to the Cape Region of South Africa, but became extinct around 1799. This is a species of savanna woodlands, occurring from the Shimba Hills, Kenya, south to Tanzania and northern Mozambique; Malawi, southern DRC, Zambia, Zimbabwe, and northern South Africa. Giant Sable H. niger variani is restricted to a small area of Angola and was very close to extinction. It is slowly increasing in numbers, thanks to dedicated conservation efforts. Numbers in Kenya are also very small, but other populations are more numerous. The Sable's long curving horns are a much-sought hunting trophy, and the species is raised on many game farms. 6 species in sub-Saharan Africa.
The four current populations are disjunct: in Kenya-northern Tanzania; southern Tanzania-northern Mozambique; the Luangwa Valley, Zambia; and southern Africa (Angola, western Zambia, southern Mozambique, Botswana and northern South Africa). This species is one of the best-known antelopes due to its mass migrations on the Serengeti and Maasai Mara plains. Large-scale migration also takes place across Zambia's Liuwa Plains. Endemic to the high plains ('Highveld') of South Africa, but declined almost to the point of extinction by the end of the 19th century. The population recovered, initially due to efforts by game farmers. It prefers open, short-grass plains. Hartebeests once occurred across almost the whole of Africa. Seven subspecies are recognised, differentiated by coat colour and horn shape, but some hybrid zones exist. The North African subspecies A. b. buselaphus is Extinct. Western Hartebeest A. b. major occurs in the West African savannah zone from Senegal to south-west Chad. Lelwel Hartebeest A. b. lelwel occurs from eastern Chad to Kenya. Tora Hartebeest A. b. tora formerly occurred in Sudan, Eritrea, and Ethiopia, but it disappeared from most sites in the 1980s; no animals have been seen for at least 20 years, and this subspecies is likely to be extinct. Swayne's Hartebeest A. b. swaynei once occurred in large numbers across Somaliland and Ethiopia but underwent a severe decline due to rinderpest, and it is now confined to three small sites in Ethiopia. Coke's Hartebeest, or Kongoni, A. b. cokii is still common in the Serengeti-Maasai Mara ecosystem of Kenya and Tanzania. Lichtenstein's Hartebeest A. b. lichtensteinii (southern Tanzania, Zambia, Zimbabwe, and Mozambique) and Red Hartebeest A. b. caama (southern Angola, Namibia, Botswana, South Africa) have extensive ranges and are stable. This is a large, plains antelope with five subspecies in sub-Saharan Africa. Korrigum D. l. korrigum is now reduced to three very isolated populations, one in Pendjari N.P., Benin, and two in northern Cameroon. Tiang D. l. tiang occurs in Central Africa from Chad to South Sudan, where it still survives in relatively large numbers that make annual migrations in response to the flood cycle of the river Nile. Topi D. l. jimela lives in the East African plains. Coastal Topi D. l. topi has a small range along the coastal plain of Kenya, extending into south-east Somalia. Bangweulu Tsessebe D. l. superstes is confined to the Bangweulu swamps of Zambia. Tsessebe D. l. lunatus occurs widely in southern Africa, from Angola, Zimbabwe, and the southern Democratic Republic of Congo to South Africa. Two closely related but distinctive subspecies in southern Africa. Bontebok D. p. pygargus is endemic to a small part of the Cape region in South Africa, though it has been introduced to many sites outside its indigenous range. Blesbok D. p. phillipsi has a more extensive range in South Africa and has been introduced to Botswana, Namibia, and Zimbabwe. The Hirola is a survivor of an early evolutionary lineage. It has a small range in south-east Kenya, on the north side of the Tana River and, at least formerly, extending into adjoining areas of Somalia. The 2021 aerial census conducted by Kenya Wildlife Service estimated 470 individuals. There is also a small translocated population in Tsavo East N.P. 6 species in sub-Saharan Africa. Widely distributed across West, Central, and the northern part of East Africa. It occupies grasslands and floodplains, rarely occurring far from water.
Occurs across the southern parts of Central and East Africa, south to eastern South Africa. It prefers similar habitats to those of Bohor Reedbuck. This species is distributed in three very distantly separated populations, each recognised as a subspecies. The largest population, R. f. fulvorufula, occurs in southern Africa. Chanler's Mountain Reedbuck R. f. chanleri is scattered across several sites in Kenya, Tanzania, and Ethiopia. Western Mountain Reedbuck R. f. adamauae occupies a small range in the Adamawa highlands along the border between Nigeria and Cameroon. Widely distributed across West, East, and parts of Central Africa, south to Botswana and eastern South Africa. They prefer habitats close to water and occur in grasslands, thickets, and light woodland. Two subspecies are usually recognised. Kob occur across West and Central Africa, east to the Sudd swamps of South Sudan and south to Uganda and north-east DRC. Western Kob K. k. kob is distributed from Senegal to the Central African Republic. White-eared Kob K. k. leucotis is restricted to the Sudd ecosystem of South Sudan and the Gambella region of Ethiopia, with occasional occurrence in Kidepo Valley, Uganda. Uganda Kob K. k. thomasi has a small range in Uganda and the Garamba region of DRC. White-eared Kob still occur in large numbers, despite heavy hunting pressure, and they undertake mass migrations in response to flood cycles of the River Nile. They occur in floodplain grasslands and wetland margins. This species is closely related to the kob and has a relatively small range in Tanzania, Zambia, and northern Botswana. Puku occupy similar habitats to the kob. Lechwe are closely associated with floodplains, wetland habitats and margins of south-central Africa. These habitat preferences result in isolated populations, and four subspecies are recognised, in the Okavango region, the Bangweulu swamps and Kafue flats (Zambia), and the Upemba wetlands (DRC). Endemic to the Sudd wetlands along the River Nile in South Sudan and the nearby Machar-Gambella marshes on the border with Ethiopia. It inhabits riverine grasslands and moves throughout the year in response to flood cycles. Grey Rhebok is endemic to plateau and mountain grasslands of southern Africa. 8 species in sub-Saharan Africa. Large-bodied antelopes. This is a widespread species, occurring from south-east Sudan and Eritrea through East and southern Africa. There is a small, isolated population in eastern Chad-northern CAR-South Sudan. Its status is favourable in many places, especially in South Africa. This elegant species is found in dry scrub, thornbush, and Acacia-Commiphora woodland in East and North-east Africa, from central Ethiopia through Somaliland, southern Somalia, Kenya, and Tanzania. This is one of the largest antelope species. It is distributed from South Sudan south through East and southern Africa, including on many game farms and ranches outside its historical range. They occur in plains, dry scrub, light woodland, and montane grassland, such as on Mount Kilimanjaro. They form large and small herds. Three subspecies have been described based on variations in coat colour and patterns, but the genetic evidence indicates that only two may be valid. Once occurred across the West and Central African savanna zone from Senegal on the Atlantic east to South Sudan. Now, only a small remnant population of the western subspecies T. d. derbianus is found in Niokolo-Koba N.P., Senegal, with semi-captive populations in the Bandia and Fathala reserves.
The eastern subspecies is found in Cameroon (Bénoué-Faro-Bouba Njida ecosystem), Central African Republic (Chinko Conservation Area and other sites) and South Sudan. Two subspecies are recognized. Western Bongo T. e. eurycerus is distributed in lowland rain forest and forest margins in West and Central Africa. Eastern, or Mountain, Bongo T. e. isaaci is restricted to montane forests of Kenya and formerly Uganda. Currently, only around 100 animals remain in the wild, in five isolated subpopulations. Reintroduction and reinforcement efforts are under way on Mount Kenya and at other sites. It is a handsome species, kept in many zoos. Endemic to Ethiopia. Its stronghold is in the Bale Mountains and the nearby Arsi and Chercher mountains. The males' large, distinctive horns are a sought-after trophy, and some of the remaining sites are managed for hunting. The Bushbuck has a very extensive distribution across West, Central, East, and the eastern parts of southern Africa. The taxonomy of this species is complex. Many 'subspecies' have been described to date, differentiated by variations in coat colour and/or striping patterns. For example, in Menelik's Bushbuck of Ethiopia, the males have a black or very dark coat. The genetic evidence is inconclusive, indicating two main lineages but also considerable hybridisation. This is the smallest of the tragelaphine antelopes. It is shy and elusive, occupying many types of forest, woodland, and scrub. A species of wetlands, papyrus swamps, marshy areas in forests, wetland edges and thickets, occurring across West, Central, and southern Africa, south to the Okavango Delta in Botswana. The nature of its preferred habitat means that populations are generally fragmented. Its hooves are splayed to facilitate movement over wet ground. The tiny population in Senegal is believed to be a remnant of a much more widespread former range in West Africa. Its shy nature and difficult-to-access habitats make surveys and census counts problematic. Occurs in south-eastern Africa in Malawi, Mozambique, Zimbabwe, and eastern South Africa. They have been introduced to game farms and other sites in South Africa outside their natural range. They occupy thickets, woodland, and riverine forest. 2 species in South Asia. This is a large species, males weighing up to 280 kg. It is widespread in India and also occurs in the terai zone of Nepal. A small number have recolonised Bangladesh in the last few years. Nilgai means 'blue bull' in Hindi, in reference to its colour and appearance, thus granting it some protection on religious grounds. It remains widespread and numerous, and in some parts of India it has become a crop pest. Widespread but patchily distributed in India and at a few sites in Nepal. It is shy and unobtrusive, preferring areas of light forest, dry scrub, and tall grass. It is the only antelope species to possess two pairs of horns. 12 species, all in sub-Saharan Africa. This is a rather varied group, occupying a wide range of habitats, though all species bear short, sharp horns. Three species of dik-dik are currently recognized by IUCN, but many forms have been named and a comprehensive genetic analysis is desirable to clarify the taxonomy. Widely distributed in grasslands and light woodlands of sub-Saharan Africa from Senegal east to Ethiopia and south to South Africa. Haggard's Oribi O. o. haggardi is a completely isolated subspecies that occurs along the coast of Kenya into southern Somalia. Endemic to the Horn of Africa.
More than 90% of its range is in northern Somalia, including Somaliland, with the rest in southern Djibouti and a very small area of eastern Ethiopia. It is a dainty-looking but robust species, inhabiting rocky hills and screes. A widespread species with a patchy distribution on cliffs and rocky outcrops (its name means 'cliff jumper') on the eastern side of Africa from Eritrea to Lesotho, and also in the south-west from Angola to the Cape. An isolated subspecies is found in Nigeria and the Central African Republic. Occurs in two separate populations, one in southern Kenya and northern Tanzania, and another across southern Africa. Occurs in the southern part of East Africa through Zambia, Zimbabwe, Mozambique, and northern South Africa. It prefers habitats with dense cover. This species has a restricted range in the southern and eastern Cape regions of South Africa, where it dwells in thickets, scrub, and sometimes long grass. There are two populations, one in East Africa in Kenya and Tanzania, and the other in Mozambique and the extreme north of South Africa. This tiny antelope lives in the Congo Basin rainforests (two disjunct populations), with a small, isolated population in the Niger Delta. This is the smallest antelope species of the Upper Guinea Forest of West Africa, where it occupies a similar niche to that of N. batesi. There are two main populations, one in East Africa, from southern Somalia to Tanzania, and another, Damaraland Dik-dik, in Namibia and southern Angola. Within East Africa, three forms with differing coat colours and habitats have been identified. It inhabits thickets, scrub, and woodland. This is the largest of the dik-diks, found in arid regions of East Africa in Ethiopia, Somaliland, northern Kenya, and South Sudan. Endemic to the Horn of Africa, from Eritrea south through Djibouti, Ethiopia, and Somalia. Its coat colour varies considerably, and several subspecies have been named on this basis, but these are not yet supported by any genetic evidence. This species was once thought to be restricted to the Obbia coast of eastern Somalia, but recently it has also been found in the Ogaden region of Ethiopia. It is not yet known whether its distribution is continuous. The small number of field observations to date indicates that it occurs at much lower densities than Salt's Dik-dik, with which it co-occurs. 19 species in three genera, all in sub-Saharan Africa. These are mainly small to medium species, though three reach 60-80 kg. The taxonomy of the duikers has not been confirmed through any genetic analyses, and it is possible that some of the species described should be combined or regarded as subspecies. Duikers have a rather stocky build, are relatively short-legged, and have short or very short horns. They have strong jaws for crushing seeds and hard fruits; some species are known to eat carrion and even live prey, including frogs. Duikers are solitary or live in pairs. Most species are found in the West and Central African rainforests, with a few others in drier woodlands and montane forest. Several species co-occur in many places, separated ecologically by some combination of size, diurnal or nocturnal habits, food preferences, or habitat niche. They are a very important component of bushmeat and thus of food and local livelihoods. As with the antelopes as a whole, some species remain numerous and widespread, while many are declining and a few are very rare. A large species of highland forests in the Eastern Arc mountains of Tanzania and on Mount Kilimanjaro.
Populations are small and fragmented. Occupies a small, fragmented range on Zanzibar, in the Arabuko-Sokoke forest in coastal Kenya, and in the Boni and Dodori forests close to the border with Somalia. It may also occur in southern Somalia. It is easily identified by a broad white band across the hindquarters. A nocturnal species with separate populations in the West and Central African rainforests. Distributed in the West African rainforests from Sierra Leone to western Nigeria. It is adaptable and occupies rainforest, farmbush, forest edges and secondary forest. Occurs in marshy and damp parts of rainforests of the Congo Basin and East Africa, and in montane forests in East Africa. Occurs in forests and dense thickets in East Africa. Closely related to Natal Red Duiker and may be conspecific. One of the largest duikers. It is a rare species of the Upper Guinea Forest in Sierra Leone, Liberia, Cote d'Ivoire, and south-east Guinea. Key sites for the species are Sapo N.P. in Liberia and Tai N.P. in Cote d'Ivoire. It also occurs in the Peninsula Forest Park on the edge of Freetown, Sierra Leone. Distributed along the coast from southern Tanzania to eastern South Africa. There are three subspecies with disjunct ranges. Brooke's Duiker C. o. brookei is an uncommon inhabitant of the Upper Guinea Forest of West Africa. It may in fact be a separate species. C. o. ogilbyi occurs along the southern part of the Nigeria-Cameroon border, and White-legged Duiker C. o. crusalbum in western Gabon. Inhabits the Central African rainforest on the right (north) bank of the Congo River. This is a species of savanna woodland, thick bush, and riverine forests. It occurs across West and Central Africa from Senegal to South Sudan. Sometimes regarded as a subspecies of Peters' Duiker. Distributed in Central Africa on the left bank of the Congo River, extending east into South Sudan and parts of East Africa. Occurs in two disjunct populations: one in the west from southern Cameroon, south of the Sanaga River, through Gabon, Equatorial Guinea, Republic of Congo, and south-western Central African Republic, and the other to the east in north-east Democratic Republic of Congo, in the Ituri Forest and North Kivu. There are no confirmed records between these two areas, in the central cuvette of DRC or south of the Congo River. It appears to live at lower densities than other species of duiker in the same habitat. This species has a wide range in West and Central Africa south to northern Angola and Zambia. It is the largest of the duikers and has a distinctive triangular yellow patch on its back. This very distinctively marked species is found in the rainforests of West Africa, including montane forests. Occurs in the forests of Central Africa, east of the Niger River, and into parts of eastern and southern Africa. It is a common and numerous species, apparently resilient to high levels of harvest. Fills the same role and habitat niche as Blue Duiker in the forests of West Africa. The species was first described in 2011. It is found in the Dahomey Gap of West Africa, in more open habitats than the other two species in the genus. So far there are only a few records and very little is known about the species. This species has the widest distribution of all the duikers. It avoids rainforests and open grassland, occupying a wide range of habitats that have enough cover, from arid bush to montane grasslands. Endemic to the Qinghai-Tibet Plateau, where it occurs up to 4xxxm.
Almost the entire population (99%) is found in China, with a small number making seasonal movements into north-eastern Ladakh, India. Horns found in xx of extreme NW Nepal. The underfur, known as shahtoosh, is extraordinarily fine and was traditionally woven into luxury shawls and scarves. Uncontrolled hunting for this high-value product resulted in a serious decline in the 1980s-1990s. Strict protection measures and the creation of very large nature reserves by the government of China, international trade controls, and legal bans on the import and weaving of shahtoosh in India have resulted in population recovery. It is now estimated to number over 200,000, and it has been recategorized in a lower category of risk on the Chinese National Red List and the IUCN Red List. This is the only member of the family Antilocapridae. It has many external similarities to the antelopes and occupies an ecological niche similar to that of the plains species of Africa. Its horns are branched and have a forward-pointing 'prong'. It is distributed in western North America, from Alberta, Canada, south to northern Mexico. It numbers up to one million. Two forms, Sonoran Pronghorn A. a. sonorensis (Arizona and northern Mexico) and Peninsular Pronghorn A. a. peninsularis (Baja California), are both highly threatened. Found across sub-Saharan Africa in savanna and forest ecosystems. Several subspecies have been described, and the taxonomy of the species is currently being re-evaluated. The largest Cape Buffalo can weigh up to 850 kg or more, but forest animals are around half that size. This small member of the family Tragulidae is found in the rainforests of West and Central Africa. It has a distinctive pattern of white spots and stripes which helps to distinguish it from the duiker species that occupy the same habitat. Females are larger than males. It is nocturnal and rarely seen far from water. Now reduced to very small populations in the Gobi Desert of south-west Mongolia and north-west China.
When cells of the lung start growing rapidly in an uncontrolled manner, the condition is called lung cancer. Cancer can affect any part of the lung, and it's the leading cause of cancer deaths in both women and men in the United States, Canada, and China. It is most common in adults over age 45, but it can happen to anyone, including small children. Over 1.3 million deaths occur from lung cancer every year. Smoking cigarettes is the leading cause of lung cancer, although it can also be caused by a variety of other things, such as environmental and work conditions, chemical exposure, family history, or radiation, amongst other causes. There are no "safe" cigarettes on the market, and "low tar" or "no tar" cigarettes are no better for your health. It is best not to smoke, or to be around cigarette smoke. Lung cancer affects damaged cells of the lining of the lungs. Tumors begin to form, depriving the bloodstream of an adequate flow of oxygen. If the tumor is malignant, it can metastasize and spread to other parts of the body. The more it spreads, the harder it is to treat. If the cancer spreads via the lymphatic system, it usually spreads very quickly, to far reaches of the body. In a nutshell, lung cancer is caused when damaged DNA in lung cells cannot self-repair and the damaged cell fails to die. Mutations accumulate, and the cancer can then overtake these cells. Sometimes, people are genetically predisposed to cell mutations, or more likely to develop cancer.
Diagnosis and treatment
X-rays, CT scans, and PET scans can help to diagnose suspected lung cancer. After the initial diagnosis, chemotherapy, radiation, or drugs will be used to treat the cancer. Sometimes, surgery is needed to remove tumors or damaged sections of the lungs. There are many forms of lung cancer, and the treatment options will not be the same for everyone.
Symptoms
- Chest pain or discomfort
- Cough, sometimes accompanied by blood
- Shortness of breath
- Weight loss (unintentional)
- Weakness (generalized)
- Throat soreness, loss of voice, or hoarseness
- Pneumonia or chronic bronchitis
More symptoms may also be present if the cancer has spread to other parts of the body, and these will differ depending on where it spreads. Note that early lung cancer may not cause any symptoms at all, and many patients do not realize they are sick until a progressed stage. It is best to have regular physicals with your doctor, and to mention anything abnormal in your breathing patterns or if your cough develops a mucus or blood accompaniment. For more information about lung cancer, treatment options, or to find a doctor, visit Cancer.org.
Plague is an infectious disease caused by the bacterium Yersinia pestis. The typical route of infection in humans is the bite of a rodent flea carrying the plague bacterium, or handling an animal infected with plague. In the Middle Ages, millions of people died in Europe from plague because human homes were inhabited by flea-infested rats. Though modern antibiotics are effective against plague today, without prompt treatment the disease can cause serious illness or death. In recent years, the fear of terrorist attacks with biological weapons has grown. This article addresses issues related to biological warfare and bioterrorism and gives a concise overview of the role that plague has played, past and present, as a biological weapon. Plague has received much attention because of its potential use as a biological weapon by terrorists: intentionally released aerosols could cause pneumonic plague in affected populations. In order to prepare for such an event, it is important for medical personnel and first responders to form a realistic idea of the risk of person-to-person spread of infection. Historical accounts and contemporary experience show that pneumonic plague is not as contagious as it is commonly believed to be. Persons with plague usually transmit the infection only when the disease is in its end stage, when infected persons cough copious amounts of bloody sputum, and only by means of close contact.
Some computer monitors, especially some older 'graphics' displays of high resolution for their time, are sync on green. In order to properly display video received over an analog connection, a conventional CRT monitor needs four streams of information:
- Red signal strength
- Green signal strength
- Blue signal strength
- Vertical retrace (synchronization) signal
Typically, on these high-end monitors, there would be four coaxial cable connections: one locking connector and cable for each color signal, plus an additional cable for the sync signal. These would be abbreviated R, G, B and S; if color-coded, the sync cable was typically black or brown. However, some monitors only had three connections; or, similarly, some video cards only put out three signals. In this case, they would use the timing of one of the three color signals to synchronize the retrace interval; typically, the green signal was used (although some monitors/video cards could manually select which signal to use). Thus, only three connectors were needed, similar to Sun's 13W3 standard. This was described as 'sync on green.' One primary advantage of having four connections was that it allowed each isolated cable to carry a single signal, which minimized interference. Interference in the signals would show up as blurred or shifted pixels on the monitor. Sync on green wasn't as good as full RGBS, but it confined the interference to one signal, which was still much better than a conventional multiline connector such as the 15-pin VGA standard. Those cables, while improving over time, typically use much narrower wires in close proximity, meaning it's harder to achieve really high resolutions without some interference. The use of switchboxes doesn't help either, although some are less intrusive than others.
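As a rough illustration of what 'sync on green' means electrically, here is a toy Python model. It is a sketch, not from any monitor spec: the voltage levels are the common analog-video convention (0.7 V white, 0.0 V black, -0.3 V sync tip), used here as assumptions, and the function names are invented for the example.

```python
# Toy model of "sync on green": the composite sync pulse rides on the
# green channel below the black level, while red and blue carry video only.
BLACK, WHITE, SYNC_TIP = 0.0, 0.7, -0.3  # illustrative analog-video levels

def sog_sample(r: float, g: float, b: float, in_sync: bool):
    """Return (R, G, B) line voltages for one sample.

    r, g, b are normalized intensities in [0, 1]. During the sync
    interval the video is blanked and the green line is pulled down
    to the sync tip; a separate-sync monitor would instead carry this
    pulse on a fourth wire (the 'S' in RGBS).
    """
    if in_sync:
        return (BLACK, SYNC_TIP, BLACK)
    return (r * WHITE, g * WHITE, b * WHITE)

def is_sync(green_voltage: float, threshold: float = -0.15) -> bool:
    """A monitor recovers sync by watching green drop below black level."""
    return green_voltage < threshold

if __name__ == "__main__":
    print(sog_sample(0.5, 0.5, 0.5, in_sync=False))  # mid-grey pixel
    print(sog_sample(0.0, 0.0, 0.0, in_sync=True))   # sync pulse on green
    print(is_sync(-0.3))                             # True: retrace detected
```

The model also shows why interference matters: anything that pushes the green line spuriously below the threshold would be misread as a sync pulse, shifting the picture.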
To see the 5 principles below exemplified in the 12 top teaching strategies, download the illustrated PowerPoint presentation so that you can use it at departmental meetings and add your own ideas to it. A companion file offering another 18 new innovative ideas will be uploaded shortly. 5 principles for effective teaching:
a. Lessons are reserved for application, not acquisition
b. There is a high level of student-to-student interaction, less mediated through the teacher
c. Students have to think about the criteria that make for good history
d. Students make and sustain historical claims
e. Students are involved in assessing each other's work using criteria they become familiar with
Teaching AS and A2 history [gview file=”https://test.keystagehistory.co.uk/Resources/AS-A2r2.ppt” height=”600px” width=”730px” save=”1″ cache=”0″]
Grace Hopper, also known as “Amazing Grace,” is considered one of the pioneers of computer programming. She was a trailblazer for women in technology, and her contribution to the field of computer science is immeasurable. One of Grace Hopper’s most significant contributions was her discovery of the first computer bug. When you think of the pioneers of computer science, names like Alan Turing and Ada Lovelace may come to mind. But there’s another name that belongs on that list: Grace Hopper. Often called the “Mother of COBOL,” she was a pioneer in computer programming. She also played a key role in the development of the first compiler. But perhaps her most enduring legacy is her discovery of the first computer bug. In 1947, Hopper was working on the Harvard Mark II computer when it started malfunctioning. After several hours of searching, she and her team discovered a moth trapped in one of the relays. Hopper removed the moth and taped it to the computer’s logbook. Next to it she wrote “First actual case of bug being found”. While the discovery of the moth may seem trivial, it had a significant impact on the field of computer science. Hopper’s use of the term “bug” to describe a technical problem in a computer system quickly caught on, and it became the term we use to describe any defect or error in a program. Hopper’s discovery also helped to cement the idea that errors in computer systems could be caused by physical defects in the hardware, rather than just errors in the code. But Hopper’s contributions to computer science didn’t stop with the discovery of the first bug. She went on to become a trailblazer for women in computer science, serving as a role model and mentor for generations of women who followed in her footsteps. She was one of the first women to earn a PhD in mathematics from Yale University, and she rose to the rank of rear admiral in the U.S. Navy. Hopper’s legacy continues to inspire and motivate those who are working to advance the field of computer science today. In 2016 she received the Presidential Medal of Freedom posthumously, and in 2017 she was honored with a U.S. commemorative stamp. As we celebrate Women’s Day and honor the many women who have made significant contributions to the field of technology, we should remember the remarkable life and legacy of Grace Hopper. Her discovery of the first computer bug and her pioneering work in computer programming paved the way for future generations of women in tech, and her legacy will continue to inspire and motivate us for years to come.
Used in IP networks to break up larger networks into multiple smaller subnetworks. Subnetting reduces network traffic, optimizes network performance, and makes it easier to identify and isolate network problems. CIDR (Classless Inter-Domain Routing), pronounced "cider" for short, was created to solve the problems introduced by the Class A-E addressing scheme and to prevent IP address space depletion. CIDR uses a masking technique to determine the target network, eliminating the limitations of classful configuration and preventing the waste of IP addresses in the classful addressing scheme. VLSM means Variable-Length Subnet Mask. It is an advanced form of subnetting that allows subnets of variable lengths to coexist under one network. The purpose of VLSM is to adjust your simple, same-size subnets to better accommodate the size requirements of your physical networks.
- IPv6 has a larger address space
- It supports stateless autoconfiguration
- It has more efficient packet headers than IPv4
- It supports multicasting
- It is more secure than IPv4
- It has additional mobility features and integrated QoS
Stateless autoconfiguration generates an IP address without the need for a DHCP server. Routers send RAs (Router Advertisements) to the network hosts containing the first 64 bits of the 128-bit network address. The second half of the address is generated by the host itself; with privacy extensions, the host can avoid embedding its hardware address in the result, for security reasons.
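A quick way to experiment with CIDR prefixes and VLSM-style allocation is Python's standard ipaddress module. The addresses below are made-up private-range examples, not from the text above:

```python
# CIDR and VLSM illustrated with Python's standard library.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
print(net.netmask, net.num_addresses)   # 255.255.255.0 256

# Classic equal-size subnetting: split the /24 into four /26s.
for subnet in net.subnets(new_prefix=26):
    print(subnet)                       # 192.168.0.0/26 ... 192.168.0.192/26

# VLSM: carve differently sized subnets out of the same /24 to match
# real needs, e.g. a ~100-host LAN, a ~50-host LAN, and a 2-host
# router-to-router link, instead of wasting four equal /26s.
lan_a = ipaddress.ip_network("192.168.0.0/25")    # 126 usable hosts
lan_b = ipaddress.ip_network("192.168.0.128/26")  # 62 usable hosts
link  = ipaddress.ip_network("192.168.0.192/30")  # 2 usable hosts
for n in (lan_a, lan_b, link):
    print(n, "usable hosts:", n.num_addresses - 2)
```

The prefix length after the slash is the CIDR mask; VLSM is simply the freedom to mix different prefix lengths within one address block.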
The Difference Between Speech and Language Development
I once taught a child who was a 'selective mute'. I had only been a teacher for about five years and had never come across a child who did not speak. She would refrain from speaking all day and would not answer us. She would just stare at us and allow other children to speak for her. As soon as her mother arrived to pick her up in the afternoon, she would begin talking. It astounded me. I referred her to the school Speech Therapist, who told me that she was not a priority. What the? Let me explain. There are two aspects of oral language development. The first aspect is speech (expressive language), which can be defined as the ability to articulate sounds and speak so as to be understood. The second aspect is language (receptive language), which is the ability to understand what is being spoken by another person. The Raising Children Network is a fantastic place to find out more about speech and language development from professionals in this field. Speech is how a child makes a sound to make a word. There are many factors that affect a child's speech. These can be age-related or physical issues (for example, being tongue-tied) and hearing impairments. As a parent, you know your child the best. If you feel that they are having difficulty with their speech development, there are plenty of places to go for advice and support. Your local GP or child health nurse is the best place to start. Language is the other aspect of oral language and is often seen in most situations as more important. The child I spoke about earlier who was a selective mute was not a major concern to the Speech Therapist because her ability to understand language (what is being spoken by another person) was good. She could understand what I was saying to her at school, could follow instructions and could listen to her peers. This meant that although she was having some difficulty with her speech, her ability to understand language still made it possible for her to communicate with others. A checklist is a good place to start and might be able to give you an indication of where your child is up to with their language development. Remember that these are just indicators, so don't stress if your child isn't exactly at the same developmental level as what is stated. Use this as a guideline and trust your own judgement as a parent. As a parent, you know your child the best. If you feel that they are having difficulty with their language development, there are plenty of places to go for advice and support. Your local GP or child health nurse is the best place to start.
What Can You Do?
There are so many things that you can do at home from when your child is young. We have some ideas here that might be a good start. As an educator and parent, my best advice would be to trust your instinct on how you believe your child is developing. If you have any concerns whatsoever about their speech and language development (or any part of their development), speak to someone. There is your local GP, free health services (child health nurse), your local dentist (oral health has a part to play) and educators (your childcare teacher) who may be able to advise you further if you have any concerns. It is better to get on top of these things early to ensure your child has the best start to their education and learning.
Classroom assessment in the 2000s moved more toward computer-based tests that utilize technology, such as laptops, notebooks and tablets. Student comprehension and subject mastery are also assessed through projects, portfolios and oral examinations. Questions for most standardized tests come from standards developed by each state. With the move to Common Core standards beginning in 2010, assessments in the United States are facing redesigns that allow students to demonstrate critical-thinking and problem-solving skills. Computer-based assessments provide the advantage of automated scoring, removing human error and providing faster access to results. As technology continues to advance, more schools employ this method for its speed and ease of use. Teachers can use data from the computer assessments to design instruction targeted to meet student needs. Teachers also use a multitude of methods in their classrooms for more informal assessments. Performance assessments involve asking students to demonstrate mastery of a concept with a physical performance, such as a speech or presentation. Oral assessments require students to answer a series of questions that measure the depth of their knowledge. Teachers use portfolio assessments to gather representations of students' work over the course of the school year. They also keep records of work that show a student's progress in a particular subject. Project assessments allow students to work alone or together to complete a task that demonstrates their understanding of a concept.
Lately, I find the flu capturing my interest. For a long time, I thought the flu was just the flu. Just another virus going around. But it turns out that there’s more than one type of flu… WAY more than one. The influenza virus is divided into three types:
- Type A: the most common version of the flu. It is also the most serious — the one that caused flu epidemics throughout history. Influenza A can infect people, birds, pigs, horses, and other animals.
- Type B: a milder version of the flu. Also to blame for epidemics in the past, but not quite as deadly as Type A. Influenza B generally only appears in humans.
- Type C: more like a mild cold than a true flu. Has never been blamed for a large epidemic.
Because influenza A is the Big Bad Wolf of flu viruses, it is the only one with sub-types. These sub-types include low pathogenic (mild) and highly pathogenic (severe) avian flu viruses. Sub-types are divided based on different proteins on the surface of the virus. The flu virus evolves in two ways: antigenic drift and antigenic shift. Antigenic drift means the small, gradual changes that occur in a virus. There are two genes in a flu virus that contain the genetic material used to produce surface proteins. Mutations in these genes can produce new strains of a virus — strains that may slip past antibodies that attack other versions of the same virus. These new strains are often named for the area in which they developed or were first recognized. Antigenic shift is a sudden, major change that creates a brand new influenza A sub-type. This can occur through direct virus transmission from an animal (poultry) to a human, or when human influenza A and animal influenza A mix and create a new virus. Flu pandemics are often the result of antigenic shift — a totally new sub-type appears in the human population and spreads quickly from one person to another.
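For readers who like toy models, the sketch below caricatures the two mechanisms in a few lines of Python. The "segments" and letter strings are invented stand-ins, not real influenza genes: drift makes scattered point mutations within a gene, while shift swaps whole segments between a human and an animal strain.

```python
# Toy illustration of antigenic drift vs. antigenic shift.
# The sequences here are made-up placeholders, not real genetics.
import random
random.seed(1)

BASES = "ACGU"

def drift(segment: str, rate: float = 0.05) -> str:
    """Antigenic drift: small point mutations accumulate within one gene."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in segment)

def shift(human_strain: dict, animal_strain: dict) -> dict:
    """Antigenic shift: co-infection reassorts whole segments,
    yielding an abruptly new sub-type."""
    return {name: random.choice([human_strain[name], animal_strain[name]])
            for name in human_strain}

human = {"HA": "ACGGUA", "NA": "UUGCAC"}
avian = {"HA": "GGCAUU", "NA": "CAGGUA"}

print(drift(human["HA"]))   # a few letters change: a drifted strain
print(shift(human, avian))  # whole segments swapped: a novel sub-type
```

The punchline matches the biology above: drift produces strains that partially evade existing antibodies, while shift can produce a sub-type the population has never seen, which is why shift is the pandemic-maker.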
This half term we will be learning about the changes in humans as we develop into old age. We will be drawing timelines to indicate the stages in growth and development. We will also be working scientifically by researching the gestation periods of other animals and comparing them with humans, and by finding out and recording the length and mass of a baby as it grows. We will continue with our topic Macbeth in English lessons to develop our writing, grammar and drama skills. In Maths our focus will be measurement, where we will convert between different units of metric measure and calculate the volume of shapes.
To understand the human life cycle: BBC - KS2 Bitesize Science - Human life cycles. There are 6 stages in the human life cycle: Foetus. At this time, a baby is growing inside its mum's womb. Baby. A baby is born after spending 9 months inside the womb.
For support with grammar: BBC - KS2 Bitesize English - Spelling & grammar. KS2 English activities, games, tests and notes on spelling and grammar, including how to use punctuation, adjectives, adverbs and nouns.
For support with measurement: BBC - KS2 Bitesize Maths - Measures. A key stage 2 revision and recap resource for Maths measures.
Silence Theme of Power Whether we like it or not, fathers are authority figures. They set the rules (and can ground us if we don't follow them…), but they can also teach us a lot of things. In "Silence," the speaker's father teaches her (and us, indirectly) how to be a "superior person." Superior people also have authority – by definition, they are higher or better than normal people, right? But in the end, it's the speaker's own choice whether she wants to follow her father's advice and wants to have this kind of superiority. Questions About Power - What does the speaker's father mean by "superior"? In your own words, how would you characterize these superior people? Does the father present them positively, or is he being sarcastic? - Do you think the speaker agrees with her father's description of how superior people behave? - Do you imagine the speaker following her father's advice, or does she seem to question it? In other words, do you think she accepts her father's authority here, or does she challenge it? - Why is the cat like superior people? Do superior people also have "mice" that they eat? Why, or why not? Chew on This When the speaker quotes from her father, she is not handing over the poem to him, but rather asserting her own authority. She takes his words and makes them her own, showing that she now has control over when and how her father's words are spoken.
William Wordsworth, 19th century English Romantic poet and Poet Laureate of England
- Birth: April 7, 1770
- Death: April 23, 1850
- Place of Birth: Cockermouth, Cumberland, in the Lake District of northwestern England
- Spouse: Mary Hutchinson (married in 1802)
- Number of Children: Five
- Education: Saint John's College, University of Cambridge
- Known for: Initiating Romanticism by introducing novel poetic theories and techniques
- Wordsworth was the Poet Laureate of England from 1843–1850
"Our birth is but a sleep and a forgetting;
The Soul that rises with us, our life's star,
Hath had elsewhere its setting,
And cometh from afar:
Not in entire forgetfulness,
And not in utter nakedness,
But trailing clouds of glory do we come
From God, who is our home:"
“Ode: Intimations of Immortality from Recollections of Early Childhood” (1807)
Notable Works
Lyrical Ballads, with Other Poems (first published in 1798; 2nd edition appeared in 1800)
Poems, in Two Volumes (1807)
The Excursion (1814)
Ecclesiastical Sketches (1822)
The Prelude (1850)
Did You Know?
- William Wordsworth was orphaned at an early age.
- Wordsworth suffered from anosmia, an inability to smell.
- Although Wordsworth had begun to write poetry while still a schoolboy, none of his poems was published until 1793. Although fresh and original in content, the poems received little notice, and few copies were sold.
- His masterpiece "The Prelude" was not published during his lifetime.
- Even though his contribution led the tide of the Romantic movement in English literature, only a few poets imitated his poetic style. Even his best friend Coleridge modified Wordsworth's poetic theory in the way of creating his own works.
This I Spy lesson plan gets learners outside and engages them in a compare-and-contrast activity about nature, animals, and the environment. The class discusses how to use a Venn diagram to compare two different animals. Then, they make observations of two animals found in the school yard, jotting down notes in their animal books. Back in class, they use the Internet to further their research on the two animals. The lesson plan concludes as they finish filling out each page in their animal books.
Diversity & Inclusive Teaching (Archived) This teaching guide has been retired. Visit our newly revised guides on this topic, Increasing Inclusivity in the Classroom Teaching Beyond the Gender Binary in the University Classroom - Inclusive Teaching Strategies - Racial, Ethnic and Cultural Diversity - Gender Issues - Sexual Orientation - Annotated Bibliographies - Related Vanderbilt Programs and Centers - Additional Web Resources Both students and faculty at American colleges and universities are becoming increasingly varied in their backgrounds and experiences, reflecting the diversity witnessed in our broader society. The Center for Teaching is committed to supporting diversity at Vanderbilt, particularly as it intersects with the wide range of teaching and learning contexts that occur across the University. The following tips are taken from Barbara Gross Davis’ chapter entitled “Diversity and Complexity in the Classroom: Considerations of Race, Ethnicity and Gender” in her excellent book, Tools for Teaching. We recommend that you read her full text to learn more about the issues and ideas listed below in this broad overview. Davis writes: “There are no universal solutions or specific rules for responding to ethnic, gender, and cultural diversity in the classroom…. Perhaps the overriding principle is to be thoughtful and sensitive….” She recommends that you, the teacher: - Recognize any biases or stereotypes you may have absorbed. - Treat each student as an individual, and respect each student for who he or she is. - Rectify any language patterns or case examples that exclude or demean any groups. - Do your best to be sensitive to terminology that refers to specific ethnic and cultural groups as it changes. - Get a sense of how students feel about the cultural climate in your classroom. Tell them that you want to hear from them if any aspect of the course is making them uncomfortable. - Introduce discussions of diversity at department meetings. - Become more informed about the history and culture of groups other than your own. - Convey the same level of respect and confidence in the abilities of all your students. - Don’t try to “protect” any group of students. Don’t refrain from criticizing the performance of individual students in your class on account of their ethnicity or gender. And be evenhanded in how you acknowledge students’ good work. - Whenever possible, select texts and readings whose language is gender-neutral and free of stereotypes, or cite the shortcomings of material that does not meet these criteria. - Aim for an inclusive curriculum that reflects the perspectives and experiences of a pluralistic society. - Do not assume that all students will recognize cultural, literary or historical references familiar to you. - Bring in guest lecturers to foster diversity in your class. - Give assignments and exams that recognize students’ diverse backgrounds and special interests. Resources to help you achieve an inclusive classroom that fosters diversity are provided below. When instructors attempt to create safe, inclusive classrooms, they should consider multiple factors, including the syllabus, course content, class preparation, their own classroom behavior, and their knowledge of students’ backgrounds and skills. The resources in this section offer concrete strategies to address these factors and improve the learning climate for all students. 
- Creating Inclusive College Classrooms: An article from the Center for Research on Learning and Teaching at the University of Michigan which addresses five aspects of teaching that influence the inclusivity of a classroom: 1) the course content, 2) the teacher’s assumptions and awareness of multicultural issues in classroom situations, 3) the planning of course sessions, 4) the teacher’s knowledge of students’ backgrounds, and 5) the teacher’s choices, comments and behaviors while teaching.
- Teaching for Inclusion: Diversity in the College Classroom: Written and designed by the staff of the Center for Teaching and Learning at UNC-Chapel Hill, this book offers a range of strategies, including quotes from students representing a range of minority groups.
- Managing Hot Moments in the Classroom, from the Derek Bok Center at Harvard University, describes how to turn difficult discussions into learning opportunities. The Faculty Teaching Excellence Program (FTEP) at the University of Colorado has compiled a series of faculty essays on diversity in On Diversity in Teaching and Learning: A Compendium. This publication is available for download (as a PDF file) from the FTEP website (scroll down towards the bottom of the page for the download links). The essays in this volume include, among others:
- Fostering Diversity in the Classroom: Teaching by Discussion: Ron Billingsley (English) offers 14 practical suggestions for teaching discussion courses (with 15-20 students) and creating an atmosphere in the classroom that embraces diversity.
- Fostering Diversity in a Medium-Sized Classroom: Brenda Allen (Communications) outlines seven ways to create an interactive environment in larger classes (with 80-100 students) and thus promote diversity in the classroom.
- Developing and Teaching an Inclusive Curriculum: Deborah Flick (Women Studies) uses the scholarship of Peggy McIntosh and Patricia Hill Collins to support a useful syllabus checklist and teaching tips that include techniques to provoke discussion about privilege and stereotypes among students.
- The Influence of Attitudes, Feelings and Behavior Toward Diversity on Teaching and Learning: Lerita Coleman (Psychology) encourages instructors to examine their own identity development and self-concept to determine how they feel diversity and bias affect their teaching. She also shares 14 specific teaching tips.
- Tips for Teachers: Teaching in Racially Diverse College Classrooms: From the Derek Bok Center for Teaching and Learning at Harvard University, this helpful checklist addresses concerns about teaching in a multicultural context. Several specific recommendations are given to ensure your confidence in the classroom.
- Perceptions of Faculty Behavior by Students of Color: This link is provided by the Center for Research on Learning and Teaching at the University of Michigan.
- Tolerance.org is a principal online destination for people interested in dismantling bigotry and creating, in hate’s stead, communities that value diversity.
- Book Review: Race in the Classroom: The Multiplicity of Experience. Derek Bok Center for Teaching and Learning, Harvard University: 1992.
- Ten Ways to Fight Hate on Campus
- Tips for Teachers: Sensitivity to Women in the Contemporary Classroom: From the Derek Bok Center for Teaching and Learning at Harvard University, this article provides helpful strategies for instructors concerned about gender issues in the classroom. Several specific recommendations are given to ensure an inclusive environment.
- Academic Support for Women in Science and Engineering: Susan Montgomery (Chemical Engineering) and Martha Cohen Barrett (Center for the Study of Higher and Postsecondary Education) present critical factors that have been found to influence the learning experiences of undergraduate women studying science and engineering. They also offer suggestions for improving the academic environment that are applicable to all students. This link is provided by the Center for Research on Learning and Teaching at the University of Michigan.
- Book Review: Women Faculty of Color in the White Classroom, edited by Lucila Vargas.
- Book Review: Women in the Classroom: Cases for Reflection. Derek Bok Center for Teaching and Learning, Harvard University: 1996.
- The Gay, Lesbian & Straight Education Network (GLSEN): This site provides useful information and resources for educators.
- Teaching Students with Disabilities: From a brochure entitled “College Students with Learning Disabilities,” developed by Vanderbilt’s Opportunity Development Center, and from the ODC staff. Both of these bibliographies are hosted by the Center for Research on Learning and Teaching, University of Michigan:
- Promoting Diversity in College Classrooms: Edited by Maurianne Adams (New Directions for Teaching and Learning, 1992, vol. 52), this bibliography lists articles that encourage instructors to become conscious of their own identity development and bias to improve their teacher-student interactions in the classroom. Several lessons learned are shared, as well as curricular solutions.
- Teaching for Diversity: Edited by Laura Border and Nancy Chism (New Directions for Teaching and Learning, 1992, vol. 49), this bibliography lists articles on topics ranging from the implications of diverse learning styles for instructional design to an ethnographic approach to the feminist classroom. Faculty and TAs exploring issues in diversity in teaching and learning may be interested in the following programs, initiatives and centers at Vanderbilt. They range from service units offering direct assistance to those who are teaching at Vanderbilt, to research and outreach projects with more indirect links to (but important implications for) the Vanderbilt classroom. University Programs and Centers
- Antoinette Brown Lectures – Vanderbilt University Divinity School: Established in 1974, this lectureship brings distinguished women theologians and church leaders to the Divinity School to speak on a variety of concerns for women in ministry.
- Bishop Joseph Johnson Black Cultural Center: This center, dedicated in 1984, provides educational and cultural programming on the Black experience for the University and Nashville communities, and serves as a support resource for African-descended students. The center’s programs are open to the Vanderbilt and Nashville communities.
- Carpenter Program in Religion, Gender and Sexuality: Established in 1995, this program fosters conversation about religion, gender, and sexuality by providing education and encouraging communication within and across religious affiliations, ideological bases, and cultural contexts. The program facilitates courses of study, workshops, lectures, and provides consultation and information services. Their website includes news items on gender, religion, and sexuality, as well as a list of syllabi, papers and student projects.
- The Office for Diversity Affairs: This office administers an active recruitment program that involves visits by students and staff to other campuses; encourages contacts between applicants and matriculating students; and arranges visits to the Vanderbilt campus for newly accepted under-represented minority applicants. This site also links to related programs fostering diversity at the School of Medicine, such as the Vanderbilt Bridges Program and the Meharry-Vanderbilt Alliance.
- The LGBTQI Resource Office provides information about a variety of organizations that serve the needs of gay, straight, lesbian, bisexual, and transgender undergraduates, graduates, faculty, and staff.
- Margaret Cuninggim Women’s Center: This center sponsors lectures, campus workshops, and special events on women, gender equity, and feminism. These programs are open to students, faculty and staff, as well as interested members of the local community. The center’s 2000-volume library houses the only collection on campus devoted to gender and feminism, and is available for reference, research and general reading.
- Equal Opportunity, Affirmative Action, and Disability Services Department: This center, established in 1977, is Vanderbilt University’s equal opportunity, affirmative action, and disability services office. The center’s mission is to take a proactive stance in assisting the University with the interpretation, understanding, and application of federal and state laws and regulations which impose special obligations in the areas of equal opportunity and affirmative action.
- Project Dialogue: Project Dialogue is a year-long, University-wide program to involve the entire Vanderbilt community in public debate and discussion, and to connect classroom learning with larger societal issues. Project Dialogue has been run every other year since 1989, each year centering on a particular theme. Recent speakers have included Naomi Wolf, Cornel West, Arthur Schlesinger, Jr., Oliver Sacks, and Barbara Ehrenreich.
- Robert Penn Warren Center for the Humanities: The Robert Penn Warren Center for the Humanities promotes interdisciplinary research and study in the humanities and social sciences, and, when appropriate, the natural sciences. The center’s programs are designed to intensify and increase interdisciplinary discussion of academic, social, and cultural issues. Recent and upcoming fellows program themes include: “Memory, Identity, and Political Action,” “Constructions, Deconstructions, and Destructions in Nature,” and “Gender, Sexuality, and Cultural Politics.” Lectures, conferences, and special programs include: Race and Wealth Disparity in 21st Century America, a Gender and Sexuality Lecture Series, Rethinking the Americas: Crossing Borders and Disciplines, Diversity in Learning/Learning and Diversity, Feminist Dialogues, and the Social Construction of the Body.
- The Office of the University Chaplain: This office offers programs to students to help them understand their own faith and the faith of others, clarify their values, and develop a sense of social responsibility. The office also provides leadership for Project Dialogue, as well as the Martin Luther King Jr. Commemorative Series and the Holocaust Lecture Series. International Services and Programs
- English Language Center: This center is a teaching institute offering noncredit English language courses for speakers of other languages.
The center provides English instruction to learners at all levels of proficiency to enable them to achieve their academic, professional, and social goals.
- International Student and Scholar Services: This office offers programs and services to assist international students and scholars across the university. Student Offices and Programs
- Office of Leadership Development and Intercultural Affairs – Dean of Students: This office initiates, develops, and implements multicultural education in the areas of policies, services, and programs for the entire student body.
- International Student Organizations: Lists information on organizations sponsoring programs and offering support systems for international students at Vanderbilt.
- Religious Student Organizations: Lists information on a range of fellowship and worship services provided by Vanderbilt’s diverse religious community.
- Representative Student Organizations: Lists information on a range of additional student groups, such as the Asian-American Student Association, Black Student Alliance, etc.
- Girls and Science Camp: This camp was established at Vanderbilt University in the summer of 1999 in response to the gender differences in science achievement found in high school. Its goals are to engage girls in science activities, to foster confidence in science achievement, and to encourage girls’ enrollment in high school science courses.
- Diversity Web: The Association of American Colleges and Universities and the University of Maryland at College Park have designed DiversityWeb to connect, amplify and multiply campus diversity efforts through a central location on the Web. DiversityWeb is part of a larger communications initiative entitled Diversity Works, a family of projects providing resources to colleges and universities to support diversity as a crucial educational priority. Supported by grants from the Ford Foundation, this initiative is designed to create new pathways for diversity collaboration and connection, via the World Wide Web and more traditional forms of print communication.
- Multicultural Pavilion: The Multicultural Pavilion strives to provide resources for educators, students, and activists to explore and discuss multicultural education; facilitate opportunities for educators to work toward self-awareness and development; and provide forums for educators to interact and collaborate toward a critical, transformative approach to multicultural education. The Pavilion was created in 1995 at the University of Virginia.
- Teaching for Diversity and Inclusiveness in Science, Technology, Engineering and Math (STEM): Angela Linse, Temple University; Wayne Jacobson, University of Washington; & Lois Reddick, New York University, propose in this essay that STEM instructors use a model adapted from research on problem solving to explore the lack of diversity in the STEM student population. As expert problem solvers, STEM instructors are well-prepared to begin addressing this issue in their own courses and programs. Articles from CFT Newsletter:
- Teaching from the Outside In: An article summarizing interviews of Vanderbilt faculty asked to reflect on their experience of teaching from “the outside in.”
- An International Perspective: An interview of Nikolaos Galatos, a Vanderbilt graduate student from Greece who won last year’s B. F. Bryant Prize for Excellence in Teaching for outstanding teaching by a mathematics graduate student.
- From the Student’s View: Difference: An article summarizing interviews with several Vanderbilt undergraduates about their experiences in courses where the instructor was, in some significant way, different from most of the students in the class.
Information about CASH FLOW
Cash flow is the movement of cash and its equivalents. It includes the inflow and the outflow of cash during a particular period. All transactions which lead to an increase in cash and cash equivalents are classified as inflows of cash, and all those transactions which lead to a decrease in cash and cash equivalents are classified as outflows of cash. A cash flow statement, therefore, is a statement that shows the flow of cash and cash equivalents during a period.
OBJECTIVE OF CASH FLOW STATEMENTS: The cash flow statement is prepared with the objective of highlighting the sources and uses of cash and cash equivalents for a period. Cash flows are classified under operating, investing and financing activities, and the statement shows the net increase or net decrease of cash and cash equivalents under each activity.
USES OF CASH FLOW STATEMENT:
- Short Term Planning. The cash flow statement gives information regarding sources and applications of cash and cash equivalents for a specific period, so that it becomes easier to plan the investment, operating and financial needs of an enterprise.
- Understanding Liquidity and Solvency. Liquidity is the ability of the business to pay its current liabilities. Quarterly or monthly cash flow statements help to ascertain liquidity in a better way. Financial institutions, like banks, mostly prefer the cash flow statement to analyse liquidity.
- Efficient Cash Management. The cash flow statement provides information relating to surplus or deficit of cash. An enterprise, therefore, can decide about the short-term investment of a surplus and can arrange short-term credit in case of a deficit.
- Prediction of Sickness. A continuous cash deficit is an indication of sickness.
- Comparative Study. A comparison of the cash flow for the previous year with the budgeted figures of the same year will indicate to what extent the cash resources of the business were generated and applied according to the plan. It is, therefore, useful for the management in preparing the cash budget.
- Reasons for Cash Position. The cash flow statement explains the reasons for a lower or higher cash balance with the firm. Sometimes, a lower cash balance is found in spite of heavy profits, or a higher cash balance is found in spite of lower profits.
LIMITATIONS OF CASH FLOW STATEMENT
Though the cash flow statement is a very useful tool of financial analysis, it has its own limitations, which must be kept in mind at the time of its use. These limitations are:
- Non-cash transactions are ignored.
- Not a substitute for the income statement.
- Not a test of total financial position.
- Historical in nature.
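As a minimal illustration of how the statement groups flows by activity and reconciles the net change in cash, here is a short Python sketch; the transaction descriptions and amounts are invented for the example:

```python
# Sketch: classify cash movements by activity and total them,
# as a cash flow statement does. Figures are made up.
from collections import defaultdict

transactions = [
    ("cash received from customers",  120_000, "operating"),
    ("cash paid to suppliers",        -70_000, "operating"),
    ("purchase of equipment",         -30_000, "investing"),
    ("proceeds from bank loan",        25_000, "financing"),
    ("dividends paid",                -10_000, "financing"),
]

totals = defaultdict(int)
for _desc, amount, activity in transactions:
    totals[activity] += amount

for activity in ("operating", "investing", "financing"):
    print(f"net cash from {activity} activities: {totals[activity]:>8,}")

# Closing cash = opening cash + net increase; this is the figure the
# statement reconciles against the balance sheet's cash balance.
print("net increase in cash:", sum(totals.values()))
```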
Physicists at the Kavli Institute for Theoretical Physics recently designed a computer system that details characteristics of quantum entanglement, a state in which a pair of electron spins is entangled with one another. Entanglement occurs when particles, such as photons or electrons, interact with one another and then separate. Even when separated over a large distance, a measurement on one determines the result for the other, which may open up a new means of secure and quick communication. Each electron in a pair has a property called spin; in an entangled pair, if one electron points up, the other points down. Similar to tiny magnets, the spins have a north pole and a south pole. With these two electrons, a nonclassical ‘entangled state’ can be prepared. Although it is not known in advance whether an electron’s spin points up or down, measuring one gives knowledge of the spin of the other. Leon Balents, a physics professor in the KITP and author of a paper detailing quantum entanglement published in the journal Nature Physics, said that quantum entanglement offers certainty of the state of electrons in a pair, regardless of the distance between the separated partners. “You can form an entangled spin and you can send one to the Moon and one to Mars. Someone measures the first particle on the Earth and if it is up, they know for sure that the person measuring on Mars will definitely measure it down,” Balents said. “Somehow there is some kind of action that happens in quantum entanglement that can be used to correlate information in different places.” Balents’ group is studying systems at a large scale, up to 10²³ electrons entangled with one another in a state called a quantum spin liquid, the holy grail of quantum physics for its possibilities in communication. Zhenghan Wang, a researcher with Microsoft Station Q at UCSB who worked on the mathematics of the project and co-authored Balents’ paper, said that the new computer was designed to analyze the quantum spin liquid. “It is very abstract mathematics. In terms of those quantum phases of physics, it is very difficult to compute. It is just too complicated for classical computers to compute. So we have basically found this one that can do it,” Wang said. “You make assumptions. From a theoretical question, I did not realize how big this problem is for condensed physics.” According to Balents, their research on quantum spin liquid may lead to more secure and large-scale communication. “Photon pairs can be used to carry out perfectly secure communication. In ordinary communication, someone can always tap into the signal, pull off a little current and, in principle, listen in on what you are doing,” Balents said. “Secure cryptographic communication is hard to break but not impossible. As computers get better and better, the cryptography has to be better. With quantum entanglement, it is possible to create an algorithm which in principle is impossible to break.” A version of this article appeared on page 5 of the January 15th, 2013 print edition of the Nexus.
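To make the up/down correlation in Balents' Earth-and-Mars example concrete, here is a small Monte Carlo sketch. It is not from the article; it is a standard textbook model of a spin singlet, assuming the quantum-mechanical prediction that two detectors rotated apart by an angle θ record matching outcomes with probability sin²(θ/2). At θ = 0 (both sides measure along the same axis) the outcomes are always opposite, exactly as in the quote:

```python
# Monte Carlo sketch of singlet-state spin correlations.
import math
import random
random.seed(0)

def measure_singlet(theta: float):
    """Sample one joint measurement of an entangled singlet pair,
    with detector axes separated by angle theta (radians).
    Quantum mechanics gives P(outcomes agree) = sin^2(theta / 2)."""
    agree = random.random() < math.sin(theta / 2) ** 2
    a = random.choice([+1, -1])          # this side's result: up or down
    b = a if agree else -a               # the distant partner's result
    return a, b

def correlation(theta: float, n: int = 100_000) -> float:
    """Estimate E(a*b); the exact quantum value is -cos(theta)."""
    return sum(a * b for a, b in (measure_singlet(theta) for _ in range(n))) / n

for deg in (0, 90, 180):
    th = math.radians(deg)
    print(f"{deg:>3} deg: simulated {correlation(th):+.3f}, "
          f"expected {-math.cos(th):+.3f}")
```

At 0 degrees the estimate converges to -1: measure up here, and the far side measures down, every time.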
High-level radioactive wastes are the highly radioactive materials produced as a byproduct of the reactions that occur inside nuclear reactors. High-level wastes take one of two forms: - Spent (used) reactor fuel when it is accepted for disposal - Waste materials remaining after spent fuel is reprocessed Spent nuclear fuel is used fuel from a reactor that is no longer efficient in creating electricity, because its fission process has slowed. However, it is still thermally hot, highly radioactive, and potentially harmful. Until a permanent disposal repository for spent nuclear fuel is built, licensees must safely store this fuel at their reactors. Reprocessing extracts isotopes from spent fuel that can be used again as reactor fuel. Commercial reprocessing is currently not practiced in the United States, although it has been allowed in the past. However, significant quantities of high-level radioactive waste are produced by the defense reprocessing programs at Department of Energy (DOE) facilities, such as Hanford, Washington, and Savannah River, South Carolina, and by commercial reprocessing operations at West Valley, New York. These wastes, which are generally managed by DOE, are not regulated by the NRC. However, they must be included in any high-level radioactive waste disposal plans, along with all high-level waste from spent reactor fuel. Because of their highly radioactive fission products, high-level waste and spent fuel must be handled and stored with care. Since the only way radioactive waste finally becomes harmless is through decay, which for high-level wastes can take hundreds of thousands of years, the wastes must be stored and finally disposed of in a way that provides adequate protection of the public for a very long time.
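The time scales involved follow from simple exponential decay: after each half-life, half of the remaining radioactive atoms are left. The short Python sketch below assumes a half-life of about 24,100 years (roughly that of plutonium-239; the figure is not taken from the text above).

# Fraction of original activity remaining: N(t) = N0 * 0.5 ** (t / t_half)
T_HALF = 24100  # years, assumed (approximate half-life of plutonium-239)

for years in (0, 24100, 100000, 500000):
    remaining = 0.5 ** (years / T_HALF)
    print(f"after {years:>6} years: {remaining:.4%} of the original activity remains")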
Finding blood in your vomit can be a scary thing. It's not a common occurrence for most people, so they often have no idea what is causing this particular type of reaction. There is a range of potential triggers for vomiting blood, some of which can be quite serious, so it's important to understand what to look for and how to respond if it happens. Signs And Symptoms The condition of vomiting blood is called hematemesis. This occurs when a significant amount of blood appears in a person's vomit. The blood is often bright red or dark red in color. However, it may also appear as black or dark brown, with a similar appearance to coffee grounds. Vomiting blood can refer to vomit that contains blood mixed with other materials, such as food, or solely blood in the vomit. It's important to note that spitting up or coughing up small flecks or streaks of blood is not considered to be hematemesis. This particular condition refers only to large amounts of noticeable blood in vomit. There are several possible causes for vomiting blood, many of which are quite serious or potentially fatal. Because vomiting is the forcing of the stomach contents up through the esophagus, many of the causes of vomiting blood originate in either the stomach or the esophagus. Potential causes and triggers include: - Prolonged or vigorous vomiting - Swallowing blood - Taking aspirin or non-steroidal anti-inflammatory drugs (NSAIDs) - Peptic ulcers - Bleeding ulcers in the stomach, first part of the small intestine or esophagus - High blood pressure in the portal vein - Defects in the blood vessels of the gastrointestinal tract - Inflammation of the pancreas (pancreatitis) - Inflammation of the stomach (gastritis) - Inflammation of the esophagus (esophagitis) - Inflammation of the first part of the small intestine (duodenitis) - A tear in the esophagus due to vomiting or coughing - Breakdown of the tissue lining in the stomach (gastric erosions) - Enlarged veins in the esophagus - Cirrhosis (scarring of the liver) - Benign tumors in the stomach or esophagus - Alcoholic hepatitis - Pancreatic cancer - Stomach cancer - Esophageal cancer - Acute liver failure The causes listed above typically apply more to adults than to children. However, children and infants may also be susceptible to vomiting blood. Some of the possible causes for vomiting blood in children and infants include: - Birth defects - Swallowing blood - Swallowing an object - Blood clotting disorders Seeking Medical Attention Because so many of the possible causes of vomiting blood are serious or life-threatening, it is important to seek medical attention right away if you experience this condition. It's important to find out what the trigger for the vomited blood is in order to treat the underlying cause and prevent significant blood loss or other complications. In certain cases, you may require emergency medical assistance when vomiting blood. Signs to watch for are dizziness, lightheadedness, rapid or shallow breathing, fainting, confusion, blurred vision, nausea, low urine output and cold, clammy skin. If you experience any of these symptoms when vomiting blood, call 911 right away, as the causes for your condition may be very serious. When seeking medical treatment for this condition, doctors will ask a range of questions to help determine the cause of the blood in the vomit.
Common subjects for these questions include the color of the blood, the amount of blood in the vomit, whether other symptoms are present, whether any medications have recently been taken, and whether the patient has recently undergone any surgery or dental work. Treatment for the condition depends upon what the underlying cause is. In order to determine the trigger, doctors may need to use a series of tests, including x-rays, rectal examinations, blood work or a nuclear medicine scan. In some cases, doctors insert a tube through the nose and into the stomach to look for blood and potential causes for the blood in the vomit. Once a cause has been determined, doctors may use blood transfusions, intravenous fluids, or medications to decrease stomach acid to treat the condition. In more serious cases, surgery may be required to repair damage, remove a tumor or perform other work to stop the patient from vomiting blood and to treat the underlying cause.
The “Nazca Lines” captured people's attention in the 1920's, when commercial airlines first flew between Lima and Arequipa, in the southern part of Peru. The land, between the Andes on the one side and the ocean on the other, was barren for hundreds of miles. So how did those lines, many of which had clear shapes and recognizable forms, get there? Over the decades many theories, some more realistic than others, were put forward about the purpose of the lines (also called geoglyphs). Some people were convinced they were astronomical calendars or remnants of Inca roads. Others thought they were irrigation plans marking subterranean water routes. Still others theorized they were an ancient pilgrimage route, and, for some, the only thing that made sense was that they were a means of communicating with aliens. One recent theory that has gained momentum is that they were used in rituals by entire communities. The Nazca Lines Aren't just Lines The lines are located in one of the driest places on earth, where water is often a scarce commodity. The Nazca people settled there, in a sheltered valley in the Andean foothills, somewhere around 200 B.C., and flourished for several centuries. Ten rivers still come down from the Andes into the valley today; evidence of Nazca settlements dots the terrain around these ribbons of green. The Nazca lines are not only lines. They are shapes, including sharks, orcas, lizards, dogs and monkeys, camelids, and bizarre humanoids, as well as scenes of decapitation and trophy heads. They also include lots of geometric shapes, including trapezoids, triangles, and intersecting lines. The Nazca People Though the lines were first seriously researched only after World War Two, recent studies have revealed new meanings for them. Researchers believe entire communities took part in the creation of the geoglyphs, which were made by removing the rock on the surface to reveal the lighter, dry sand underneath. The spiritual capital of the early Nazca was Cahuachi, a site first excavated in the 1950's. It was an almost 400-acre complex with an adobe pyramid, broad plazas, several large temples, and a network of corridors and staircases. The Nazca managed their limited resources for hundreds of years. For instance, they had a sophisticated system of water delivery and conservation, and they worked to protect the fragile substructure of the soil, planting seeds one at a time instead of plowing. There is also evidence that they recycled their garbage and used it as building material. As their population grew, however, the Nazca may have lost the rationale behind some of their methods. In the pursuit of supporting a growing population, they cleared forests to plant crops. Unfortunately, the trees they cleared were the hurango, a tree with roots that can stretch as deep as 180 feet under the earth to reach subterranean water channels. These trees were vital for stabilizing the soil. Evidence shows severe flooding probably caused the Nazca's downfall. The Real Purpose of the Lines Although the Nazca weren't the only civilization that created geoglyphs, they are certainly the most well known. While they flourished, they moved east and west along with the rainfall patterns. Researchers have explored the region from the Andean highlands to the Pacific coast, and have found evidence of Nazca villages, and nearby geoglyphs, almost everywhere they looked. Researchers are now converging on the conclusion that the lines were used as pathways for ceremonial processions.
One of the rationales for that conclusion is the single-line drawings (the spider and the hummingbird). The theory is that a person or group of people could walk each geoglyph without ever crossing another line. When a lot of people walk over one area regularly, the soil is compacted. Some of the lines (such as the 2,000-foot trapezoid) have been tested and shown to be compacted. This further supports the theory that, though the rituals may have originally been single-person quests, they grew as the population grew, with more people participating. The Nazca Lines are a Sight to Behold Whatever their purpose, the Nazca Lines are a marvel and a must-see on any trip to Peru. We invite you to join us on one of our excursions to see the Lines, as part of a customized visit. Bestperutours.com has received the Certificate of Excellence from Trip Advisor for the second year in a row. Book your tour to see Peru, and the Nazca Lines, here.
For most shark species, spending a day in fresh water would be similar to placing us humans on the Moon without a spacesuit. They most likely would not be able to survive, due to the inhospitable surrounding environment. One of the main problems that would pose a serious threat to sharks in this case is the process known as osmosis. Osmosis refers to the movement of a fluid through a semi-permeable membrane from a solution with a low solute concentration to a solution with a higher solute concentration, until the solute concentrations on both sides of the membrane are equal. In this case, the dissolved substances involved are sodium and chloride. Because sharks evolved in salt water, they are equipped with salty bodies. Even sharks in fresh water contain more than twice the amount of sodium and chloride as fish more commonly found in fresh water. In theory, given the osmotic effect, sharks placed in fresh water should burst like a balloon overfilled with air; however, because they urinate a great deal, they are able to avoid this problem. The sharks take in a lot of extra water, but they excrete much of it as dilute urine, at a rate over 20 times that of typical saltwater sharks. What this means is that their kidneys are required to work harder than normal, thus using additional energy. Much like humans who have become accustomed to life in low-oxygen regions, sharks in fresh water appear to adapt to what would seem to be formidable conditions. Although several studies over the years have determined that there are in fact some species of shark residing in freshwater environments, relatively few sharks spend a substantial amount of time there. In fact, river shark populations are now at dangerous lows. Bull shark population numbers are higher, since bull sharks often move between freshwater and saltwater environments. Other species of shark that are more adapted to life in lakes and rivers, however, are faced with having to withstand both natural and human-induced problems within their habitats. Problems that these sharks face include changes in temperature, oxygen and mineral content, and other climate changes. Human activities such as dam building, water modifications such as irrigation, and the introduction of pollutants to the water all pose a serious threat to these particular species of shark.
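The direction of osmotic water movement can be summed up in a toy comparison of solute concentrations; the numbers below are illustrative only, not measured shark physiology.

SHARK = 1000        # internal solute concentration, arbitrary units
SEAWATER = 1000     # roughly matches the shark, so little net water movement
FRESH_WATER = 20    # far more dilute than the shark's tissues

def osmosis(internal, external):
    # Water crosses the membrane toward the side with more dissolved solutes.
    if external < internal:
        return "net water influx - excess must be excreted as dilute urine"
    if external > internal:
        return "net water loss - water must be conserved"
    return "no net water movement"

print("In the sea:", osmosis(SHARK, SEAWATER))
print("In a river:", osmosis(SHARK, FRESH_WATER))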
Wireless communication, or sometimes simply wireless, is the transfer of information or power between two or more points that are not connected by an electrical conductor. The most common wireless technologies use radio waves. With radio waves, distances can be short, such as a few meters for Bluetooth, or as far as millions of kilometers for deep-space radio communications. Wireless communication encompasses various types of fixed, mobile, and portable applications, including two-way radios, cellular telephones, personal digital assistants (PDAs), and wireless networking. Other examples of applications of radio wireless technology include GPS units, garage door openers, wireless computer mice, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones. Somewhat less common methods of achieving wireless communication include the use of other electromagnetic wireless technologies, such as light, magnetic, or electric fields, or the use of sound. The term wireless has been used twice in communications history, with slightly different meanings. It was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. The term was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. This became its primary usage in the 2000s, due to the advent of technologies such as LTE, LTE-Advanced, Wi-Fi and Bluetooth. Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls, etc.) which use some form of energy (e.g. radio waves or acoustic energy) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances. The world's first wireless telephone conversation occurred in 1880, when Alexander Graham Bell and Charles Sumner Tainter invented and patented the photophone, a telephone that conducted audio conversations wirelessly over modulated light beams (which are narrow projections of electromagnetic waves). In that distant era, when utilities did not yet exist to provide electricity and lasers had not even been imagined in science fiction, there were no practical applications for their invention, which was highly limited by the availability of both sunlight and good weather. Similar to free-space optical communication, the photophone also required a clear line of sight between its transmitter and its receiver. It would be several decades before the photophone's principles found their first practical applications in military communications and later in fiber-optic communications. A number of wireless electrical signaling schemes, including sending electric currents through water and the ground using electrostatic and electromagnetic induction, were investigated for telegraphy in the late 19th century before practical radio systems became available.
These included a patented induction system by Thomas Edison allowing a telegraph on a running train to connect with telegraph wires running parallel to the tracks, a William Preece induction telegraph system for sending messages across bodies of water, and several operational and proposed telegraphy and voice earth-conduction systems. The Edison system was used by stranded trains during the Great Blizzard of 1888, and earth-conduction systems found limited use between trenches during World War I, but these systems were never economically successful. In 1894 Guglielmo Marconi began developing a wireless telegraph system using radio waves, which had been known about since proof of their existence in 1888 by Heinrich Hertz, but which had been discounted as a communication format since they seemed, at the time, to be a short-range phenomenon. Marconi soon developed a system that transmitted signals far beyond the distances anyone could have predicted (due in part to the signals bouncing off the then-unknown ionosphere). Guglielmo Marconi and Karl Ferdinand Braun were awarded the 1909 Nobel Prize for Physics for their contribution to this form of wireless telegraphy. Wireless communication can be carried over several physical channels, described below. Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to transmit data wirelessly for telecommunications or computer networking. "Free space" means the light beams travel through the open air or outer space. This contrasts with other communication technologies that use light beams traveling through transmission lines such as optical fiber or dielectric "light pipes". The technology is useful where physical connections are impractical due to high costs or other considerations. For example, free-space optical links are used in cities between office buildings which are not wired for networking, where the cost of running cable through the building and under the street would be prohibitive. Another widely used example is consumer IR devices such as remote controls and IrDA (Infrared Data Association) networking, which is used as an alternative to Wi-Fi networking to allow laptops, PDAs, printers, and digital cameras to exchange data. Sonic communication, especially ultrasonic short-range communication, involves the transmission and reception of sound. Electromagnetic induction allows short-range communication and power transfer. It has been used in biomedical situations such as pacemakers, as well as for short-range RFID tags. Light, colors, AM and FM radio, and electronic devices make use of the electromagnetic spectrum. The frequencies of the radio spectrum that are available for use for communication are treated as a public resource and are regulated by national organizations such as the Federal Communications Commission in the USA or Ofcom in the United Kingdom, by international bodies such as the ITU-R, or by European ones such as ETSI. This regulation determines which frequency ranges can be used for what purpose and by whom. In the absence of such control, or of alternative arrangements such as a privatized electromagnetic spectrum, chaos might result if, for example, airlines did not have specific frequencies to work under and an amateur radio operator were interfering with the pilot's ability to land an aircraft.
Wireless communication spans the spectrum from 9 kHz to 300 GHz. One of the best-known examples of wireless technology is the mobile phone, also known as a cellular phone, with more than 6.6 billion mobile cellular subscriptions worldwide as of the end of 2010. These wireless phones use radio waves from signal-transmission towers to enable their users to make phone calls from many locations worldwide. They can be used within range of the mobile telephone site used to house the equipment required to transmit and receive the radio signals from these instruments. Wireless data communication allows wireless networking between desktop computers, laptops, tablet computers, cell phones and other related devices. The various available technologies differ in local availability, coverage range and performance, and in some circumstances users employ multiple connection types and switch between them using connection manager software or a mobile VPN to handle the multiple connections as a secure, single virtual network. Wireless data communications are used to span a distance beyond the capabilities of typical cabling in point-to-point communication or point-to-multipoint communication, to provide a backup communications link in case of normal network failure, to link portable or temporary workstations, to overcome situations where normal cabling is difficult or financially impractical, or to remotely connect mobile users or networks. Peripheral devices in computing can also be connected wirelessly, as part of a Wi-Fi network or directly by optical infrared, Bluetooth or Wireless USB. Originally these units used bulky, highly local transceivers to mediate between a computer and a keyboard and mouse; however, more recent generations have used small, higher-quality devices. A battery powers computer interface devices such as a keyboard or mouse, which send signals to a receiver through a USB port by way of an optical or radio frequency (RF) link. An RF design makes it possible to expand the range of efficient use, usually up to 10 feet, but distance, physical obstacles, competing signals, and even human bodies can all degrade the signal quality. Concerns about the security of wireless keyboards arose at the end of 2007, when it was revealed that Microsoft's implementation of encryption in some of its 27 MHz models was highly insecure. Wireless energy transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. There are two fundamental methods of wireless energy transfer: far-field methods that involve beaming power via lasers, radio or microwave transmissions, and near-field methods using induction. Both methods make use of electromagnetic fields. New wireless technologies, such as mobile body area networks (MBAN), have the capability to monitor blood pressure, heart rate, oxygen level and body temperature. An MBAN works by sending low-powered wireless signals to receivers that feed into nursing stations or monitoring sites. This technology avoids the risks of infection or accidental disconnection that arise from wired connections.
Presbyopia (from the Greek word "presbys" (πρέσβυς), meaning "old person") describes the condition in which the eye exhibits a progressively diminished ability to focus on near objects with age. Presbyopia's exact mechanisms are not known with certainty; the research evidence most strongly supports a loss of elasticity of the crystalline lens, although changes in the lens's curvature from continual growth and loss of power of the ciliary muscles (the muscles that bend and straighten the lens) have also been postulated as causes. Like grey hair and wrinkles, presbyopia is a symptom caused by the natural course of aging. The first symptoms (described below) are usually noticed between the ages of 40 and 50. The ability to focus on near objects declines throughout life, from an accommodation of about 20 dioptres (the ability to focus at 50 mm away) in a child, to 10 dioptres at age 25 (100 mm), leveling off at 0.5 to 1 dioptre at age 60 (the ability to focus down to only 1-2 meters). The first symptoms most people notice are difficulty reading fine print, particularly in low-light conditions, eyestrain when reading for long periods, blur at near distances, or momentarily blurred vision when transitioning between viewing distances. Many advanced presbyopes complain that their arms have become "too short" to hold reading material at a comfortable distance. Presbyopia, like other focus defects, becomes much less noticeable in bright sunlight. This is not the result of any mysterious 'healing effect' but simply the consequence of the iris closing to a pinhole, so that depth of focus, regardless of actual ability to focus, is greatly enhanced, as in a pinhole camera, which produces images without any lens at all. Another way of putting this is to say that the circle of confusion, or blurredness of the image, is reduced, without any improvement in focusing. A delayed onset of seeking correction for presbyopia has been found among those with certain professions and those with miotic pupils. In particular, farmers and housewives seek correction later, whereas service workers and construction workers seek eyesight correction earlier. Presbyopia is not routinely curable - though tentative steps toward a possible cure suggest that this may one day be possible - but the loss of focusing ability can be compensated for by corrective lenses, including eyeglasses or contact lenses. In subjects with other refractive problems, convex lenses are used. In some cases, the addition of bifocals to an existing lens prescription is sufficient. As the ability to change focus worsens, the prescription needs to be changed accordingly. Around the age of 65, the eyes have usually lost most of their elasticity. However, it will still be possible to read with the help of an appropriate prescription. Some may find it necessary to hold reading materials farther away, or require larger print and more light to read by. People who do not need glasses for distance vision may only need half glasses or reading glasses. Another approach is TruFocals, where the user moves a slider to choose between focusing on near and far objects. In order to reduce the need for bifocals or reading glasses, some people choose contact lenses that correct one eye for near and one eye for far, a method called "monovision". Monovision sometimes interferes with depth perception. There are also newer bifocal or multifocal contact lenses that attempt to correct both near and far vision with the same lens.
New surgical procedures may also provide solutions for those who do not want to wear glasses or contacts, including the implantation of accommodative intraocular lenses (IOLs).
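The dioptre figures above translate directly into a nearest focusing distance, since the near point in meters is roughly the reciprocal of the accommodation in dioptres. A quick Python check using the article's numbers:

# Near point (m) ~= 1 / accommodation (dioptres)
for age, dioptres in [("child", 20.0), ("age 25", 10.0), ("age 60", 0.75)]:
    near_point_cm = 100.0 / dioptres
    print(f"{age}: {dioptres} D -> nearest focus about {near_point_cm:.0f} cm")

# child: 20.0 D -> about 5 cm (the 50 mm quoted above)
# age 25: 10.0 D -> about 10 cm (100 mm)
# age 60: 0.75 D -> about 133 cm (within the quoted 1-2 m range)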
Amnion vs Chorion | Development, Location and Functions Both amnion and chorion are extra-embryonic membranes that protect the embryo and provide it with nutrients for growth and development throughout intrauterine life. The amnion is the inner layer that surrounds the amniotic cavity, while the chorion is the outer layer that covers the amnion, the yolk sac and the allantois. This article points out the differences between amnion and chorion with regard to their development, location and functions. As mentioned above, the amnion is an extra-embryonic membrane that lines the amniotic cavity. It consists of two layers, where the outermost layer is formed from the mesoderm and the innermost layer is formed from the ectoderm. When it is formed in early pregnancy, it is in contact with the body of the embryo, but 4-5 weeks later amniotic fluid begins to accumulate between the two layers, forming the amniotic sac. The amnion does not contain any vessels or nerves, but it does contain a significant amount of phospholipids, as well as enzymes involved in phospholipid hydrolysis. Initially, amniotic fluid is mainly secreted by the amnion, but by about the 10th week of gestation it is mainly a transudate of the fetal serum via the skin and the umbilical cord. Amniotic fluid volume increases progressively, but towards the end of pregnancy there is a rapid fall in the volume. The main functions of the amniotic fluid are to protect the fetus from mechanical injury, permit movement of the fetus and prevent contractures, aid in the development of the fetal lung, and prevent adhesion formation between fetus and amnion. The amnion is present in birds, reptiles and mammals. The chorion is an extra-embryonic membrane that covers the embryo and the other membranes. It is formed from extra-embryonic mesoderm with two layers of trophoblasts. As with the amnion, it does not contain any vessels or nerves, but it does contain a significant amount of phospholipids and enzymes involved in phospholipid hydrolysis. The chorionic villi, which are finger-like processes that emerge from the chorion, invade the endometrium and are entrusted with the task of transferring nutrients from mother to fetus. Chorionic villi consist of two layers, where the outer layer is formed from the trophoblasts and the inner layer from the somatic mesoderm. The chorionic villi are vascularized by the mesoderm, which carries branches of the umbilical vessels. Until the end of the second trimester, the villi covering the chorion are uniform in size, but later they develop unequally. The chorion contributes to the formation of the placenta. What is the difference between Amnion and Chorion? • The amnion is the inner membrane that surrounds the amniotic cavity, while the chorion is the outer membrane that surrounds the amnion, the yolk sac and the allantois. • The amnion is filled with amniotic fluid, which aids in the growth and development of the embryo, while the chorion acts as a protective barrier. • The amnion comprises mesoderm and ectoderm, while the chorion is made of trophoblasts and mesoderm. • The chorion has finger-like processes called chorionic villi.
Insulin is a hormone produced by the pancreas that unlocks the body's cells and lets glucose in. Without insulin, your body would be unable to process the glucose it gets from the carbohydrates in the foods you eat. Some people are born without the ability to produce insulin - a condition called Type I diabetes - and must inject themselves with synthetic insulin several times a day. Other people, the overwhelming majority of diabetics, have a condition known as insulin resistance. This condition is called Type II or adult-onset diabetes. The hormone insulin is an essential part of your endocrine system. It is produced by beta cells in your pancreas, a vital organ near your liver. Insulin is released whenever your body detects a rise in blood glucose. This rise can come from eating a meal or from your body's natural preparation for physical activity or waking up. In people without diabetes, the amount of insulin the pancreas supplies is in direct proportion to the amount of blood glucose in the bloodstream. Insulin prompts cells to take in glucose, and blood sugar levels drop to less than 140 milligrams per deciliter (mg/dl) shortly after a meal. In people with diabetes, however, insulin cannot keep up with the ever-increasing amount of glucose in the blood. This occurs because the cells of a diabetic's body become resistant to the effects of insulin. They no longer open up and let the glucose in when stimulated by insulin. The cells remain unnourished and demand more glucose. You eat more, providing more glucose for absorption. Your pancreas produces insulin at a rapid pace, but the cells remain insensitive to the available insulin. At this stage, diet and exercise can have a profoundly positive effect on insulin resistance. Performing some kind of physical activity an hour after a meal can use up much of the blood glucose still in the bloodstream, so it won't remain in your body, wreaking havoc on your circulatory system and organs. As little as 10 minutes of walking, household chores like vacuuming or gardening, or a bicycle ride can greatly reduce blood glucose levels after eating. If your cells become resistant to insulin and you do nothing to alter your diet or lifestyle, after a while your pancreas may give up. Little, if any, insulin is produced, and blood glucose rises even higher. At this point you must take diabetes medication and possibly insulin injections. Many doctors also prescribe drugs that decrease the amount of glucose produced by your liver and the amount of glucose absorbed into your body by your intestines. These drugs, like Metformin, lessen your cells' resistance to insulin. Although diabetes is generally considered a genetic disorder, some people with the gene never develop Type II diabetes. Maintaining a healthy weight, eating a diet low in fat and rich in vegetables and plant proteins (like beans and lentils), consuming no meat or only very lean meats, and keeping physically active can reduce or eliminate the risk of developing insulin resistance. According to the American Diabetes Association, managing weight gain in pregnancy is also important. Babies born to obese mothers have a significantly higher risk of developing insulin resistance later in life. Insulin resistance is often associated with obesity, steroid use, pregnancy, decreased liver function, and stress. It is usually a gradual process that can begin decades before diabetes is diagnosed.
As long as the pancreas can keep up with the amount of insulin necessary to keep blood glucose at normal levels, you may not even know there is a problem. Your first clue will probably be a rise in post-meal blood sugar, but insulin resistance is usually not discovered until your fasting blood sugar levels have topped 100 mg/dl. Regular medical check-ups are essential. If your family has a history of Type II diabetes, monitoring your fasting and post-meal blood glucose levels can give you valuable information about your potential for developing insulin resistance.
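As a minimal sketch of how those thresholds might be used, here is an illustrative Python check. The 100 mg/dl fasting figure comes from the text above; the 126 mg/dl diabetic-range cut-off is a common clinical figure added here as an assumption, and none of this is a substitute for a medical diagnosis.

def fasting_glucose_flag(mg_dl):
    # < 100 mg/dl: normal (threshold from the text above)
    # 100-125 mg/dl: elevated, consistent with insulin resistance
    # >= 126 mg/dl: commonly used diabetic-range cut-off (assumed, not from the text)
    if mg_dl < 100:
        return "normal"
    if mg_dl < 126:
        return "elevated - possible insulin resistance"
    return "diabetic range - consult a doctor"

for reading in (92, 108, 131):
    print(reading, "mg/dl ->", fasting_glucose_flag(reading))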
Not surprisingly, given the American anti-colonial, anti-imperialist tradition, the acquisition of territories and colonies as outlined by the Treaty of Paris caused considerable debate. An organization known as the Anti-Imperialist League arose in the US, standing in opposition to American expansion and imperialism. Some of the nation's most famous people, including the writer Mark Twain and the philosopher William James, were leading figures in the Anti-Imperialist League. This vocal minority made many points that still carry the ring of good sense today. However, in the late 1890s, their view did not win out. Instead, pro-imperialism, backed by an ideology of jingoism, carried the day. The Treaty of Paris, though signed, still had to be passed by two-thirds of the Senate in 1899. The Democrats had enough votes to block passage of the treaty, and for a while it looked as if Senate deadlock was inevitable. Finally, William Jennings Bryan, a leading Democrat and constant opponent of President McKinley, decided to support the treaty. Convincing several of the Democratic senators to change their minds, Bryan barely got the treaty passed in the Senate on February 6, 1899. In supporting the Treaty of Paris, Bryan had a trick up his sleeve. He knew that if the treaty passed, the nation would see the Republicans, the majority party at the time, as responsible. In the election of 1900, Bryan hoped to run against McKinley on an anti-imperialist platform, and by passing the treaty, he hoped to associate the Republicans with imperialism. Bryan expected imperialism to quickly become unpopular, giving the Democrats an issue to criticize the Republicans over. Unfortunately for Bryan, not enough voters were upset about imperialism by 1900 to aid his cause: he still lost to McKinley. Bryan also appeared to vote as he did for ideological reasons reminiscent of British patriarchal colonialism: he suggested that the sooner the US annexed the Philippines, Guam, and Puerto Rico, the sooner the US could prepare them for independence. The annexation of the Philippines caused major problems, however. The Filipinos had fought with the Americans against the Spanish, thinking that the Americans were there to liberate the Philippines in the same way they were liberating Cuba. When the hoped-for freedom failed to materialize and the Americans did not go home, the Filipinos felt betrayed. On January 23, 1899, the Filipinos proclaimed an independent republic and elected long-time nationalist Emilio Aguinaldo president. The US sent in reinforcements to put down this "rogue" government. Fighting against the Filipino nationalists they had fought alongside months earlier, the US endured two harsh years of battle. Aguinaldo's guerrilla fighters put the US through a much more difficult and bloody conflict than the relatively easy Spanish-American War. Still, the Filipinos never had much chance against the superior force of the Americans. On March 23, 1901, the US finally put down the Filipino revolt by capturing Aguinaldo. After being forced to take an oath of loyalty and receiving a pension from the US government, Aguinaldo retired, and never led further revolutions. The founders of the United States, who fought a revolution to end their own status as a colony of Britain, probably never expected that a little more than a century later the United States would take colonies of its own. From this perspective, America's imperialism during and after the Spanish-American War is quite a shock, which some have called the "Great Aberration."
It is therefore not surprising that a strong resistance movement, the Anti-Imperialists, would rise up. However, from another perspective, American imperialism in 1898 was not a sudden abandonment of anti-colonial tradition, but was a logical extension of commercial expansion, something the US had been doing throughout its history. The claim that the year 1898 was an aberration in American history is undermined by the facts. Today, the biggest colonialist of recent history, Great Britain, has relinquished its last colony, Hong Kong. Meanwhile, America still possesses the protectorates of Guam and Puerto Rico, and still has naval bases in Cuba and the Philippines. In this sense, the imperialist effects of the Spanish-American War remain alive even in the present. The Anti-Imperialist argument was as follows. Since the Filipinos wanted freedom, annexing their homeland violated the basic American principle that just government derives from the "consent of the governed." Second, and perhaps more practically, the Anti-Imperialists felt that American territory in the Philippines would make it likely that events in Asia would involve the US in more conflicts and more wars.
Chapter 2 : Introducing Variables Before we go any further, let's just sort out one or two things. You've probably grown a little tired of typing in the word PRINT over and over again and wished there was a shorter way of doing it. Fortunately there is! Both Basic and the command line interpreter recognise minimum abbreviations. This means that you only have to type in enough of a keyword for it to be distinct from any other keyword, then follow it with a full stop (.). If the abbreviation is short for more than one keyword, Basic replaces it with the one which is first on its list, which is usually the most-used one. The minimum abbreviation for PRINT is P., which certainly saves a lot of typing! Similarly, the minimum abbreviation for MODE is MO. The command '*Cat' which we used first of all to catalogue the disc has the shortest abbreviation of all: '*.'. All the listings in this guide will be shown with keywords in full, to make their meanings clear. By all means use minimum abbreviations - they speed up entering your program, as well as saving your fingers! When you're using the command line (but not the task window), another technique that can mean less typing is using the cursor edit keys. If you are typing in a line which is similar to one already on the screen, you can use the arrow keys to allow you to copy part of what you already have. Suppose, for example, you were entering our earlier program: 10PRINT "HELLO" 20PRINT "GOODBYE" After entering the first line, just type '20' then press the up-arrow key. The cursor will appear to move up one line, leaving behind a solid block. What has happened is that you now have two cursors. The block is in fact now the write cursor and the underline character the read cursor. Use the arrow keys to position the read cursor underneath the first of the characters you wish to copy, in this case the start of the word PRINT, and press the Copy key. The character above it will be copied to the position of the write cursor and both cursors will move one position to the right, allowing you to copy the next character. In this way, you can easily copy a chunk of your listing without having to type it in again. What are Variables? The previous section contained a brief reference to variables. A variable is a number which is referred to by a name. We give it a value when we first create it, and we may change it during the program. Enter Basic and try typing: x=3 We have created a variable called simply 'x' and given it a value of 3. Now try another one: y=x+2 Basic works out the value of the right-hand side of the equation, that is the part following the equals sign, and makes this the value of a new variable, called 'y'. Because the value of x is 3, it doesn't take a mathematical genius to see that the value of y must be 5, i.e. 3+2. You can prove this by typing: PRINT y Basic always works out the value of whatever follows the word PRINT before printing it, unless it is in quotes. For example, PRINT y+2 would print 7, but PRINT "y+2" would print the characters y+2 themselves. This is why the words HELLO etc. which we printed in the previous section had to be in quotes. Without them, Basic would have thought that HELLO was the name of a variable and given you an error message saying 'Unknown or missing variable'. Choosing Names for Variables A variable name doesn't have to be a single letter. In fact it is best if it's a word, chosen to describe what the variable does. Any group of letters and numbers can be used. Spaces are not allowed, but you can use an underline symbol (_) instead.
The name must start with a letter and must not begin with a Basic keyword. You couldn't, for example, use TOTAL as a variable name, because 'TO' is a keyword. You can, however, use 'total'. This example shows the best way of avoiding the problem. All Basic keywords are in capitals. If you always put variable names in lower case, there will be no chance of one clashing with a keyword. Another advantage is that it makes the program a lot easier to read, as all keywords are in capitals and all variable names are in lower case. Types of Variable There are three types of variable: floating point, integer and string. Floating point variables are used to represent numbers that may contain fractions, for example: price=1.75 Note by the way that a full stop is used as a decimal point in Basic, as it is elsewhere on the computer. Integer variables are used to represent whole numbers. An integer variable has a name ending in a '%' sign, for example: count%=10 If you try to put a fraction into an integer variable, it will be rounded down to the nearest whole number. There are three advantages of using integer variables: they are processed faster, they are stored exactly with no rounding errors, and they occupy less memory. It is always best to use integer variables if you can. Only use a floating point variable if its value is likely to contain a fraction, or be outside the limits -2147483648 to 2147483647, which integer variables can't handle. Automatic Line Numbering We'll look at string variables shortly, but first let's try a very simple program working with numbers. Enter Basic and type: AUTO This will put the line numbers on the screen to save you having to type them at the beginning of each line. If you don't add any figures after AUTO, the line numbers will increase by 10 each time you press Return. Using this facility, type in the following: 10PRINT "What is the first number?" 20INPUT first% 30PRINT "What is the second number?" 40INPUT second% 50PRINT first%+second% 60GOTO 10 When you have finished, press Esc to get out of the AUTO facility. If you make a mistake and spot it before pressing Return, you can use the backspace key to delete back to it and try again. If you don't notice it until after pressing Return, you'll have to retype the line. You can restart automatic line numbering at any point you like. If you had stopped, for example, after line 30, just type: AUTO 40 to continue. If you want the line numbers to increase in steps other than 10, for example 5, type: AUTO 10,5 Using INPUT The first line of our program prints a message asking you for a number. Line 20 introduces the Basic keyword INPUT. This makes the program wait while you type in a number, then makes it the value of the variable called first% when you press Return. Lines 30 and 40 repeat the procedure so that you can enter a second number, making it the value of second%, then line 50 adds them together and prints the result. The command GOTO in line 60 is all one word, without a space in the middle. It tells the program to jump back to line 10 and continue from there. We've used the GOTO command here to avoid introducing too many new concepts in one go. It's not a good command to use, for reasons which we'll find out later, and this is the only place in this guide where it will be used. When you run the program, it will ask you for a number, wait while you type it in, then ask you for the second one. When you have entered that one, it will print the sum of the two and ask you for a new first number - it has jumped back to the beginning and started again. The best way out of this program is to press Esc.
Errors Involving Variable Names Lines 20 and 40 in the program we've just looked at create the two variables first% and second% by giving them values. They don't exist until these two lines are executed. Line 50 uses them, but they have to exist already for this line to work. If for any reason one of them didn't, you would get an error message saying: Unknown or missing variable at line 50 Suppose you had made a mistake typing in line 20, so that it read: 20INPUT frist% You would still get the same error message saying that there was a mistake in line 50, although there would be nothing wrong with line 50. The trouble is that a variable has been created in line 20 as intended, but it's been called 'frist%' instead of 'first%' (Basic doesn't know you've made a spelling mistake!). When the program gets to line 50, it looks for a variable called 'first%' and, of course, it can't find it. The number in an error message tells you the line where the error was detected, which is not necessarily where it occurred. String Variables The third type of variable is a string variable. This type doesn't represent a number, but a string of characters. Its name ends in a dollar ($) sign. An example would be: name$="Fred" String variables can be concatenated, or added together, which means that the strings of characters themselves are joined together. Try this: first_name$="Fred" last_name$="Smith" full_name$=first_name$+last_name$ PRINT full_name$ You should get: FredSmith Of course, a space between the first and last names would be nice. You could get one by modifying the third line to read: full_name$=first_name$+" "+last_name$ There is a space between the quotes. Using Parts of Strings As well as joining strings together, you can obtain parts of them, using the keywords LEFT$, MID$ and RIGHT$. LEFT$ lets you take some characters from the left-hand end of the string. It is followed by the name of the string variable and the number of characters in brackets, for example: word$="ABCDE" PRINT LEFT$(word$,3) gives ABC. If you omit the number of characters, the result is the original string minus its last character: word$="ABCDE" PRINT LEFT$(word$) gives ABCD. RIGHT$ does the same thing at the right-hand end of the string: name$="ABCDE" PRINT RIGHT$(name$,3) gives CDE. In this case, if you omit the number of characters, you get the last character of the string: name$="ABCDE" PRINT RIGHT$(name$) gives E. MID$ allows you to take one or more characters from anywhere in the string. Like LEFT$ and RIGHT$, it is followed by the name of the string in brackets, but there are now two numbers, telling us the position in the string of the first character and the number of characters that we want, for example: name$="ABCDE" PRINT MID$(name$,2,3) gives BCD. The first of these two numbers is always needed, but if you omit the second one, you get all the characters to the end of the string: name$="ABCDE" PRINT MID$(name$,2) gives BCDE. In all cases, you can use a variable in place of a number. LEFT$, RIGHT$ and MID$ can also be used to replace part of a string, for example: LEFT$(a$,3)=b$ The first three characters of a$ are replaced by the first three characters of b$, or all of b$, if it is shorter than this. Similarly: RIGHT$(a$,3)=b$ replaces the last three characters with the first three of b$, and you can do a similar operation using MID$ (try it). None of these operations alters the length of a$. You can find out the length of a string, that is the number of characters in it, by using the keyword LEN, for example: word$="ABCDE" PRINT LEN(word$) gives 5.
Microsoft Excel files are organized into a set of worksheets, each containing its own set of data. By default, a file has three worksheets, but you can merge the worksheets from one XLS file into another. The process is completed using the worksheet tabs at the bottom of the XLS file. 1. Start Microsoft Excel and open the existing file into which you want to merge information (call it file1.xls) from another file. Then open the other file (file2.xls) from which you want to merge information. Both files must be open at the same time. 2. Right-click the first tab at the bottom of file2.xls (which contains the information you want to merge into file1.xls) and select "Move or Copy..." from the list of options. The program loads a new dialog box. 3. Choose the name of the other file (file1.xls in this example) from the "Move Selected Sheets to Book" drop-down box, then click "(move to end)" in the "Before Sheet" box. 4. Click "OK" to merge the worksheet from file2.xls into file1.xls. 5. Select the next worksheet in the open file2.xls document and repeat steps two to four to merge all worksheets into the first file (file1.xls).
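If you have many worksheets to move, the same merge can be scripted. Below is a rough Python sketch using pandas and openpyxl; it assumes the workbooks have been saved in the newer .xlsx format (openpyxl does not handle legacy .xls files) and that the sheet names in the two files do not collide.

import pandas as pd

# Read every worksheet from the source workbook into a dict of DataFrames.
source_sheets = pd.read_excel("file2.xlsx", sheet_name=None)

# Append each worksheet to the destination workbook (mode="a" appends
# rather than overwriting the existing file).
with pd.ExcelWriter("file1.xlsx", mode="a", engine="openpyxl") as writer:
    for name, frame in source_sheets.items():
        frame.to_excel(writer, sheet_name=name, index=False)

Unlike the Move or Copy dialog, this route copies cell values only; formatting and formulas are not preserved.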
Warp Layer About Warp Layers The warp layer is a simple but powerful distortion layer. In a few words, it takes a rectangular portion of the resulting render of the layers that are behind it and maps the four corners of the rectangle to four arbitrary points in the plane. It is a 2D -> 2D transformation. To keep the object in place when applying the perspective, you need to set the corners of the perspective destination properly around the object. Notice that the source rectangle must be centered on the object to achieve a good effect. If source == destination then the object is not warped. Parameters of Warp Layers The parameters of the warp layer are:
- Source TL: the Top Left corner (vertex) of the source that is going to be mapped.
- Source BR: the Bottom Right corner (vertex) of the source that is going to be mapped. Combined with Source TL, it defines the "Source rectangle".
- Destination TL: the Top Left corner (vertex) of the destination where the source is going to be mapped.
- Destination TR: the Top Right corner (vertex) of the destination where the source is going to be mapped.
- Destination BL: the Bottom Left corner (vertex) of the destination where the source is going to be mapped.
- Destination BR: the Bottom Right corner (vertex) of the destination where the source is going to be mapped.
- Clip: when checked (boolean), only the pixels which lie inside the "Source rectangle" are mapped.
- Horizon: for infinite layers (gradients, checkerboard, etc.) it defines (Real) where to stop rendering points in the direction of the vanishing point.
- From 0.0 to 1.0 it renders all the points that are backwards in the perspective (in the opposite direction to the vanishing point).
- From 1.0 to +infinity it renders the points that go in the same direction as the vanishing point.
High values of Horizon make Synfig spend a lot of time rendering, and the result is only slightly better visually. Here are a few sample images of the result of applying the warp layer over a checkerboard layer. The handles of the warp layer. The dotted lines represent the Source rectangle. Notice how it corresponds to a 4 by 4 section of the checkerboard. The resulting distortion (Horizon = 15.0 and Clip = off). The destination rectangle can be seen to contain the same 4 by 4 section of the checkerboard that the source rectangle contains. You can see that the rendered horizon is the result of connecting the two vanishing points of the perspective distortion. In this case there are two vanishing points, given by the intersections of the lines that connect the following points:
- Intersection of the line that passes through the Destination TL and Destination TR points with the line that passes through the Destination BL and Destination BR points. It gives vanishing point V1 (not shown in the diagram because it is outside the visible area, far off to the left).
- Intersection of the line that passes through the Destination TL and Destination BL points with the line that passes through the Destination TR and Destination BR points. It gives vanishing point V2.
Connecting the vanishing points V1 and V2 gives the horizon line. See the diagram. To understand how the Horizon parameter works, see the animated image, which moves the Horizon parameter from 0.0 to 30 over four seconds. Notice that from Horizon = 0.0 to 1.0 it renders from minus infinity to the current observer position (this happens very fast in the animation, taking about 4/30 seconds, and most of the time the rendered section is outside the visible area of the canvas - you can see it if you place a zoom out layer on top).
This means that the portion of the plane that is behind the observer's point of view is rendered deformed and then greatly enlarged. Notice too that from Horizon = 1.0 to 30.0 it renders the rest of the visible plane. As the Horizon parameter gets larger, the final visible horizon gets further away. The Clip parameter This image shows what you will get when you check it on. Only the pixels from inside the Source rectangle are mapped - in this case, the 4 by 4 section of the checkerboard. Turning warp on/off Simply set the destination handles to the same position as the source handles.
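The four-corner mapping the warp layer performs is a standard projective (homography) transform. As an illustration of the underlying math (not Synfig's actual source code), the following Python sketch solves for the 3x3 matrix that sends four source corners to four destination corners and then warps an arbitrary point:

import numpy as np

def homography(src, dst):
    # Solve for H (3x3, bottom-right entry fixed to 1) such that each
    # source corner (x, y) maps to its destination corner (u, v).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Corners in TL, TR, BL, BR order: map the unit square to a trapezoid.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0.2, 0), (0.8, 0), (0, 1), (1, 1)]
H = homography(src, dst)

# Warp a point: multiply by H, then divide by the homogeneous coordinate.
x, y = 0.5, 0.5
u, v, w = H @ np.array([x, y, 1.0])
print(u / w, v / w)

Points outside the source rectangle are warped by the same matrix, which is why an unclipped infinite layer converges toward the vanishing points described above.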