1. The problem statement, all variables and given/known data

In the javelin throw at a track-and-field event, the javelin is launched at a speed of 23 m/s at an angle of 31° above the horizontal. As the javelin travels upward, its velocity points above the horizontal at an angle that decreases as time passes. How much time is required for the angle to be reduced from 31° at launch to 21°?

2. Relevant equations

Velocity and time: v = v0 + a·t

3. The attempt at a solution

First I found the horizontal and vertical velocities when the angle is 31°:

Vx1 = cos(31°) · 23 = 19.7
Vy1 = sin(31°) · 23 = 11.8

Then I found the horizontal and vertical velocities when the angle is 21°:

Vx2 = Vx1 = cos(31°) · 23 = 19.7
Vy2 = Vy1 + (−9.8) · t

Lastly, in order to find t:

tan(21°) = Vy2 / Vx2 = (11.8 − 9.8·t) / 19.7
t ≈ 0.44 seconds

Apparently, this is not the right answer. Please let me know what I am doing wrong. Thank you!
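The attempt above can be checked numerically. A minimal sketch, assuming plain projectile kinematics with no air resistance (function name is illustrative):

```python
import math

def time_to_angle(v0, theta0_deg, theta1_deg, g=9.8):
    """Time for a projectile's velocity vector to tilt from theta0 to theta1
    above the horizontal, assuming no air resistance."""
    vx = v0 * math.cos(math.radians(theta0_deg))   # horizontal speed (constant)
    vy0 = v0 * math.sin(math.radians(theta0_deg))  # initial vertical speed
    vy1 = vx * math.tan(math.radians(theta1_deg))  # vertical speed when angle is theta1
    return (vy0 - vy1) / g                         # vy decreases at rate g

t = time_to_angle(23, 31, 21)
print(round(t, 2))  # 0.44, matching the poster's result
```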
A tool that plots individual tumors as dots that cluster together, forming a topology of genomic similarity. Researchers can make inferences about a particular tumor based on where it sits on the map.
The goal is to learn how a tumor differs from the rest of a person on a genomic level and what it has in common with other people’s tumors. Reaching this goal usually starts with a surgeon removing a tumor. A scientist who specializes in preparing tissue for sequencing isolates the DNA and RNA (these two different kinds of molecules are both made up of strings of nucleotides), cuts the nucleotide molecules into shorter pieces, and triggers a series of chemical reactions to prepare the molecules for the sequencer. Next, a technical expert uses a sequencer (which can be the size of a photo booth and cost as much as 10 Tesla Model S cars) to determine the sequences of the short pieces of DNA and generate files containing the sequence of nucleotides represented as A, T, C and G. These steps all require extreme care, because often there isn’t much of the tumor to work with, so a mistake can end the entire process. The sequencing step itself is also very expensive, so even when there is enough to start over with, mistakes are economically costly.
The contents of the file that comes from sequencing the RNA often has more than a billion letters (nucleotides) in it, split into groups of 100-200 letters. If you downloaded the file to your phone, it could take up as much space as a movie. The file that comes from the DNA often has more than 90 billion letters and could barely fit on a large USB thumb drive. These files are next sent to scientists who specialize in analyzing sequence. Using the data from sequencing DNA, we want to find out how the genome sequence of an individual tumor compares to the sequences from thousands of other tumors.
Analysis of Patient TH_005 in the context of thousands of tumors
The DNA and RNA tell us different things (changes to letters on the one hand, and how often genes are activated on the other), but the first step in getting answers from files is the same: Take each set of 100-200 letters and find the most likely source position among the 3.2 billion letters in the human genome reference. This is not unlike taking your copy of the complete works of Shakespeare (your genome), shredding it (sequencing it), and trying to match the shreds up to the library’s copy of the book (the human reference genome). Most parts are the same, but often there are small or large differences between the shreds and the library’s book, similar to what you’d find if it was a different edition.
This step, matching the shreds to the expected sequence for a human, is called mapping. This step and the next ones are performed by bioinformaticians. Generally, bioinformaticians are people who specialize in handling biological information. In this case, they specialize in interpreting data generated by the parallel processes of genomic sequencing and rely heavily on computers.
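As a toy illustration of mapping: in the spirit of the shredded-book analogy, each short read is placed at the position in the reference where it matches. (Real aligners such as BWA use indexed data structures and tolerate mismatches, not naive exact search; the sequences below are made up.)

```python
# A stand-in for the 3.2-billion-letter human reference genome.
reference = "ACGTACGTTAGCCGATAGGCTTACGA"

# Short sequenced fragments ("shreds") to be mapped.
reads = ["TAGCCGA", "CTTACGA", "ACGTACG"]

# Naive mapping: report the first position where each read matches exactly.
for read in reads:
    pos = reference.find(read)
    print(read, "maps to position", pos)
```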
By analyzing a single tumor, we can find out how that tumor’s genomic information compares to thousands of tumors.
The first step is to look over all the shreds that were matched to the library book and see where they differ. Sometimes it’s a one nucleotide change, the equivalent of a spelling difference. A change in spelling may change the meaning (bit vs bat) or not (color vs colour). Similarly, changes to DNA can have a major effect, a minor effect, or no observable effect. The changes can be on a bigger scale, like missing or added words, or missing or duplicated chapters. Even among those larger scale changes, some have large effects and others don’t. The bioinformatician makes a list of the changes predicted to be important and indicates which ones have a known meaning or (better) are known to respond to a certain treatment. A cancer biologist then considers this information in the context of everything they know about various kinds of cancer.
The data from sequencing RNA gives us information about how genes are activated or expressed (i.e. used by the cell to make proteins).
Unlike one person comparing shreds of their book to the library’s copy, this analysis is more similar to a group of people who each photocopy their favorite parts of Shakespeare’s collected works from the library. If they all put the photocopied pages in the same shredder after they read them, after we matched the shreds back to the original book, we could tell which plays or sonnets were most popular. The equivalent biological information can tell us what genes are activated in a cancer, and we can compare that to data from other individuals with cancer. The combination of levels of gene activation gives us a profile, and can help us see whether a cancer is similar to what is expected, or is different on a molecular level. Again, bioinformaticians and cancer biologists work together to identify any ramifications this comparison might have on treating the person’s cancer.
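The "popularity count" idea maps onto a simple tally. A toy sketch using the Shakespeare analogy, not an actual RNA-seq pipeline (the titles stand in for gene names):

```python
from collections import Counter

# Each mapped RNA read is attributed to the gene (here, the play or sonnet)
# it came from; more reads mean higher expression.
mapped_reads = ["Hamlet", "Sonnet 18", "Hamlet", "Macbeth", "Hamlet", "Sonnet 18"]

expression_profile = Counter(mapped_reads)
print(expression_profile.most_common(1))  # [('Hamlet', 3)], the most "expressed"
```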
MedBook is an application platform for user-friendly bioinformatic tools. We live in a world where important genomic research is being done, but applying that research in a meaningful way is challenging. For many doctors and researchers outside of the genomics field, genomic research remains out of reach.
The MedBook vision is to offer easy-to-use interfaces for peer-reviewed, industry-standard analyses of patient data. Doctors and researchers without prior knowledge of bioinformatic tools can load private genomic and clinical data and use MedBook to analyze the information.
The English Language Arts (ELA) Common Core Standards are based on the College and Career Readiness standards, the predecessor to the new Common Core Standards. Laid out in a grade-specific fashion, ELA standards define end-of-year expectations and a cumulative progression that enables students to meet college and career readiness expectations by the end of high school. As students advance through grades K-12, they are expected to meet each year’s ELA Common Core Standards by further developing skills and understandings mastered in preceding grades.
The Common Core Standards for English Language Arts and Literacy
Similar to the Common Core Standards for Mathematics, the English Language Arts Common Core Standards do not dictate a specific curriculum. Instead, standards for grade levels K-8 provide specific learning achievement expectations for students at each grade level. In the later grades ELA Standards allow for more flexibility by using two-year bands of achievement goals. In other words, grades 9-10 and 11-12 share ELA achievement Common Core Standards.
Under each grade level, the new Common Core Standards for English Language Arts break standards into four strands:
Reading- The Standards ensure that students are ready for the demands of college- and career-level reading through the use of literary and informational texts. These ELA Reading Standards focus on text complexity to ensure students can read at multiple levels. The expectation is that students will learn to build knowledge, gain insights, explore possibilities, and broaden their perspective through reading. Sample texts are provided to aid in lesson planning; however, a reading list is not provided.
Writing- ELA Writing Standards emphasize the ability to write logical arguments based on well-researched evidence and sound reasoning. The Standards ensure opinion writing extends down to even the earliest grades. Assessments require students to differentiate between types of writing at all grade levels.
Speaking and Listening- The Standards will ensure all students master speaking and listening skills in a variety of real world settings. Students will gain, evaluate, and present increasingly complex information, ideas, and evidence through listening and speaking in one-on-one, small-group, and whole-class settings. The ELA Speaking and Listening Standards will focus on formal presentations and informal discussions.
Language- The Standards will require students to grow their vocabularies through a mix of direct instruction, conversations, and reading. In preparation for real life experiences in post-secondary education and the workforce, these ELA Language Standards will allow students to determine word meanings, appreciate the nuances of words, and steadily expand their repertoire of words and phrases.
A different set of ELA achievement standards is present for each strand by grade level. For example, third grade students have a set of ELA achievements specific to Language, another set of ELA achievements for Speaking and Listening, another for Writing and a fourth set for Reading. This integrated model of literacy allows for conceptual clarity by breaking down the processes of communication. The new Common Core Standards for ELA require that the instruction of these four strands be a shared responsibility within the school. In other words, teachers of other subjects will partner with English Language Arts teachers to make sure students achieve these goals.
Indicators of College and Career Readiness
Ultimately, the Common Core Standards for English Language Arts specify that students who are college and career ready in Reading, Writing, Speaking, Listening, and Language are able to:
- Demonstrate independence
- Build strong content knowledge
- Respond to the varying demands of audience, task, purpose, and discipline
- Comprehend and critique
- Value evidence
- Use technology and digital media strategically and capably
- Come to understand other perspectives and cultures
To read extended definitions and requirements of the Common Core Standards of English Language Arts and a full list of the grade level proficiencies, you can visit the Common Core Standards for English Language Arts.
More About the Common Core State Standards:
The Common Core State Standards: New Approach to Student Assessment»
The Common Core State Standards: An Overview»
The Common Core State Standards: Mathematics»
How to solder a diode
A diode is a two-electrode electrical element whose conductivity depends on the direction of electric current. Diodes, including LEDs, are widely used in electronics and in home-made electrical devices. When assembling a circuit that uses diodes, a few rules should be kept in mind.
You will need:
A diode, soldering flux, tin or solder, a soldering iron, wire cutters, tweezers, and a sponge
Instructions: how to solder a diode
Select a diode with the required parameters and inspect it to determine its polarity. Each diode has two terminals, a positive ("plus") and a negative ("minus") one. The longer lead marks the positive terminal and the shorter one the negative terminal. If a diode is soldered into the circuit with the wrong polarity, nothing serious usually happens; the circuit simply will not work.
Mark the place for the diode on the board. If you are using a ready-made board, use the standard mounting holes. If the board is homemade, drill mounting holes in a location convenient with respect to the layout of the other circuit elements. It is advisable to prepare a wiring diagram beforehand that schematically shows where each electrical element attaches.
Prepare wires. They are needed to connect the diode to other elements. It is desirable that the wires differ in color, which makes it easier to keep track of the polarity of the connections. Choose wire no thicker than 0.75 mm.
Insert the diode into the board. If you are mounting a circuit consisting of several diodes, place them so that all the long (positive) leads line up on one side and the short (negative) leads on the other.
Secure the diode by bending its leads slightly outward. If the leads are too long, trim them with the wire cutters.
Plug in the soldering iron and soak the sponge with water. Once the iron is hot, coat its working part (the tip) with a thin layer of solder (tin) and wipe it with the damp sponge to remove the remnants of old solder. During soldering, periodically wipe the tip on the damp sponge to keep it clean.
Place the soldering tip against the diode lead and the board pad to heat the joint. Heating the joint for longer than about two seconds is not recommended, or the diode may be damaged.
Bring solder to the joint. After the required amount of solder has melted, take the solder wire away from the joint. Keep the soldering iron on the joint for another second so that the solder spreads evenly across the soldered leads. Wait until the joint cools. The connection is ready.
A typical axial-lead resistor
Working principle: electric resistance
Two common schematic symbols
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, to divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators. Fixed resistors have resistances that only change slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements (such as a volume control or a lamp dimmer), or as sensing devices for heat, light, humidity, force, or chemical activity.
Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds and forms. Resistors are also implemented within integrated circuits.
The electrical function of a resistor is specified by its resistance: common commercial resistors are manufactured over a range of more than nine orders of magnitude. The nominal value of the resistance falls within the manufacturing tolerance, indicated on the component.
- 1 Electronic symbols and notation
- 2 Theory of operation
- 3 Nonideal properties
- 4 Fixed resistor
- 5 Variable resistors
- 6 Measurement
- 7 Standards
- 8 Resistor marking
- 9 Electrical and thermal noise
- 10 Failure modes
- 11 See also
- 12 References
- 13 External links
Electronic symbols and notation
Two typical schematic diagram symbols are as follows:
IEC resistor symbol
The notation to state a resistor's value in a circuit diagram varies.
One common scheme is the letter and digit code for resistance values following IEC 60062. It avoids using a decimal separator and replaces the decimal separator with a letter loosely associated with SI prefixes corresponding with the part's resistance. For example, 8K2 as part marking code, in a circuit diagram or in a bill of materials (BOM) indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example 15M0 for three significant digits. When the value can be expressed without the need for a prefix (that is, multiplicator 1), an "R" is used instead of the decimal separator. For example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω.
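The letter-and-digit scheme is easy to decode mechanically. A minimal sketch covering only the R/K/M multipliers used in the examples above, not the full IEC 60062 table (function name is illustrative):

```python
def parse_rkm(code):
    """Decode an RKM-style resistance code such as '8K2', '1R2', '18R', or '15M0'.
    The multiplier letter doubles as the decimal separator."""
    multipliers = {"R": 1, "K": 1_000, "M": 1_000_000}
    for letter, mult in multipliers.items():
        if letter in code:
            whole, _, frac = code.partition(letter)
            value = float((whole or "0") + "." + (frac or "0"))
            return value * mult
    raise ValueError(f"no multiplier letter in {code!r}")

print(round(parse_rkm("8K2")))     # 8200
print(round(parse_rkm("1R2"), 1))  # 1.2
print(round(parse_rkm("18R")))     # 18
print(round(parse_rkm("15M0")))    # 15000000
```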
Theory of operation
The behaviour of an ideal resistor is dictated by the relationship specified by Ohm's law:

V = I · R
Ohm's law states that the voltage (V) across a resistor is proportional to the current (I), where the constant of proportionality is the resistance (R). For example, if a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 / 300 = 0.04 amperes flows through that resistor.
The ohm (symbol: Ω) is the SI unit of electrical resistance, named after Georg Simon Ohm. An ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm (1 mΩ = 10−3 Ω), kilohm (1 kΩ = 103 Ω), and megohm (1 MΩ = 106 Ω) are also in common usage.
Series and parallel resistors
The total resistance of resistors connected in series is the sum of their individual resistance values.
The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors.
For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor and a 15 ohm resistor produces 1/(1/10 + 1/5 + 1/15) ohms of resistance, or 30/11 ≈ 2.727 ohms.
A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other. Some complex networks of resistors cannot be resolved in this manner, requiring more sophisticated circuit analysis. Generally, the Y-Δ transform, or matrix methods can be used to solve such problems.
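The series and parallel rules above translate directly into code. A minimal sketch (function names are illustrative):

```python
def series(*rs):
    # Series: resistances simply add.
    return sum(rs)

def parallel(*rs):
    # Parallel: reciprocal of the sum of reciprocals.
    return 1 / sum(1 / r for r in rs)

print(series(10, 5, 15))              # 30
print(round(parallel(10, 5, 15), 3))  # 2.727, the 30/11 ohm example above
```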
At any instant, the power P (watts) consumed by a resistor of resistance R (ohms) is calculated as P = V · I, where V (volts) is the voltage across the resistor and I (amps) is the current flowing through it. Using Ohm's law, the two other forms can be derived: P = V²/R = I²·R. This power is converted into heat, which must be dissipated by the resistor's package before its temperature rises excessively.
Resistors are rated according to their maximum power dissipation. Discrete resistors in solid-state electronic systems are typically rated as 1/10, 1/8, or 1/4 watt. They usually absorb much less than a watt of electrical power and require little attention to their power rating.
Resistors required to dissipate substantial amounts of power, particularly used in power supplies, power conversion circuits, and power amplifiers, are generally referred to as power resistors; this designation is loosely applied to resistors with power ratings of 1 watt or greater. Power resistors are physically larger and may not use the preferred values, color codes, and external packages described below.
If the average power dissipated by a resistor is more than its power rating, damage to the resistor may occur, permanently altering its resistance; this is distinct from the reversible change in resistance due to its temperature coefficient when it warms. Excessive power dissipation may raise the temperature of the resistor to a point where it can burn the circuit board or adjacent components, or even cause a fire. There are flameproof resistors that fail (open circuit) before they overheat dangerously.
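The rating check described above is a one-line application of the power formulas. A sketch (the 1 kΩ / 12 V numbers are illustrative):

```python
def power_dissipated(v=None, i=None, r=None):
    """P = V*I, with the V = I*R substitutions from Ohm's law."""
    if v is not None and i is not None:
        return v * i
    if i is not None and r is not None:
        return i**2 * r
    if v is not None and r is not None:
        return v**2 / r
    raise ValueError("need two of v, i, r")

# A 1 kOhm resistor across 12 V dissipates 144/1000 = 0.144 W:
# too much for a 1/8 W part, acceptable for a 1/4 W part.
p = power_dissipated(v=12, r=1000)
print(p, p <= 0.125, p <= 0.25)
```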
Since poor air circulation, high altitude, or high operating temperatures may occur, resistors may be specified with higher rated dissipation than is experienced in service.
All resistors have a maximum voltage rating; this may limit the power dissipation for higher resistance values.
Practical resistors have a series inductance and a small parallel capacitance; these specifications can be important in high-frequency applications. In a low-noise amplifier or pre-amp, the noise characteristics of a resistor may be an issue.
The temperature coefficient of the resistance may also be of concern in some precision applications.
The unwanted inductance, excess noise, and temperature coefficient are mainly dependent on the technology used in manufacturing the resistor. They are not normally specified individually for a particular family of resistors manufactured using a particular technology. A family of discrete resistors is also characterized according to its form factor, that is, the size of the device and the position of its leads (or terminals) which is relevant in the practical manufacturing of circuits using them.
Practical resistors are also specified as having a maximum power rating which must exceed the anticipated power dissipation of that resistor in a particular circuit: this is mainly of concern in power electronics applications. Resistors with higher power ratings are physically larger and may require heat sinks. In a high-voltage circuit, attention must sometimes be paid to the rated maximum working voltage of the resistor. While there is no minimum working voltage for a given resistor, failure to account for a resistor's maximum rating may cause the resistor to incinerate when current is run through it.
Through-hole components typically have "leads" (pronounced /liːdz/) leaving the body "axially," that is, on a line parallel with the part's longest axis. Others have leads coming off their body "radially" instead. Other components may be SMT (surface mount technology), while high power resistors may have one of their leads designed into the heat sink.
Carbon composition resistors (CCR) consist of a solid cylindrical resistive element with embedded wire leads or metal end caps to which the lead wires are attached. The body of the resistor is protected with paint or plastic. Early 20th-century carbon composition resistors had uninsulated bodies; the lead wires were wrapped around the ends of the resistance element rod and soldered. The completed resistor was painted for color-coding of its value.
The resistive element is made from a mixture of finely powdered carbon and an insulating material, usually ceramic. A resin holds the mixture together. The resistance is determined by the ratio of the fill material (the powdered ceramic) to the carbon. Higher concentrations of carbon, which is a good conductor, result in lower resistance. Carbon composition resistors were commonly used in the 1960s and earlier, but are not popular for general use now as other types have better specifications, such as tolerance, voltage dependence, and stress. Carbon composition resistors change value when stressed with over-voltages. Moreover, if internal moisture content, from exposure for some length of time to a humid environment, is significant, soldering heat creates a non-reversible change in resistance value. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance. Their non-inductive construction is a benefit in voltage pulse reduction and surge protection applications. However, if never subjected to overvoltage or overheating, these resistors were remarkably reliable considering the component's size.
Carbon composition resistors are still available, but comparatively quite costly. Values ranged from fractions of an ohm to 22 megohms. Due to their high price, these resistors are no longer used in most applications. However, they are used in power supplies and welding controls.
A carbon pile resistor is made of a stack of carbon disks compressed between two metal contact plates. Adjusting the clamping pressure changes the resistance between the plates. These resistors are used when an adjustable load is required, for example in testing automotive batteries or radio transmitters. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. A carbon pile resistor can be incorporated in automatic voltage regulators for generators, where the carbon pile controls the field current to maintain relatively constant voltage. The principle is also applied in the carbon microphone.
A carbon film is deposited on an insulating substrate, and a helix is cut in it to create a long, narrow resistive path. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ m), can provide a wide range of resistance values. Compared to carbon composition they feature low noise, because of the precise distribution of the pure graphite without binding. Carbon film resistors feature a power rating range of 0.125 W to 5 W at 70 °C. Resistances available range from 1 ohm to 10 megohm. The carbon film resistor has an operating temperature range of −55 °C to 155 °C. It has 200 to 600 volts maximum working voltage range. Special carbon film resistors are used in applications requiring high pulse stability.
Printed carbon resistor
Carbon composition resistors can be printed directly onto printed circuit board (PCB) substrates as part of the PCB manufacturing process. Although this technique is more common on hybrid PCB modules, it can also be used on standard fibreglass PCBs. Tolerances are typically quite large, and can be in the order of 30%. A typical application would be non-critical pull-up resistors.
Thick and thin film
Thick film resistors became popular during the 1970s, and most SMD (surface mount device) resistors today are of this type. The resistive element of thick films is 1000 times thicker than thin films, but the principal difference is how the film is applied to the cylinder (axial resistors) or the surface (SMD resistors).
Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. The film is then etched in a similar manner to the old (subtractive) process for making printed circuit boards; that is, the surface is coated with a photo-sensitive material, then covered by a pattern film, irradiated with ultraviolet light, and then the exposed photo-sensitive coating is developed, and underlying thin film is etched away.
Thick film resistors are manufactured using screen and stencil printing processes.
Because the time during which the sputtering is performed can be controlled, the thickness of the thin film can be accurately controlled. The type of material is also usually different, consisting of one or more ceramic (cermet) conductors such as tantalum nitride (TaN), ruthenium oxide (RuO2), lead oxide (PbO), bismuth ruthenate (Bi2Ru2O7), nickel chromium (NiCr), or bismuth iridate (Bi2Ir2O7).
The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors are usually specified with tolerances of 0.1, 0.2, 0.5, or 1%, and with temperature coefficients of 5 to 25 ppm/K. They also have much lower noise levels, on the level of 10–100 times less than thick film resistors.
Thick film resistors may use the same conductive ceramics, but they are mixed with sintered (powdered) glass and a carrier liquid so that the composite can be screen-printed. This composite of glass and conductive ceramic (cermet) material is then fused (baked) in an oven at about 850 °C.
Thick film resistors, when first manufactured, had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. Temperature coefficients of thick film resistors are high, typically ±200 or ±250 ppm/K; a 40 kelvin (70 °F) temperature change can change the resistance by 1%.
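The ±250 ppm/K figure can be checked directly: ΔR/R = TCR × ΔT, so 250 ppm/K over a 40 K rise gives a 1% shift. A sketch using a linear temperature model (the 10 kΩ value is illustrative):

```python
def resistance_at(r_nominal, tcr_ppm_per_k, delta_t_k):
    # Linear temperature model: R(T) = R0 * (1 + TCR * dT)
    return r_nominal * (1 + tcr_ppm_per_k * 1e-6 * delta_t_k)

# A 10 kOhm thick-film part at +250 ppm/K, warmed by 40 K.
r = resistance_at(10_000, 250, 40)
print(round(r, 6))  # 10100.0, a 1% shift
```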
Thin film resistors are usually far more expensive than thick film resistors. For example, SMD thin film resistors, with 0.5% tolerances, and with 25 ppm/K temperature coefficients, when bought in full size reel quantities, are about twice the cost of 1%, 250 ppm/K thick film resistors.
A common type of axial-leaded resistor today is the metal-film resistor. Metal Electrode Leadless Face (MELF) resistors often use the same technology, and are also cylindrically shaped but are designed for surface mounting. Note that other types of resistors (e.g., carbon composition) are also available in MELF packages.
Metal film resistors are usually coated with nickel chromium (NiCr), but might be coated with any of the cermet materials listed above for thin film resistors. Unlike thin film resistors, the material may be applied using different techniques than sputtering (though this is one of the techniques). Also, unlike thin-film resistors, the resistance value is determined by cutting a helix through the coating rather than by etching. (This is similar to the way carbon resistors are made.) The result is a reasonable tolerance (0.5%, 1%, or 2%) and a temperature coefficient that is generally between 50 and 100 ppm/K. Metal film resistors possess good noise characteristics and low non-linearity due to a low voltage coefficient. Also beneficial are their tight tolerance, low temperature coefficient and long-term stability.
Metal oxide film
Metal-oxide film resistors are made of metal oxides, which results in a higher operating temperature and greater stability and reliability than metal film resistors. They are used in applications with high endurance demands.
Wirewound resistors are commonly made by winding a metal wire, usually nichrome, around a ceramic, plastic, or fiberglass core. The ends of the wire are soldered or welded to two caps or rings, attached to the ends of the core. The assembly is protected with a layer of paint, molded plastic, or an enamel coating baked at high temperature. These resistors are designed to withstand unusually high temperatures of up to 450 °C. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. For higher power wirewound resistors, either a ceramic outer case or an aluminum outer case on top of an insulating layer is used – if the outer case is ceramic, such resistors are sometimes described as "cement" resistors, though they do not actually contain any traditional cement. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor overheats at a fraction of the power dissipation if not used with a heat sink. Large wirewound resistors may be rated for 1,000 watts or more.
Because wirewound resistors are coils they have more undesirable inductance than other types of resistor, although winding the wire in sections with alternately reversed direction can minimize inductance. Other techniques employ bifilar winding, or a flat thin former (to reduce cross-section area of the coil). For the most demanding circuits, resistors with Ayrton-Perry winding are used.
Applications of wirewound resistors are similar to those of composition resistors with the exception of the high frequency. The high frequency response of wirewound resistors is substantially worse than that of a composition resistor.
The primary resistance element of a foil resistor is a special alloy foil several micrometers thick. Since their introduction in the 1960s, foil resistors have had the best precision and stability of any resistor available. One of the important parameters influencing stability is the temperature coefficient of resistance (TCR). The TCR of foil resistors is extremely low, and has been further improved over the years. One range of ultra-precision foil resistors offers a TCR of 0.14 ppm/°C, tolerance ±0.005%, long-term stability (1 year) 25 ppm, (3 years) 50 ppm (further improved 5-fold by hermetic sealing), stability under load (2000 hours) 0.03%, thermal EMF 0.1 μV/°C, noise −42 dB, voltage coefficient 0.1 ppm/V, inductance 0.08 μH, capacitance 0.5 pF.
An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Current-measuring instruments, by themselves, can usually accept only limited currents. To measure high currents, the current passes through the shunt across which the voltage drop is measured and interpreted as current. A typical shunt consists of two solid metal blocks, sometimes brass, mounted on an insulating base. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. Large bolts threaded into the blocks make the current connections, while much smaller screws provide volt meter connections. Shunts are rated by full-scale current, and often have a voltage drop of 50 mV at rated current. Such meters are adapted to the shunt full current rating by using an appropriately marked dial face; no change needs to be made to the other parts of the meter.
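The shunt arithmetic is plain Ohm's-law division. A sketch using the 50 mV-at-rated-current convention mentioned above (the 100 A rating and 23 mV reading are illustrative):

```python
def shunt_current(v_drop_mv, full_scale_a, v_rated_mv=50.0):
    """Infer the current through a shunt from the measured voltage drop,
    for a shunt rated to drop v_rated_mv at full_scale_a."""
    r_shunt = (v_rated_mv / 1000.0) / full_scale_a  # shunt resistance in ohms
    return (v_drop_mv / 1000.0) / r_shunt           # I = V / R

# A 100 A / 50 mV shunt (0.5 milliohm) showing a 23 mV drop carries 46 A.
print(round(shunt_current(23.0, 100.0), 3))
```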
In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Such industrial grade resistors can be as large as a refrigerator; some designs can handle over 500 amperes of current, with a range of resistances extending lower than 0.04 ohms. They are used in applications such as dynamic braking and load banking for locomotives and trams, neutral grounding for industrial AC distribution, control loads for cranes and heavy equipment, load testing of generators and harmonic filtering for electric substations.
A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
A potentiometer or pot is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob or by a linear slider. It is called a potentiometer because it can be connected as an adjustable voltage divider to provide a variable potential at the terminal connected to the tapping point. A volume control for an audio device is a common use of a potentiometer. A typical low power potentiometer (see drawing) is constructed of a flat resistance element (B) of carbon composition, metal film, or conductive plastic, with a springy phosphor bronze wiper contact (C) which moves along the surface. An alternate construction is resistance wire wound on a form, with the wiper sliding axially along the coil. These have lower resolution, since as the wiper moves the resistance changes in steps equal to the resistance of a single turn.
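The voltage-divider relationship that gives the potentiometer its name can be checked numerically. The 12 V supply and 10 kΩ element split below are hypothetical values:

```python
# A potentiometer connected as a voltage divider: the wiper voltage is
# Vout = Vin * R_lower / (R_lower + R_upper), where R_lower is the portion
# of the element between the wiper and the grounded end.

def divider_voltage(v_in, r_lower, r_upper):
    """Voltage at the wiper of a potentiometer used as a divider."""
    return v_in * r_lower / (r_lower + r_upper)

# Hypothetical 10 kOhm pot with the wiper 25% of the way up, 12 V across it:
print(divider_voltage(12.0, 2500.0, 7500.0))  # 3.0 V
```

This idealization ignores loading: any load connected to the wiper appears in parallel with R_lower and pulls the output below the unloaded value.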
High-resolution multiturn potentiometers are used in a few precision applications. These have wirewound resistance elements typically wound on a helical mandrel, with the wiper moving on a helical track as the control is turned, making continuous contact with the wire. Some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial, and can typically achieve three digit resolution. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
Resistance decade boxes
A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 100 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.
There are various devices whose resistance changes with various quantities. The resistance of NTC thermistors exhibit a strong negative temperature coefficient, making them useful for measuring temperatures. Since their resistance can be large until they are allowed to heat up due to the passage of current, they are also commonly used to prevent excessive current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that is subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10¹² in response to changes in applied pressure.
The value of a resistor can be measured with an ohmmeter, which may be one function of a multimeter. Usually, probes on the ends of test leads connect to the resistor. A simple ohmmeter may apply a voltage from a battery across the unknown resistor (with an internal resistor of a known value in series) producing a current which drives a meter movement. The current, in accordance with Ohm's law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms. A digital multimeter, using active electronics, may instead pass a specified current through the test resistance. The voltage generated across the test resistance in that case is linearly proportional to its resistance, which is measured and displayed. In either case the low-resistance ranges of the meter pass much more current through the test leads than do high-resistance ranges, in order for the voltages present to be at reasonable levels (generally below 10 volts) but still measurable.
Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections. One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor. Some laboratory quality ohmmeters, especially milliohmmeters, and even some of the better digital multimeters sense using four input terminals for this purpose, which may be used with special test leads. Each of the two so-called Kelvin clips has a pair of jaws insulated from each other. One side of each clip applies the measuring current, while the other connections are only to sense the voltage drop. The resistance is again calculated using Ohm's Law as the measured voltage divided by the applied current.
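Once lead resistance is kept out of the sensing path, the four-terminal measurement reduces to a direct application of Ohm's law; a minimal sketch with hypothetical values:

```python
# Four-terminal (Kelvin) resistance measurement: a known current is forced
# through one pair of terminals and the voltage is sensed on the other pair,
# so the drop across the current-carrying leads does not enter the result.

def measured_resistance(sensed_volts, applied_amps):
    """Ohm's law: R = V / I, using the separately sensed voltage."""
    return sensed_volts / applied_amps

# Hypothetical 5 milliohm resistor carrying a calibrated 1 A: the sense
# terminals see 5 mV regardless of lead resistance in the force path.
print(measured_resistance(0.005, 1.0))  # 0.005 ohm
```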
Resistor characteristics are quantified and reported using various national standards. In the US, MIL-STD-202 contains the relevant test methods to which other standards refer.
There are various standards specifying properties of resistors for use in equipment:
- IEC 60062 (IEC 62) / DIN 40825 / BS 1852 / IS 8186 / JIS C 5062 etc. (Resistor color code, letter and digit code, date code)
- EIA RS-279 / DIN 41429 (Resistor color code)
- IEC 60063 (IEC 63) / JIS C 5063 (Standard E series values)
- MIL-PRF-39007 (Fixed power, established reliability)
- MIL-PRF-55342 (Surface-mount thick and thin film)
- MIL-R-11 (standard canceled)
- MIL-R-39017 (Fixed, General Purpose, Established Reliability)
- MIL-PRF-32159 (zero ohm jumpers)
- UL 1412 (fusing and temperature limited resistors)
There are other United States military procurement MIL-R- standards.
The primary standard for resistance, the "mercury ohm", was initially defined in 1884 as a column of mercury 106.3 cm long and 1 square millimeter in cross-section, at 0 degrees Celsius. Difficulties in precisely measuring the physical constants to replicate this standard result in variations of as much as 30 ppm. From 1900 the mercury ohm was replaced with a precision machined plate of manganin. Since 1990 the international resistance standard has been based on the quantized Hall effect discovered by Klaus von Klitzing, for which he won the Nobel Prize in Physics in 1985.
Resistors of extremely high precision are manufactured for calibration and laboratory use. They may have four terminals, using one pair to carry an operating current and the other pair to measure the voltage drop; this eliminates errors caused by voltage drops across the lead resistances, because no charge flows through voltage sensing leads. It is important in small value resistors (100–0.0001 ohm) where lead resistance is significant or even comparable with respect to resistance standard value.
Most axial resistors use a pattern of colored stripes to indicate resistance, which also indicate tolerance, and may also be extended to show temperature coefficient and reliability class. Cases are usually tan, brown, blue, or green, though other colors are occasionally found such as dark red or dark gray. The power rating is not usually marked and is deduced from the size.
Carbon resistors can have three, four, five, or six color bands. The first two bands represent the first two digits of the value in ohms. The third band of a three- or four-banded resistor is the multiplier; a fourth band denotes the tolerance (which, if absent, is ±20%). For five- and six-banded resistors, the third band is a third digit, the fourth band the multiplier, and the fifth the tolerance. The sixth band of a six-banded resistor represents the temperature coefficient.
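As an illustration of the band scheme above, here is a minimal decoder for the four-band case. It is a sketch only: the gold/silver fractional multipliers and the five/six-band formats are omitted, and the function name is arbitrary; the digit and tolerance assignments are the standard ones:

```python
# Decode a four-band resistor color code: two significant digits, a
# power-of-ten multiplier band, and a tolerance band (absent = +/-20%).

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
TOLERANCE = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0, None: 20.0}

def decode_4band(b1, b2, b3, b4=None):
    """Return (resistance in ohms, tolerance in percent)."""
    value = (10 * DIGITS[b1] + DIGITS[b2]) * 10 ** DIGITS[b3]
    return value, TOLERANCE[b4]

print(decode_4band("yellow", "violet", "red", "gold"))  # (4700, 5.0): 4.7 kOhm +/-5%
print(decode_4band("brown", "black", "orange"))         # (10000, 20.0): 10 kOhm, no band
```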
Surface-mount resistors are marked numerically, if they are big enough to permit marking; more-recent small sizes are impractical to mark.
Early 20th century resistors, essentially uninsulated, were dipped in paint to cover their entire body for color-coding. A second color of paint was applied to one end of the element, and a color dot (or band) in the middle provided the third digit. The rule was "body, tip, dot", providing two significant digits for value and the decimal multiplier, in that sequence. Default tolerance was ±20%. Closer-tolerance resistors had silver (±10%) or gold-colored (±5%) paint on the other end.
Early resistors were made in more or less arbitrary round numbers; a series might have 100, 125, 150, 200, 300, etc. Resistors as manufactured are subject to a certain percentage tolerance, and it makes sense to manufacture values that correlate with the tolerance, so that the actual value of a resistor overlaps slightly with its neighbors. Wider spacing leaves gaps; narrower spacing increases manufacturing and inventory costs to provide resistors that are more or less interchangeable.
A logical scheme is to produce resistors in a range of values which increase in a geometric progression, so that each value is greater than its predecessor by a fixed multiplier or percentage, chosen to match the tolerance of the range. For example, for a tolerance of ±20% it makes sense to have each resistor about 1.5 times its predecessor, covering a decade in 6 values. In practice the factor used is 1.4678, giving values of 1.47, 2.15, 3.16, 4.64, 6.81, 10 for the 1–10 decade (a decade is a range increasing by a factor of 10; 0.1–1 and 10–100 are other examples); these are rounded in practice to 1.5, 2.2, 3.3, 4.7, 6.8, 10; followed by 15, 22, 33, … and preceded by … 0.47, 0.68, 1. This scheme has been adopted as the E6 series of the IEC 60063 preferred number values. There are also E12, E24, E48, E96 and E192 series for components of progressively finer resolution, with 12, 24, 48, 96 and 192 different values within each decade. The actual values used are in the IEC 60063 lists of preferred numbers.
A resistor of 100 ohms ±20% would be expected to have a value between 80 and 120 ohms; its E6 neighbors are 68 (54–82) and 150 (120–180) ohms, a sensible spacing. E6 is used for ±20% components; E12 for ±10%; E24 for ±5%; E48 for ±2%; E96 for ±1%; E192 for ±0.5% or better. Resistors are manufactured in values from a few milliohms to about a gigaohm in IEC 60063 ranges appropriate for their tolerance. Manufacturers may sort resistors into tolerance classes based on measurement. Accordingly, a selection of 100-ohm resistors with a tolerance of ±10% might not lie just around 100 ohms (but no more than 10% off) as one would expect (a bell curve), but rather fall into two groups: either 5 to 10% too high or 5 to 10% too low (but not closer to 100 ohms than that), because any resistors the factory had measured as being less than 5% off would have been marked and sold as resistors with ±5% tolerance or better. When designing a circuit, this may become a consideration. This process of sorting parts based on post-production measurement is known as "binning", and can be applied to components other than resistors (such as speed grades for CPUs).
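The geometric-progression construction described above can be reproduced in a few lines; `e_series_raw` is a hypothetical helper name, and rounding to two decimals yields the intermediate values quoted in the text (the commercial E6 values 1.5, 2.2, … are further hand-rounded from these):

```python
# Generate one decade of an E-series by geometric progression: n values,
# each 10**(1/n) times its predecessor (for E6, a factor of about 1.4678).

def e_series_raw(n):
    """Raw geometric values for the 1-10 decade, rounded to 2 decimals."""
    return [round(10 ** (i / n), 2) for i in range(n)]

print(e_series_raw(6))   # [1.0, 1.47, 2.15, 3.16, 4.64, 6.81]
print(len(e_series_raw(12)))  # 12 values per decade for E12
```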
Earlier power wirewound resistors, such as brown vitreous-enameled types, however, were made with a different system of preferred values, such as some of those mentioned in the first sentence of this section.
Surface mounted resistors of larger sizes (metric 1608 and above) are printed with numerical values in a code related to that used on axial resistors. Standard-tolerance surface-mount technology (SMT) resistors are marked with a three-digit code, in which the first two digits are the first two significant digits of the value and the third digit is the power of ten (the number of zeroes). For example:
- 334 = 33 × 10⁴ ohms = 330 kilohms
- 222 = 22 × 10² ohms = 2.2 kilohms
- 473 = 47 × 10³ ohms = 47 kilohms
- 105 = 10 × 10⁵ ohms = 1 megohm
Resistances less than 100 ohms are written: 100, 220, 470. The final zero represents ten to the power zero, which is 1. For example:
- 100 = 10 × 10⁰ ohms = 10 ohms
- 220 = 22 × 10⁰ ohms = 22 ohms
Sometimes these values are marked as 10 or 22 to prevent a mistake.
Resistances less than 10 ohms have 'R' to indicate the position of the decimal point (radix point). For example:
- 4R7 = 4.7 ohms
- R300 = 0.30 ohms
- 0R22 = 0.22 ohms
- 0R01 = 0.01 ohms
Precision resistors are marked with a four-digit code, in which the first three digits are the significant figures and the fourth is the power of ten. For example:
- 1001 = 100 × 10¹ ohms = 1.00 kilohm
- 4992 = 499 × 10² ohms = 49.9 kilohms
- 1000 = 100 × 10⁰ ohms = 100 ohms
000 and 0000 sometimes appear as values on surface-mount zero-ohm links, since these have (approximately) zero resistance.
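The marking rules above can be condensed into one small decoder. This is a sketch (the function name is arbitrary) covering the three-digit, four-digit, and 'R'-for-decimal-point conventions just described:

```python
# Decode surface-mount resistor value codes:
#   - 'R' marks the decimal point (4R7 = 4.7 ohms, R300 = 0.30 ohms)
#   - otherwise the last digit is the power of ten and the leading
#     digits are the significant figures (334 = 33 * 10**4).

def decode_smd(code):
    """Return the resistance in ohms for an SMD marking string."""
    if "R" in code:
        return float(code.replace("R", "."))
    digits, mult = code[:-1], code[-1]
    return int(digits) * 10 ** int(mult)

print(decode_smd("334"))   # 330000 (330 kilohms)
print(decode_smd("4R7"))   # 4.7
print(decode_smd("1001"))  # 1000 (precision four-digit code)
print(decode_smd("000"))   # 0 (zero-ohm link)
```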
More recent surface-mount resistors are too small, physically, to permit practical markings to be applied.
Industrial type designation
Format: [two letters]<space>[resistance value (three digit)]<nospace>[tolerance code(numerical – one digit)]
Steps to decode the resistance value:
- First two letters give the power dissipation capacity.
- Next three digits give the resistance value.
- First two digits are the significant values.
- Third digit is the multiplier.
- Final digit gives the tolerance.
If a resistor is coded:
- EB1041: power dissipation capacity = 1/2 watt, resistance value = 10 × 10⁴ ohms ±10% = between 9 × 10⁴ ohms and 11 × 10⁴ ohms.
- CB3932: power dissipation capacity = 1/4 watt, resistance value = 39 × 10³ ohms ±20% = between 31.2 × 10³ ohms and 46.8 × 10³ ohms.
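The decoding steps above can be sketched as a small function. Only the two letter codes appearing in the examples are included (EB = 1/2 W, CB = 1/4 W), and the tolerance digits follow the same examples (1 = ±10%, 2 = ±20%); a fuller table would be needed for real use:

```python
# Decode the industrial type designation: [two letters][two digits]
# [multiplier digit][tolerance digit]. Tables below are limited to the
# codes shown in the worked examples and are assumptions beyond that.

POWER_W = {"EB": 0.5, "CB": 0.25}   # watts, from the examples above
TOL_PCT = {"1": 10, "2": 20}        # percent, from the examples above

def decode_industrial(code):
    """Return (power in watts, resistance in ohms, tolerance in percent)."""
    letters, rest = code[:2], code[2:]
    value = int(rest[:2]) * 10 ** int(rest[2])
    return POWER_W[letters], value, TOL_PCT[rest[3]]

print(decode_industrial("EB1041"))  # (0.5, 100000, 10)
print(decode_industrial("CB3932"))  # (0.25, 39000, 20)
```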
Electrical and thermal noise
In amplifying faint signals, it is often necessary to minimize electronic noise, particularly in the first stage of amplification. As a dissipative element, even an ideal resistor naturally produces a randomly fluctuating voltage, or noise, across its terminals. This Johnson–Nyquist noise is a fundamental noise source which depends only upon the temperature and resistance of the resistor, and is predicted by the fluctuation–dissipation theorem. Using a larger value of resistance produces a larger voltage noise, whereas a smaller value of resistance generates more current noise, at a given temperature.
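The Johnson–Nyquist voltage noise follows v_rms = √(4·k_B·T·R·Δf), which makes the dependence on temperature, resistance, and bandwidth explicit; a quick numeric check (300 K is an assumed room temperature):

```python
# Johnson-Nyquist thermal noise voltage of an ideal resistor:
#   v_rms = sqrt(4 * k_B * T * R * bandwidth)
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohms, bandwidth_hz, temp_kelvin=300.0):
    """RMS open-circuit noise voltage across an ideal resistor."""
    return math.sqrt(4 * K_B * temp_kelvin * resistance_ohms * bandwidth_hz)

# A 1 kOhm resistor over a 10 kHz bandwidth at room temperature:
print(thermal_noise_vrms(1e3, 1e4))  # roughly 4.07e-07 V, i.e. about 0.4 microvolts
```

Note the square-root scaling: quadrupling the resistance only doubles the voltage noise, matching the text's point that larger resistances trade voltage noise against current noise.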
The thermal noise of a practical resistor may also be larger than the theoretical prediction, and that increase is typically frequency-dependent. Excess noise of a practical resistor is observed only when current flows through it. This is specified in units of μV/V/decade: μV of noise per volt applied across the resistor per decade of frequency. The μV/V/decade value is frequently given in dB, so that a resistor with a noise index of 0 dB exhibits 1 μV (rms) of excess noise for each volt across the resistor in each frequency decade. Excess noise is thus an example of 1/f noise. Thick-film and carbon composition resistors generate more excess noise than other types at low frequencies. Wire-wound and thin-film resistors are often used for their better noise characteristics. Carbon composition resistors can exhibit a noise index of 0 dB, while bulk metal foil resistors may have a noise index of −40 dB, usually making the excess noise of metal foil resistors insignificant. Thin-film surface-mount resistors typically have lower noise and better thermal stability than thick-film surface-mount resistors. Excess noise is also size-dependent: in general excess noise is reduced as the physical size of a resistor is increased (or multiple resistors are used in parallel), as the independently fluctuating resistances of smaller components tend to average out.
While not an example of "noise" per se, a resistor may act as a thermocouple, producing a small DC voltage differential across it due to the thermoelectric effect if its ends are at different temperatures. This induced DC voltage can degrade the precision of instrumentation amplifiers in particular. Such voltages appear in the junctions of the resistor leads with the circuit board and with the resistor body. Common metal film resistors show such an effect at a magnitude of about 20 µV/°C. Some carbon composition resistors can exhibit thermoelectric offsets as high as 400 µV/°C, whereas specially constructed resistors can reduce this number to 0.05 µV/°C. In applications where the thermoelectric effect may become important, care has to be taken to mount the resistors horizontally to avoid temperature gradients and to mind the air flow over the board.
The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. Damage to resistors most often occurs due to overheating when the average power delivered to it greatly exceeds its ability to dissipate heat (specified by the resistor's power rating). This may be due to a fault external to the circuit, but is frequently caused by the failure of another component (such as a transistor that shorts out) in the circuit connected to the resistor. Operating a resistor too close to its power rating can limit the resistor's lifespan or cause a significant change in its resistance. A safe design generally uses overrated resistors in power applications to avoid this danger.
Low-power thin-film resistors can be damaged by long-term high-voltage stress, even below maximum specified voltage and below maximum power rating. This is often the case for the startup resistors feeding the SMPS integrated circuit.
When overheated, carbon-film resistors may decrease or increase in resistance. Carbon film and composition resistors can fail (open circuit) if running close to their maximum dissipation. This is also possible but less likely with metal film and wirewound resistors.
There can also be failure of resistors due to mechanical stress and adverse environmental factors including humidity. If not enclosed, wirewound resistors can corrode.
Surface mount resistors have been known to fail due to the ingress of sulfur into the internal makeup of the resistor. This sulfur chemically reacts with the silver layer to produce non-conductive silver sulfide. The resistor's impedance goes to infinity. Sulfur resistant and anti-corrosive resistors are sold into automotive, industrial, and military applications. ASTM B809 is an industry standard that tests a part's susceptibility to sulfur.
An alternative failure mode can be encountered where large value resistors are used (hundreds of kilohms and higher). Resistors are not only specified with a maximum power dissipation, but also for a maximum voltage drop. Exceeding this voltage causes the resistor to degrade slowly reducing in resistance. The voltage dropped across large value resistors can be exceeded before the power dissipation reaches its limiting value. Since the maximum voltage specified for commonly encountered resistors is a few hundred volts, this is a problem only in applications where these voltages are encountered.
Variable resistors can also degrade in a different manner, typically involving poor contact between the wiper and the body of the resistance. This may be due to dirt or corrosion and is typically perceived as "crackling" as the contact resistance fluctuates; this is especially noticed as the device is adjusted. This is similar to crackling caused by poor contact in switches, and like switches, potentiometers are to some extent self-cleaning: running the wiper across the resistance may improve the contact. Potentiometers which are seldom adjusted, especially in dirty or harsh environments, are most likely to develop this problem. When self-cleaning of the contact is insufficient, improvement can usually be obtained through the use of contact cleaner (also known as "tuner cleaner") spray. The crackling noise associated with turning the shaft of a dirty potentiometer in an audio circuit (such as the volume control) is greatly accentuated when an undesired DC voltage is present, often indicating the failure of a DC blocking capacitor in the circuit.
- Circuit design
- Dummy load
- Electrical impedance
- Iron-hydrogen resistor
- Piezoresistive effect
- Shot noise
- Trimmer (electronics)
- Snoring is the noise produced during sleep by vibrations of the soft tissues at the back of your nose and throat.
- The noise is created by turbulent flow of air through narrowed air passages.
- Snoring is now recognized both for its potential to disturb the individual's sleep and for its association with other health problems.
- In patients who snore, a more serious problem can occur when those same soft tissues block the air passages at the back of the throat during sleep.
- This interferes with the ability to breathe. This condition is obstructive sleep apnea (OSA), and it can directly affect your health.
8 Common Causes of Snoring
- The prevalence of obstructive sleep apnea increases with age.
- Most people diagnosed with obstructive sleep apnea are obese. Increased neck fat is thought to narrow the airway, making breathing more difficult.
- Men are 7-10 times more likely to have obstructive sleep apnea than women.
- More African Americans have OSA than do whites.
- Most people with obstructive sleep apnea are older than 40 years, although it is also common in children with large tonsils. Weight gain and a decrease in muscle tone occur with aging, and these may play a role in increasing the incidence of OSA.
- Sleep apnea is more common in women who are postmenopausal.
- Family history and genetics play a role.
- Certain neuromuscular conditions may increase the chance of obstructive sleep apnea, as do other medical conditions such as sinus infections, allergies, colds and nasal obstruction, and hypothyroidism (underactive thyroid gland).
Medically Reviewed by a Doctor on 11/20/2017
G Richard Braen, MD, FACEP
Steven C Gabaeff, MD, FAAEM
Francisco Talavera, PharmD, PhD
Thomas Rebbecchi, MD, FAAEM
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
May 20, 1997
Explanation: The Egg Nebula is taking a beating. Like a baby chick pecking its way out of an egg, the star in the centre of the Egg Nebula is casting away shells of gas and dust as it slowly transforms itself into a white dwarf star. The above picture was taken by the newly installed Near Infrared Camera and Multi-Object Spectrometer (NICMOS) now on board the Hubble Space Telescope. A thick torus of dust now surrounds the star through which the shell gas is escaping. Newly expelled gas shells escape in beams as can be seen in the original HST image and in the recently released image shown above. This infrared image is coded in false colour to highlight two different types of emission. The red light represents hot hydrogen gas heated by the collisions of expanding shells. The blue light represents light from the central star scattered by the dust in the nebula. It takes light about 3000 years to reach us from the Egg Nebula, which is hundreds of times the size of our Solar System.
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/GSFC & Michigan Tech. U.
Asbestos Facts: Types of asbestos
Asbestos is a group of naturally occurring fibrous minerals. There are three types that have proven to be commercially useful.
Chrysotile – White Asbestos
Amosite/Grunerite – Brown Asbestos
Crocidolite – Blue Asbestos
Asbestos has been attractive to the construction industry because of its physical strength, thermal and electrical insulation properties, non-combustibility and resistance to chemical erosion.
Blue and brown asbestos are the two most dangerous forms and have been banned in the UK since 1985. White asbestos has been banned from general use since 1999, although it is still used for a small number of specialised applications.
How do I know if I’ve found Asbestos?
It is impossible to determine if a material contains asbestos just by looking at it. There are over 3500 uses for asbestos in construction and building, and 85% of commercial buildings contain asbestos in their structure. It occurs in many guises; some of the more common forms are:
- Sprayed or loose packed in ceiling voids
- Sprayed coatings and lagging of pipes and boilers
- Sprayed coating for fire protection and insulation in ducts, partitions etc and around structural steel
- Insulation boards used for fire wall partitions, fire doors and ceiling tiles
- Asbestos cement for roof sheeting, cold water tanks, gutters, pipes and in decorative plaster finishes
Asbestos can also be found in brake linings, floor coverings, fire blankets, toilet cisterns, storage heaters and a wide range of other products.
Asbestos Facts: I’ve found Asbestos – what do I do?
Do you think you have found Asbestos? Please call Zeras to get some advice. We may need to conduct a survey. An asbestos survey is an effective way to help you manage asbestos in your premises. Our surveys provide accurate information about the location, amount and type of any asbestos-containing materials (ACMs).
While not a legal requirement, it is recommended that you arrange a survey if you suspect there are ACMs in your premises. Alternatively, you may choose to presume there is asbestos in your premises and would then need to take all appropriate precautions for any work that takes place. However, it is good practice to have an asbestos survey carried out so you can be absolutely sure whether asbestos is present or not.
An asbestos survey identifies:
- The location of any asbestos-containing materials in the building
- The type of asbestos they contain
- The condition these materials are in
Asbestos Facts: Is my health at risk?
Asbestos is classed as a hazardous substance. It only poses a risk to human health when the fibres are released and inhaled. In other words, asbestos in good condition is not a risk unless it is disturbed in some way that would release fibres. For example, during drilling the fibres may be released in a cloud that is not visible to the human eye. Some of these microscopic fibres are small enough to pass through the lungs and into the lower abdominal cavity.
Asbestos kills over 4000 people each year in the UK. Furthermore, this figure is expected to rise to over 10,000 in the next three years. Inhaling the fibres causes diseases such as asbestosis, lung cancer and mesothelioma.
Symptoms can occur from 15 to 60 years after exposure. Most noteworthy: there is no treatment for asbestos related disease. It is known that high level exposure will cause disease; however the consequences of low level exposure are not fully understood. |
Icy Visitor Makes First Appearance to Inner Solar System
Comet ISON, May 2013:
One of several Hubble observations
For thousands of years, humans have recorded sightings of icy visitors sweeping across Earth's skies. These celestial wanderers are comets, dusty balls of ice that have traveled billions of miles from their frigid home in the outer solar system. They periodically visit the inner solar system during their long, looping journeys around the Sun. These "dirty snowballs," as they are sometimes called, hail from the Oort Cloud, a swarm of billions to trillions of comets that surrounds our solar system.
The most famous comet to appear in our skies is Halley's Comet, which visits the inner solar system every 76 years. Now, another comet is making an appearance, and you just might have a chance to see it this fall.
Comet ISON's grand entrance
Comet ISON in April:
Another Hubble view
Comet ISON is making its first voyage into the inner solar system, and has traveled for about 5 million years from its home in the Oort Cloud. Officially named Comet C/2012 S1, it has been nicknamed for the organization of its discoverers. ISON stands for the International Scientific Optical Network, a group of observatories in ten countries who have organized to detect, monitor, and track objects in space.
Astronomers have been tracking the comet with many telescopes, including the Earth-orbiting Hubble Space Telescope, since it was first detected in September 2012. Hubble has made a number of observations of Comet ISON over the past several months, examining its size and the structure of the surrounding cloud of gas, called the "coma." The coma consists of ices evaporated from the surface of the comet, which are then pushed back by the solar wind into a tail.
Calling all comet watchers
A comet's anatomy
changes as it approaches the Sun
Beginning in late October, sky watchers might not need a professional telescope to view the comet. Comet ISON may become bright enough to be seen with binoculars or a backyard telescope. Through November, the time to view the comet is in the morning before sunrise.
The first weeks of December should be the best show, if the comet survives its very close approach to the Sun on November 28. Comets are unpredictable. The Sun’s heat could break up Comet ISON, making it dimmer than expected.
If the comet survives its brush with the Sun, it could develop a long tail and brighten to the point where it can be seen by the unaided eye. In December, the comet will appear in both the early morning and early evening in the northern hemisphere, but it will rise with the Sun in the southern hemisphere. After that, it will start fading fast as it travels farther away from Earth.
Comets visible to the human eye are rare. The most recent naked-eye comet was Comet McNaught in 2007, largely visible in the southern hemisphere.
All eyes on ISON
When and where can I see Comet ISON?
Observatories, such as Hubble, will continue to take images of the comet. Hubble will observe Comet ISON again during October. Astronomers are using Hubble to study the comet’s icy nucleus, shrouded deep within the gaseous coma. Based on Hubble images, astronomers have estimated that the nucleus is only three or four miles across. The size is important, as a larger comet is more likely to survive its close passage by the Sun.
Of course, many other telescopes around the world will be watching as well. In fact, during early October, the viewing will literally be “out of this world.” NASA missions at Mars will be looking as Comet ISON sweeps past the red planet. More than a dozen NASA missions, both at Earth and Mars, will join the observing campaign, adding to the data from thousands of ground-based telescopes. What would really be nice is if billions of human eyes could join the viewing as well.
To find out more about Comet ISON, check out the ISONblog at |
2: Motion and Forces
2.1: Use measurements to develop an understanding of the concepts of speed, velocity and acceleration; distinguish translation from rotation. Use the concept of force as described by Newton's laws to predict how these quantities are influenced. Describe the gravitational force and its role in the motion of terrestrial and celestial objects. Describe forces by fluids, including the concepts of pressure and buoyancy.
Fan Cart Physics
Inclined Plane - Rolling Objects
Shoot the Monkey
3: Conservation Principles: Momentum, Energy and Mass
3.1: Analyze experiments that illustrate the law of conservation of energy and the law of conservation of momentum. Describe qualitatively and quantitatively the concepts of energy, work and power to describe the exchange of energy in systems. Know the circumstances under which mass is conserved.
Energy of a Pendulum
Inclined Plane - Sliding Objects
Roller Coaster Physics
4: Temperature and Thermal Energy Transfer
4.1: Distinguish thermal energy from temperature. Describe thermal energy transfer from one object to another by conduction, convection and radiation. Use the molecular kinetic theory of matter to describe the properties of gases and to describe the exchange of energy during phase changes. Apply the concepts of conservation of energy to include thermal energy.
Energy Conversion in a System
Temperature and Particle Motion
5: Vibrations, Waves and Sound
5.1: Describe the fundamental characteristics of mechanical vibrations and waves, and understand the relationships between frequency, period, amplitude, wavelength and wave speed. Distinguish longitudinal from transverse waves. Recognize that wave speed depends on the properties of the medium through which the wave travels.
6: Electricity and Magnetism
6.1: Describe the electrical forces between charged objects. Use the concept of the electric field to explain the interaction between charged particles. Develop a working model, through experiments with electrical circuits, of patterns of current flow, and of resistance, voltage and power. Describe the relationship between magnetism and electric current, and distinguish AC from DC electricity.
7: Wave Nature of Light
7.1: Describe the wave nature of light and the parts of the electromagnetic spectrum, as well as diffraction and polarization. Explain the formation of shadows, specular and diffuse reflection, and refraction and image formation by lens and mirrors.
Ray Tracing (Lenses)
Ray Tracing (Mirrors)
8: Atomic and Subatomic Particles
8.1: Describe the structure of the atom using Bohr's theory. Describe the parts of the nucleus and the basis for fission, fusion and nuclear energy. Explain that sub-atomic particles constitute the limit of our knowledge of matter and energy.
Bohr Model of Hydrogen
Bohr Model: Introduction
Correlation last revised: 1/20/2017 |
The story of the Maya begins during the Fourth Ice Age about 60,000 years ago. At this
time the earth's ice caps were much larger than today, glaciers extended as far south as
the central United States and no tropical climate existed anywhere on our planet. The
so-called tropics were covered with savannah and grassland. So much water was trapped in
the ice caps that the level of the sea was lower than today and a land bridge about
miles wide connecting Asia and North America at the Bering Strait was exposed. The first
humans to inhabit the Americas came across this land bridge. At first, travel south was
impeded by vast walls of ice but gradually, as the ice melted, people began to spread south.
It is believed the first humans reached Central America about 15,000 years ago. The
first identifiable culture, Clovis, existed around 10,000 BC. Some stone tools dating back
to 9,000 BC have been found in Guatemala. Around this time, the Fourth Ice Age was drawing
to a close and the climate was gradually warming up enabling humans to begin eating more
plants and less meat. This change was underway around 8,000 BC.
From 8,000 BC to 2,000 BC the inhabitants of Central America gradually became more
agrarian and they domesticated beans, corn, peppers, squash and other plants. During this
time there was still no jungle, just savannah and grassland and some trees. Evidence
indicates that a tropical jungle climate appeared in Central America only quite recently,
after the Mayan civilization was well underway. Towards the end of this period, some
recognizably Mayan villages appeared, along with pottery and ceramics. Some villages had
The period from 1500 BC to 300 AD is called the "Pre-Classic" period of Mayan
culture. During this period the Mayan language developed. The Mayans experienced
population growth and larger towns were constructed.
Meanwhile, the Olmec culture was developing in southern Mexico. The Olmec is viewed as
the "mother culture" in Central America; they developed a system of writing, the
long-count calendar and a complex religion. The Olmecs had a considerable influence on the
fledgling Maya culture. The Maya adopted many of the Olmec skills and practices and
developed them further. It seems that the mixture of the Olmec and Mayan cultures touched
off an explosion of cultural development. Archaeologists are not sure of the cause but
from 300 BC to 300 AD, tremendous development occurred in architecture, writing,
and calendrics throughout Mayan lands and the population increased. The great cities of El
Mirador, Kaminaljuyú, Río Azúl and Tikal all were founded during this time. Mayan cities
often went to war against each other.
The Classic Period of Maya development is the 600 years from 300 AD to 900 AD. The Maya
refined the long-count calendar and developed a more advanced written language. The Maya
had a tendency to tear down buildings and temples and rebuild new ones over the rubble of
the old. Some buildings are built on several layers of previous buildings. All of the
great Mayan cities as they appear today were built during the Classic Period, over the
remains of previous construction. Architecture and culture blossomed during the Classic
Period. The Maya began to accurately record important events on carved stelae. Excellent
examples of Mayan stelae and art can be seen at Quirigua, an easy day-trip from Rio Dulce.
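The long-count calendar mentioned here is, at its core, a mixed-radix count of days from a fixed epoch: 20 kins make a uinal, 18 uinals make a tun, and the higher places run in base 20. A minimal sketch of the place-value arithmetic (the sample date is illustrative):

```python
# Long Count place values: baktun, katun, tun, uinal, kin.
# (20 kin = 1 uinal, 18 uinal = 1 tun, 20 tun = 1 katun, 20 katun = 1 baktun)
PLACE_VALUES = (144000, 7200, 360, 20, 1)

def long_count_to_days(baktun: int, katun: int, tun: int, uinal: int, kin: int) -> int:
    """Total days elapsed since the Long Count epoch."""
    digits = (baktun, katun, tun, uinal, kin)
    return sum(d * v for d, v in zip(digits, PLACE_VALUES))

# 9.17.0.0.0 -- a date written in the Classic Period style
print(long_count_to_days(9, 17, 0, 0, 0))  # 1418400 days since the epoch
```

The irregular 18-uinal place is what keeps the tun close to a solar year (360 days) while the rest of the system stays vigesimal.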
Early in the Classic Period, around 400 AD, the Maya became heavily influenced by the
civilization of Teotihuacan to the north. Teotihuacan was the most powerful culture in
Central Mexico. Much about this relationship is unclear but it appears to have been
beneficial to both civilizations because both prospered and developed at this time.
Evidence also exists that there was interaction and trade between Central American
cultures and European, African and Polynesian cultures -- well before the time of
Columbus. For more about this see Trans-Oceanic Diffusion.
Around the year 650 AD the civilization of Teotihuacan collapsed. This collapse
triggered an upset in the Mayan civilization. Apparently there was a struggle to fill the
power-vacuum left by the collapse of Teotihuacan. Now free of its relationship to
Teotihuacan, the Maya reached their highest levels of sophistication. Art, astronomy and
religion reached new heights. The population grew and cities expanded in this era of
greatest Mayan prosperity. Astronomy and mathematics advanced and the Mayans were able to
measure the orbits of celestial bodies with unprecedented accuracy. The Maya predicted the
motions of Venus to a degree of precision only equaled in recent times. The Maya traded
with cultures as far away as South America and the southern US. Mayan cities were much
larger and more populous than any city in Europe. The Maya's greatest artistic works in
pottery and jade were made during this pinnacle of Mayan development.
Looking at the grey ruins of Mayan architecture today, it is hard to imagine that they
were originally painted in bright colors, red, white, yellow and green, inside and out.
Certain internal chambers have been preserved and microscopic traces of paint on the
stonework have enabled archaeologists to reconstruct what Tikal and other sites
probably looked like.
However, this peak of Mayan development was to be short lived. By 750 AD problems arose
and the collapse was underway. There are many theories about what happened. By this time,
the climate was certainly changing from grassland and savannah into the tropical climate
we now associate with Guatemala. Perhaps there were food shortages. In any event, the
population dropped and the cities were gradually abandoned. By 830 AD construction and
development had come to a halt. Some cities in Belize and Yucatan survived longer but in
Guatemala the population abandoned the cities and redistributed itself into the farming
villages of the highlands that we see today. |
New biofuels recipe: iron with a pinch of palladium(Read article summary)
Scientists have combined iron and palladium to form a new catalyst for converting biomass into fuels fit for today's gas tanks. It's part of an effort to make biofuels more energy dense, and therefore more competitive with fossil fuels.
Biofuels are renewable and clean alternatives to fossil fuels, but they can be difficult to produce because their source, biomass, contains a fair amount of oxygen. That makes them less stable, too viscous and less efficient than the fuels they’re meant to replace.
Using iron as a catalyst to remove the oxygen is inexpensive, but the water in organic biomass can rust the iron, canceling its effectiveness. Another metal, palladium, is rust resistant, but it’s not as efficient as iron in removing the oxygen, and it’s far more expensive than plentiful iron.
So researchers at Washington State University (WSU) and the U.S. Department of Energy’s Pacific Northwest National Laboratory (PNNL) decided to combine the two. (Related: New Cellulosic Ethanol Plant Commercializes Renewable Fuel)
Evoking images of Julia Child in a lab coat and goggles, they added just a pinch of palladium to iron, a recipe that efficiently removes oxygen from biomass without the rust. A meal fit for a gourmet, as it were, at the cost of a cheeseburger.
The paper on their work was chosen as the cover story in the October issue of the scientific journal ACS Catalysis. In it, the researchers said they discovered that combining iron with very small amounts of palladium helped to cover the catalyst’s surface with hydrogen, which accelerates the process of turning biomass into biofuel.
“With biofuels, you need to remove as much oxygen as possible to gain energy density,” Yong Wang, who led the research, told the WSU news department. “Of course, in the process, you want to minimize the costs of oxygen removal.”
Kitchen metaphors aside, Wang’s team didn’t limit themselves to skillets, knives and can openers, but relied instead on high-resolution transmission electron microscopy, X-ray photoelectron spectroscopy and extended X-ray absorption fine structure spectroscopy. In other words, very sophisticated gear.
These tools led them to understand how the atoms on the surface of the two different catalysts – one made solely of iron, the other made solely of palladium – react with the biomass lignin, the woody material found in most plants. Wang said this led to the idea of combining the two metals. (Related: U.S. Firm Angers Dubliners With Plan For Waste-to-Energy Generator)
“The synergy between the palladium and the iron is incredible,” said Wang, who holds a joint appointment with Pacific Northwest National Laboratory and WSU. “When combined, the catalyst is far better than the metals alone in terms of activity, stability and selectivity.”
The goal of the research is to create what are known as “drop-in biofuels” – direct substitutes for gasoline, diesel fuel and jet fuel that can be used interchangeably with fossil fuels in today’s vehicles. So far, that effort has failed because today’s biofuels have too much oxygen and are thus less efficient than fossil fuels and can even damage systems built for fossil fuels.
To date, Wang’s team has converted biomass into biofuel only in a laboratory. Now, he said, he’d like to expand his work and move it to an environment that’s more like a biofuel production plant.
By Andy Tully of Oilprice.com
More Top Reads From Oilprice.com:
- The Global Outlook For Biofuels
- Rethink Biofuel Sources, Not Biofuels Subsidies
- Biofuel Industry Presses White House To Strengthen Renewable Fuel Program |
Dr. Sean S Da Silva, Medical Superintendent, Cornea, Cataract, and Refractive Surgeon, The Eye Foundation, Bangalore India talks about the common eye problems seen in children.
Refractive errors are vision problems that arise due to changes in eye shape that prevent the eye from focusing properly. The change in eye shape can include the length of the eyeball, corneal shape changes or ageing of the lens.
The common refractive errors include:
- Myopia (nearsightedness): The distance vision is blurry, but the close vision is clear. It is often inherited and usually discovered in childhood.
- Hyperopia (farsightedness): The close vision is blurry, but the distance vision is quite clear.
- Astigmatism: An individual with astigmatism experiences distorted or blurred vision. It causes blurry vision at all distances.
Everyone, especially children should be examined for any refractive errors. A child is asked to read a chart with different lenses, to find out the issue. It is possible to have more than one refractive error at the same time.
“A child might experience difficulty in seeing, reading or experience blurred vision. Some other symptoms can include headaches, eyestrain, double vision, squinting, haziness, glare or halos around bright lights. Most often, poor academic performance could be a result of weak eyes. A child might be facing difficulties while reading and seeing, but is unable to pinpoint the cause of it. Parent and teachers should get the child’s eye tested for vision problems as soon as possible.”
Refractive errors are usually corrected by prescription glasses or contact lenses.
Contact your eye doctor immediately if you notice any sudden changes in vision. |
Behaviorists believe that all behaviors are acquired through conditioning that occurs in response to interactions with the environment. They therefore conclude that environmental stimuli can be used to train, shape and change behaviors, according to Kendra Cherry, author of “The Everything Psychology Book.”
Behaviorists also believe that internal mental states such as emotion and cognition are too subjective for study. Consequently, they seek to investigate only behaviors that are observable and measurable using scientific and systematic methods.
There are two major types of behavioral conditioning. The first is classical conditioning. It takes a naturally occurring stimulus and response, and pairs them with a neutral stimulus. The previously neutral stimulus eventually evokes the same response without the presence of the natural stimulus.
The second type of conditioning is operant or instrumental conditioning. This method of learning occurs through reinforcements and punishments. An association is formed between a given behavior and a subsequent positive or negative consequence, depending on whether the behavior is to be encouraged or discouraged.
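The reinforcement-and-punishment loop described above can be caricatured in code. This is a toy illustration, not a model drawn from the text: a behavior's propensity is nudged up by a rewarding consequence and down by a punishing one.

```python
# Toy sketch of operant conditioning as a propensity update:
# reinforced behaviors become more likely, punished ones less likely.

def reinforce(propensity: float, reward: float, rate: float = 0.1) -> float:
    """Nudge a behavior's propensity toward 1 if rewarded, toward 0 if punished."""
    target = 1.0 if reward > 0 else 0.0
    return propensity + rate * (target - propensity)

p = 0.5  # initial chance the behavior is emitted
for _ in range(20):
    p = reinforce(p, reward=1.0)   # consistent positive reinforcement
print(f"after reinforcement: {p:.2f}")  # rises toward 1.0

for _ in range(20):
    p = reinforce(p, reward=-1.0)  # consistent punishment
print(f"after punishment: {p:.2f}")  # decays back toward 0.0
```

The update rate stands in for how quickly the association between behavior and consequence forms; real conditioning schedules are, of course, far richer than this.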
Therapeutic techniques rooted in behaviorism include discrete trial training, intensive behavioral intervention, behavior analysis and token economies. Psychologists use these approaches to change maladaptive and harmful behaviors in children and adults.
The principles of behavioral psychology, another term for behaviorism, are also used by animal trainers, parents and teachers, who apply its theory of learning to teach new behaviors and discourage unwanted ones.
The innermost four planets in our own solar system (the terrestrial planets Mercury, Venus, Earth and Mars) are rocky and made out of silicates. However, the astronomer Marc Kuchner (Princeton University, USA) has proposed that planets made mostly of carbon compounds could exist. This theory has gained popularity and is said to be built on reasonable ideas.
Carbon planets may form in disks orbiting young stars where either there is a lack of oxygen, or there is an abundance of carbon. When old stars "die", they spew out large quantities of carbon. It's quite possible that there are many carbon planets orbiting stars close to the ancient galactic center, possibly even in globular clusters orbiting the Milky Way galaxy.
As time passes and more and more generations of stars end their existence, the carbon fused in their cores will become more abundant. Thus, the concentration of carbon planets will increase. Perhaps at some point, all newly formed planets will be carbon planets.
Until we know more about carbon planets, there will be many speculations. Some of the information presented below is based on a short e-mail conversation I had with Marc Kuchner in March 2005, and on the report released from the Aspen conference (link found below).
Physical properties and anatomy
Their carbonaceous nature would make them different from silicate planets. The atmospheres might be smoggy, filled with carbon mono- or dioxide and other gases.
If the temperatures are low enough (below 350 K), it is possible that gases could photochemically synthesize into long-chain hydrocarbons, which can rain down to the surface. These hydrocarbons range from compounds like methane (which can easily freeze, if the temperature is cold enough) to gasoline, crude oil, tar, or asphalt.
The surface might be covered with tar-like precipitation. There might be a great lack of water on a carbon planet.
Equivalents of the geological features present on Earth, such as mountains and rivers, will likely be present on a carbon planet too, though with different compositions. The rivers could consist of oils, for example, and the mountains of diamonds and silicon carbides.
As long as water is somehow supplied to these planets (from cometary impacts for example), carbon planets should be able to support life.
Below the crust where the pressure is high enough, it is very likely that a thick layer of diamonds exists. It is possible that during volcanic eruptions diamonds from the interior would come up to the surface, creating mountains of diamonds and silicon carbides.
At the center carbon planets may have an iron core, or possibly a core of steel since the carbon may have reacted with the iron. The layer above the core will contain carbides (silicon and titanium carbides) that might be molten.
Back to top.
Known Carbon planets
Presently there are no known carbon planets, though astronomers suspect that the three Earth-sized planets found around the pulsar PSR 1257+12 may be carbon planets. They could have formed after the supernova explosion. There is also a possibility that the planets were gas giants prior to the supernova explosion and that they were stripped of their gas layers, leaving only the cores behind.
Some of the Neptune-sized planets could also be carbon planets. Carbon planets are believed to be frequent near the center of the galaxy, since the concentration of carbon is higher there.
NASA is planning to launch a mission called TPF (Terrestrial Planet Finder) in the year 2015. This observatory, which will be much larger than the Hubble Space Telescope, will be able to detect such planets. The spectra of these planets would lack water, but show the presence of carbon monoxide, methane and other carbonaceous substances.
More: Marc Kuchner's report - TPF - Carbon planet interior.
Back to top.
Previous: Extrasolar Missions: Detecting Extrasolar Planets.
Next: Gas Giants.
Space art 1: A hypothetical extrasolar carbon planet. You are seeing the south pole of the planet to the left, where methane has condensed into ice.
The white dots on the surface are reflections from layers of diamonds on the surface. The seas found at the center of the image are made of oil of various hydrocarbons. The dark areas on the planet are marks of tar-like precipitation.
The clouds are also made of various hydrocarbons. |
They believed that by making the world safe for democracy abroad, they would prove their mettle at long last and come back and … have democracy here at home. They returned in 1919 to what became known as the ‘Red Summer.’ There were so many race riots up in the Northern states and the brutal, terrible lynchings that occurred in the South. And the lynchings became endemic, so much so that they began almost to become a separate judicial system in the Southern states. So what these soldiers returned to really was a situation … even worse than when they had left. Soldiers were lynched and burned while wearing their military uniforms.
Image via the US Army Center for Military History |
The title of “oldest evidence of life” has been provisionally claimed by a growing and confusing crowd of discoveries recently. At least until the last few years, the crown rested comfortably on a 3.47 billion-year-old rock from Western Australia called the Apex Chert. First described in the early 1990s, this rock contained a variety of microscopic structures that looked for all the world like the fossilized remains of microbial life.
Like other finds in this category, the Apex Chert has seen its fair share of controversy as researchers skeptically poked and prodded. Just two years ago, we covered a study that concluded these microfossils were simply clever lookalikes created by minerals crystallizing near a hydrothermal vent. In that version of events, some carbon (which may or may not have come from living things) stuck to vaguely microbe-shaped mineral crystals.
A recent study led by William Schopf—who discovered the Apex Chert in the first place—brings newer tools to bear on the question. And the researchers believe the results show that these microfossils are not impostors.
Schopf and his team subjected 11 purported fossils from the original sample to an incredibly precise spot-measurement instrument that can determine the mix of carbon isotopes that are present. (It’s the same instrument that we once visited, in fact.)
The first question is simply whether the carbon in the fossils—and the random carbon particles that can be found around them—matches the isotope signature of carbon from living organisms. Biology is somewhat choosy when it comes to isotopes of carbon. The extra neutron in carbon-13 causes organisms to prefer its lighter version; non-biological chemical reactions are typically more indiscriminate. So an unusually low share of carbon-13 is an indicator of biological carbon.
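Geochemists usually express this comparison in the standard δ13C notation, which reports a measured 13C/12C ratio relative to the VPDB reference standard in parts per thousand. A minimal sketch of the arithmetic (the sample ratios below are illustrative, not values from the study):

```python
# delta-13C in per mil relative to the VPDB standard; the sample
# ratios used here are illustrative, not measurements from the paper.

VPDB_RATIO = 0.011180  # approximate 13C/12C ratio of the VPDB reference

def delta_13c(sample_ratio: float) -> float:
    """Return delta-13C in per mil for a measured 13C/12C ratio."""
    return (sample_ratio / VPDB_RATIO - 1.0) * 1000.0

# Biology prefers the lighter isotope, so biogenic carbon carries a
# lower 13C/12C ratio and hence a more negative delta-13C.
abiotic = delta_13c(0.011100)   # mildly depleted
biogenic = delta_13c(0.010800)  # strongly depleted, as in many biogenic samples

assert biogenic < abiotic < 0.0
print(f"abiotic-like:  {abiotic:.1f} per mil")
print(f"biogenic-like: {biogenic:.1f} per mil")
```

The more negative the δ13C value, the more strongly the carbon is depleted in carbon-13, which is why an unusually low share of the heavy isotope reads as a biological fingerprint.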
All the carbon in the samples passes this test. And the carbon inside the fossils contained even less carbon-13 than the random bits of carbonaceous stuff outside the fossils.
But the most interesting comparison is between the relevant fossil specimens. In the original study, Schopf identified five different types of fossils in the Apex Chert, which he suggested corresponded to five different species or types of microbial organisms. It turns out they each had distinct carbon isotope signatures. If these fossils were just lookalike mineral crystals coated in carbon, you would expect to see no consistent carbon isotope pattern—they should all be roughly the same. But if these were different types of organisms subsisting on different chemical fuels, it would make sense to see variations in the carbon isotopes.
The isotope signatures can actually hint at what these organisms would have been like. Two of them are within the range of photosynthetic, single-celled life.
The other three would match up with an interesting pair: methane-producing archaea and methane-consuming bacteria. That would be pretty cool, as the existence of these two types of life have been guessed at from carbon isotope measurements of very old rocks but never pinned to specific microbial fossils. Their presence would hint at the diversity of life, even in the early days. Then again, recent studies have claimed to find evidence of life from 3.7 or even 3.95 billion years ago—and that would make 3.47 billion-year-old lifeforms comparative spring chickens.
But as for the truth about the Apex Chert, the authors argue there is simply too much consistent evidence supporting the conclusion that these are real microbial fossils. The multiple lines of evidence make it more difficult to find a plausible non-biological explanation. They’ll now probably keep their place in an exclusive VIP section unless a better objection comes along. |
Fungi are made of thousands of hyphae, which are fiberlike cells. The hyphae form a mass called a mycelium.
There are more than 100,000 species of fungi. The more numerous varieties make up mushrooms, molds, and mildews.
Fungi live almost everywhere. Many fungi are parasites. Some fungi live on decaying material; these are called saprophytes. Others live together with plants in ways that benefit both. This mutual benefit is known as symbiosis.
Since fungi cannot produce their own food, they must take protein, carbohydrates, minerals, and other nutrients from the plants, animals, or decaying matter on which they live.
Fungi, for the most part, reproduce by forming spores. Some spores are produced by sexual reproduction (with the union of male and female cells) and some by asexual reproduction (without the union of male and female cells).
Yeasts can produce these spores, but usually reproduce by budding.
Fungi are important because of their ability to break down plant and animal matter in a process known as decomposition.
Fungi are also important in the production of cheese. Yeast is essential to the fermentation of alcoholic beverages and bread dough. The most important use of fungi is in the production of antibiotics, which are used to fight diseases and infections.
World Oceans Day is a global day of ocean celebration and collaboration for a better future. Worldwide, people are coming together today to discuss solutions to plastic pollution and preventing marine litter for a healthier ocean. Why celebrate World Oceans Day? The answer is striking in its simplicity: a healthy world ocean is critical to our survival.
Every year, World Oceans Day provides a unique opportunity to honor, help protect, and conserve the world’s oceans. Oceans are very important:
- They generate most of the oxygen we breathe
- They help feed us
- They regulate our climate
- They clean the water we drink
- They offer a pharmacopoeia of medicines
- They provide limitless inspiration!
For the Caribbean Region, the ocean represents the life blood for critical economic sectors including agriculture, tourism, fisheries, and transportation.
Mr Christopher Corbin of the CARICOM Secretariat commented on the significance of the celebration for the region: “[It] offers an opportunity for all of us as Caribbean people to reflect on the critical importance of oceans and the Caribbean Sea to our economies and to our societies. We need to highlight the actions that we take that cause negative impacts on the Caribbean Sea and its associated coastal and marine ecosystems and the fact that these impacts are not ‘Out of Sight – Out of Mind’, but already jeopardising the provision of essential goods and services.”
The Caribbean Sea has historically been the lifeblood of Caribbean people and continues to be the basis for social and economic development whether it is in sectors such as fishing, maritime transportation or tourism.
Mr Corbin added, “World Oceans Day also offers us an opportunity to showcase new and emerging opportunities e.g. wave and tidal energy potential, international telecommunication (through submarine cables) and for making the sustainable use of coastal and marine resources an integral part of our development agenda and in so doing ensuring that measures are put in place to safeguard this resource for future generations.”
The Ocean Project has promoted and coordinated World Oceans Day globally since 2002. Contact them to find out more and get involved. |
As space exploration advances, the design and manufacture of electronic devices capable of withstanding the severe conditions of space has become critically important. Printed circuit boards (PCBs) designed for space missions are essential to guaranteeing the reliability and functionality of onboard electronic systems. These PCBs are subject to mechanical stresses, radiation, vacuum, and extreme temperatures, which presents engineers and designers with unique challenges. This article explores the intricacies of space-grade PCB design and the solutions engineers use to overcome these barriers.
What are the challenges of space-grade PCBs?
To endure the severe environmental conditions of space, space-grade PCBs are specifically designed to function without exception in environments saturated with radiation, high-vibration conditions, and extreme temperatures. Precisely crafted with state-of-the-art materials and manufacturing techniques, these PCBs guarantee outstanding performance and dependability.
Space is characterized by significant variations in temperature. Satellites, probes, and rovers traverse various regions of space and are subjected to extremes of temperature and pressure: temperatures can range from -200°C in the shadow of a celestial body to over 200°C when exposed to direct sunlight. Surviving these swings requires sophisticated thermal management techniques and specialized materials with low coefficients of thermal expansion.
When confronted with extreme temperatures, engineers frequently employ ceramic printed circuit boards (PCBs). Ceramic materials are more resistant to temperature fluctuations due to their low coefficient of thermal expansion. Stability and dependability are exhibited by these materials amidst the extreme temperature fluctuations that are encountered in outer space.
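The practical effect of a low coefficient of thermal expansion (CTE) is easy to quantify. Here is a minimal sketch; the CTE figures in it are ballpark assumptions chosen for illustration, not vendor data:

```python
# Linear thermal expansion: dL = alpha * L * dT.
# CTE values below are illustrative ballpark figures (assumed):
# FR-4 in-plane ~14 ppm/C, alumina ceramic ~7 ppm/C.

def thermal_expansion_mm(length_mm, cte_ppm_per_c, delta_t_c):
    """Change in length for a span under a uniform temperature swing."""
    return length_mm * (cte_ppm_per_c * 1e-6) * delta_t_c

LENGTH_MM = 100.0   # a 100 mm trace/board span
DELTA_T_C = 400.0   # -200 C in shadow to +200 C in sunlight

for name, cte in [("FR-4", 14.0), ("alumina ceramic", 7.0)]:
    growth = thermal_expansion_mm(LENGTH_MM, cte, DELTA_T_C)
    print(f"{name}: {growth:.2f} mm of expansion over the full swing")
```

Halving the CTE halves the dimensional change, and it is exactly that mismatch between board and component expansion that cracks solder joints and plated through-holes over repeated thermal cycles.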
Ionizing radiation emanating from celestial sources and the sun permeates space, presenting a substantial peril to electronic components. Radiation can disrupt the functionality of PCBs and degrade the performance of semiconductors. To protect sensitive electronic components from the damaging effects of radiation, space-grade PCBs must be designed by engineers utilizing radiation-hardened materials, including ceramic substrates and specialized coatings.
During spacecraft launch and deployment, PCBs are subjected to extreme mechanical stresses. Structural damage may result from the vibrations generated throughout the launch phase and the deployment of solar arrays and other components. In order to mitigate this difficulty, designers incorporate shock-absorbing mechanisms, including flexible PCB materials and conformal coatings, to safeguard the electronic components' integrity.
Frequently composed of polyimide, flexible PCBs are more effective at absorbing vibrations and disturbances than their rigid counterparts. Conformal coatings insulate the printed circuit board (PCB) from physical harm throughout the processes of launch and deployment. In addition, meticulous PCB layout design is required to equitably distribute mechanical stresses.
Outgassing is a defect created during manufacturing in which gas becomes trapped within a PCB, forming cavities or blowholes that can impair the PCB's performance. The phenomenon occurs both during the wave/hand soldering procedure and when the circuit board is later subjected to a high-vacuum setting, and it is frequently the result of improper material selection and defective manufacturing. Since space is a near-perfect vacuum, devoid of air or any other medium, a defect like outgassing (Figure 1) can contaminate sensitive optical components, like cameras.
Figure 1: Outgassing created in the solder joints after manual soldering (Source: YouTube)
Materials utilized in the fabrication of vacuum-compatible PCBs have minimal outgassing properties. Composites such as polyimide and PTFE (Teflon) are frequently employed owing to their exceptionally low outgassing properties. These materials aid in the prevention of contamination in the vacuum of space and contribute to the electronic systems' long-term dependability.
Space and weight constraints
Due to the strict weight and size restrictions on spacecraft, the creation of electronic systems that are both compact and lightweight has become imperative. A delicate equilibrium must be maintained between size and functionality when designing space-grade PCBs; the architecture must be optimized to maximize the use of available space. By utilizing multi-layer printed circuit boards (PCBs), advanced miniaturization techniques, and three-dimensional packaging solutions, engineers are able to maintain the required performance despite these difficult constraints.
SMT and other advanced miniaturization techniques facilitate the fabrication of electronic components that are both more compact and lightweight. By addressing the challenge of limited physical dimensions, three-dimensional packaging solutions, such as System-in-Package (SiP) or chip-on-board (COB), enable the integration of multiple functions into a compact space.
Substrates for space PCBs
Specialized substrates capable of withstanding the harsh conditions of outer space are required for space-grade PCBs. The substrates most frequently employed are listed below.
Materials such as Alumina (Al2O3) and Aluminum Nitride (AlN) are commonly used ceramics. The low thermal expansion coefficients of ceramics render them exceptionally stable in the face of extreme temperatures. In addition, they are thermally conductive, which aids in heat dissipation. Ceramic substrates offer a resilient solution for space applications due to their intrinsic resistance to radiation.
Glass ceramic materials, including Low-Temperature Co-Fired Ceramics (LTCC), are distinguished by their superior electrical properties, minimal thermal expansion, and high thermal conductivity. LTCC is particularly well-suited for applications that necessitate the integration of numerous components into a solitary package and miniaturization.
Polyimide is a flexible and lightweight polymer. Polyimide substrates are well-suited for flexible PCBs, which can absorb mechanical stresses during launch and deployment. They also have good thermal stability, allowing them to withstand temperature variations. However, polyimide may not be suitable for applications with high radiation exposure.
To mitigate the effects of radiation, which can cause electronic components to malfunction or degrade over time, radiation-resistant materials are incorporated into space-grade PCBs. One example is radiation-hardened epoxy laminate, specially formulated to resist the damaging effects of ionizing radiation so that its electrical and mechanical properties are preserved even under exposure. This resistance to radiation-induced degradation ensures the durability and dependability of space-grade PCBs.
Additionally, copper alloys that possess exceptional resistance to radiation are utilized in space-grade printed circuit boards. Alloys such as copper-tungsten (CuW) and copper-molybdenum (CuMo) provide enhanced durability against degradation and embrittlement caused by radiation. They aid in the preservation of the electrical performance and structural integrity of PCBs in environments with high levels of radiation.
Copper foils are vital to the functionality of space-grade PCBs. Conductive layers are implemented in order to facilitate the transmission of electrical signals. High-performance copper foils are utilized in space-grade printed circuit boards (PCBs) to assure optimal signal integrity and reduce signal loss.
High-performance copper foils are distinguished by a number of essential qualities. Their elevated thermal conductivity facilitates effective dissipation of heat away from the PCB; in space applications, where components generate substantial heat and there is no air for convective cooling, this is of the utmost importance. These foils also have low insertion loss, reducing signal distortion and attenuation. This guarantees consistent signal transmission, even in applications involving high frequencies.
Metal Core PCBs
PCBs with a metal core, comprising a copper or aluminum core surrounded by a dielectric layer, are well-suited for applications that require efficient heat dissipation due to their high thermal conductivity. This is crucial in the space environment, where temperature control is difficult.
Rogers RO4000 series
A family of high-frequency laminates, including RO4350B and RO4003C, Rogers' materials are specifically engineered for use in microwave and RF environments. At high frequencies, the Rogers RO4000 series provides exceptional electrical efficacy. Frequently, these substrates are utilized in space missions that demand RF and microwave capabilities.
Polytetrafluoroethylene (PTFE) is widely recognized under the trade designation Teflon. Due to its low loss tangent and dielectric constant, this material is appropriate for high-frequency applications. Moreover, its exceptional resistance to chemicals and minimal outgassing characteristics render it well-suited for utilization in vacuum environments such as space.
Although FR-4, a prevalent epoxy-based substrate reinforced with glass fibers, finds extensive application in commercial printed circuit boards (PCBs), high-Tg (glass transition temperature) FR-4 variants are engineered to endure elevated temperatures. Compared to conventional FR-4, they exhibit greater stability in the face of extreme temperatures, rendering them viable for specific space applications.
Selected with care, space-grade PCB surface finishes provide oxidation resistance and guarantee dependable solder junctions. Frequently employed are immersion silver and immersion gold finishes (Figure 2).
A silver coating achieved through immersion offers superior conductivity and resistance to corrosion. By establishing a barrier on the copper traces, it effectively safeguards against oxidation and guarantees dependable electrical connections. Silver in immersion is ideally suited for high-frequency applications due to its excellent signal integrity and minimal insertion loss.
In contrast, immersion gold finish furnishes a surface that is exceptionally dependable and long-lasting for solder connections. The oxidation resistance of gold guarantees its exceptional solderability and long-lasting stability. In addition to its high electrical conductivity, immersion gold finish is frequently employed in applications that demand connections with exceptional dependability.
Figure 2: A PCB treated with immersion gold finish (Source: LinkedIn)
Complex and arduous, the design of space-grade PCBs necessitates an in-depth comprehension of the extremely harsh conditions that exist in space. Temperature extremes, radiation exposure, vacuum conditions, mechanical stresses, and stringent size and weight restrictions are all obstacles that engineers must overcome. Through the implementation of cutting-edge materials, effective thermal management strategies, radiation-resistant components, and scrupulous design methodologies, engineers are able to fabricate electronic systems that possess the ability to endure the demanding conditions inherent in space exploration.
The burners are the "business end" of your furnace. These shockingly simple components are the final step in the combustion process, where the correct ratio of air and fuel mix to produce heat for your home. A typical modern single-stage furnace uses a horizontal burner assembly composed of multiple individual burner units.
Each burner effectively acts as a nozzle, taking a pre-metered amount of natural gas and pushing it toward the combustion chamber. Once at the end of the burner, the gas mixes with air and ignites, producing energy the heat exchanger can extract to warm your home. However, despite their relative simplicity, furnace burners are still a common source of issues.
How Do Your Furnace's Burners Work?
In addition to the burners, two other critical components come into play when your furnace lights: the igniter and the flame sensor. While the igniter's role may be obvious, the flame sensor can be slightly more obscure. This device proves that your furnace's burners have successfully ignited, an essential step to prevent gas leaks from unlit burners.
Despite standard furnaces containing multiple individual burners, a typical furnace only has a single flame sensor and igniter. A typical configuration places the igniter at one end of the burner assembly and the flame sensor at the other. Channels in the burner assembly mean that once the first burner ignites, the rest will follow one after another.
This design means the flame sensor only needs to prove ignition on the final burner. If the final burner ignites, the other burners in a line before it must have also ignited. Conversely, an issue that prevents one or more burners from igniting usually stops the final burner from lighting, and the furnace's flame sensor will trigger a shutdown.
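The trial-for-ignition sequence described above can be sketched as a short control loop. This is an illustrative sketch only: the `GasValve`, `Igniter`, and `FlameSensor` interfaces and the timing constants are hypothetical stand-ins, and real furnace controllers implement this sequence in safety-certified firmware with manufacturer-set timings.

```python
import time

# Assumed values for illustration; real boards use manufacturer timings.
PROVE_TIMEOUT_S = 7.0   # flame-proving window per trial
MAX_TRIALS = 3          # ignition retries before lockout

def attempt_ignition(gas_valve, igniter, flame_sensor, now=time.monotonic):
    """Open gas, spark, and require the flame sensor (on the LAST burner
    in the line) to prove flame within the window; otherwise fail safe."""
    for _trial in range(MAX_TRIALS):
        igniter.energize()
        gas_valve.open()
        deadline = now() + PROVE_TIMEOUT_S
        while now() < deadline:
            if flame_sensor.flame_proven():
                igniter.deenergize()
                return True       # every burner upstream must be lit too
            time.sleep(0.05)      # a real controller paces its checks
        gas_valve.close()         # failed trial: stop gas immediately
        igniter.deenergize()
    return False                  # lockout after repeated failures
```

Note the fail-safe ordering: gas is shut off the moment a trial's proving window expires, and repeated failures end in lockout rather than endless retries.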
Why Do Burners Fail?
Burner failures typically result from underlying physical problems. Most burner and burner assemblies are metal, so rust is a common problem. Issues that result in incomplete or inefficient combustion can also create excessive soot, which can clog the orifices in the burners and prevent complete or consistent ignition.
Other issues include clogs in the channels between burners. These problems may technically not prevent the burners from working, but they will prevent the flame from igniting burners further down in the assembly. As a result, the flame sensor will fail to detect a flame on the final burner in the line and your furnace will shut down.
Burner issues can range from relatively minor problems that only require cleaning to more substantial issues that require an HVAC technician to replace the entire assembly. Depending on your furnace's design, you may also be able to replace individual burners. Since burners deal with the furnace's gas supply, the safest option is to allow a professional to conduct these repairs.
For more information on furnace services, contact a professional near you. |
Learning Disabilities Resources
This unit helps students understand how we learn, and that learning disabilities are brain-based and result in one or more areas of significant challenge. Students learn that someone who has a learning disability can use a variety of useful strategies, techniques and technology to assist in learning within their area of challenge and in daily life.
- Having a learning disability is among the many traits that contribute to making a person the individual that he or she is.
- All of us have strengths and challenges in learning. A strength for one person may be a challenge for another person. Each individual with a learning disability is unique, with a combination of strengths and challenges.
- People develop strategies to accommodate for their challenges by using areas of strength, and may use specific techniques or assistive technology to accomplish their learning goals.
- People with learning disabilities might feel frustrated at times when learning, but it can feel especially rewarding when a learning goal is accomplished, too.
- People with learning disabilities do lots of things like play sports and participate in other activities, sometimes with accommodations.
Fish In A Tree by Lynda Mullaly Hunt
For grades 4-8
In this book, sixth grader Ally has been smart enough to fool a lot of smart people. Every time she lands in a new school, she is able to hide her inability to read by creating clever yet disruptive distractions. She is afraid to ask for help but her newest teacher, Mr. Daniels, sees the bright, creative kid underneath the trouble maker. With his help, Ally’s confidence grows and she feels free to be herself. She discovers that there’s a lot more to her—and to everyone—than a label, and that great minds don’t always think alike.
Questions to consider:
- What are Ally’s strengths and challenges? What are yours?
- What strategies does Ally use to help with her schoolwork?
- How does Ally learn differently?
- Do you agree that everyone is smart in different ways? Why?
For Frequently Asked Questions with author Lynda Mullaly Hunt go to: https://www.lyndamullalyhunt.com/for-readers/faq-about-fish-in-a-tree/
Are you ready to do something without using a screen? Try this fun activity where you can celebrate differences by making your own unique fish in a tree!
- Download a fish picture below (or two or three) and print it.
- Make it your own unique fish by coloring it, adding other elements by gluing them on, and using any other ideas you might have. Be creative!
- Make a tree by drawing it or cutting it out of paper
- Cut out your fish and put it in your tree.
Click on a fish to get the downloadable pdf: |
Do you have ringing in your ears that’s driving you mad? Discover whether your tinnitus is inherited or what the cause might be.
What is tinnitus?
Tinnitus is the term for a person’s perception of a ringing, droning, or buzzing in the ear with no external stimulus present to explain the sensation. The term tinnitus translates to “ringing like a bell.”
How will my day-to-day living be impacted by tinnitus?
Tinnitus can disrupt daily life in numerous aggravating ways. It’s not a disease in and of itself; rather, it’s a symptom of an underlying condition such as hearing loss or an ear injury. You may hear tinnitus in one ear or both, and it can hinder your ability to concentrate.
However it manifests, tinnitus is disruptive. It can affect your sleep and even cause anxiety and depression.
What causes tinnitus?
Tinnitus can be long lasting or it can come and go. Short term types of tinnitus are typically caused by extended exposure to loud sounds, like a rock concert. There are a few medical issues that tend to go hand-in-hand with tinnitus.
Here are several conditions that generally accompany tinnitus:
- Anxiety or depression
- Acoustic neuroma where a benign tumor grows on the cranial nerve running from the inner ear to the brain
- Teeth grinding (bruxism) related to a TMJ disorder
- Excessive earwax accumulation
- Hearing loss related to aging
- Meniere’s Disease
- Exposure to loud sound for prolonged time periods
- Inner ear cell damage and irritation of the delicate hairs used to transport sound, causing arbitrary transmissions of sound to your brain
- Numerous medications
- Inner ear infections
- Injuries that impact nerves of the ear
- Trauma to the neck or head
- Changes in the composition of the ear bone
Is it possible that my parents could have passed down the ringing in my ears?
Generally, tinnitus isn’t an inherited condition. But the symptoms can be influenced by your genetics. For example, ear bone changes that can result in tinnitus can be passed down. Irregular bone growth can cause these changes and can be handed down through genes. A few of the other conditions that can cause ringing in the ear might be inherited from your parents, including:
- Being prone to inner ear infections or wax build-up
- Predisposition to anxiety or depression
- Certain diseases
You can’t directly inherit tinnitus, but there are disorders that become breeding grounds for tinnitus, and those you could have inherited.
If your family has a history of tinnitus, you should certainly come in for an evaluation. |
The “single biggest threat to UK and European nature in a generation” is looming as EU laws come under review shortly.
More than 90 voluntary organisations from across Europe, including Friends of the Earth, BirdLife International, and World Wildlife Fund, have come together to fight the threat to key EU laws that protect key wildlife habitats. The European Commission is reviewing the Nature Directives, putting legislation that has saved species from near extinction at risk.
The news comes just as the need to protect our species and habitats becomes crucial: 77% of Europe’s animals are threatened by habitat destruction and, over the last 30 years, Europe has lost a staggering 420 million birds. Changes in rural land use, overfishing and pollution mean that 25% of marine mammals, 38% of freshwater fish and 15% of land mammals all face extinction within the EU.
What do the Nature Directives do?
The Birds and Habitats Directives apply protective measures to one fifth of European land and 4% of marine sites, maintaining habitat diversity, regulating hunting within the areas and protecting migration paths and endangered species. Particular attention is paid to a network of ‘special protection areas’ known as Natura 2000, with officials closely examining the impact proposed projects or developments will have in these areas. When a proposal of overriding public interest needs to take place on the site (often for military or public health reasons) the commission can ask for compensatory measures and suggest ways to reduce environmental impact.
Weakening the legislation, which protects 27,000 sites and 1,000 species across Europe, would have devastating consequences for the animals facing extinction only a few decades ago. The white tailed eagle, which had disappeared from many regions of Europe by the 1970s, now numbers at around 10,000 pairs, while the world’s most endangered feline, the Iberian Lynx, has increased its population from only 100 animals to an estimated 230. Similarly, large carnivores including the brown bear, the wolf and the wolverine, which had almost been completely eradicated from Europe, have almost doubled in numbers in the last decade thanks to the legal protection extended to them in Natura areas.
When properly implemented, these laws work to balance the needs of ecosystems with expansion, and there is little evidence to suggest they block economic development. A cleaner, more sustainable environment benefits people's quality of life, and the safeguards placed around these areas account for less than 1% of administrative costs for local businesses. Farming, hunting and transport are regulated rather than banned, and the tourism and recreational activities related to these directives were estimated to bring an additional €50 to €90 billion to the economy in 2006 (Bio Intelligence Service 2011).
How can I help?
Supporters of the Nature Directives are asking for your support to help strengthen and implement these laws further rather than withdraw them. EU citizens have until the 24th July 2015 to take part in a public consultation while the Commission examines the effectiveness of the legislation. You can have your say in one easy click, as the voluntary organisations have launched a 'Nature Alert' electronic tool (see below).
As Angelo Caserta, Director of BirdLife Europe said: ‘We have the scientific evidence showing that these laws work when implemented, and numerous examples that these laws are no obstacle to any good economic development. So, my question to [the EU] is simple: with all there is to do in Europe, why undo nature laws?’
If you want to help defend European wildlife and habitats, simply opt to support the campaign. You can also make your voice heard by adding your name to the online petition: |
This waterwheel pump doesn’t come with specific building instructions, but the general idea is there. The concept is to use the energy of the flowing stream to spin the wheel and gather water into the tubing while lifting it higher than the outlet pipe.
source/image: Milan Vitéz
The bigger the diameter of the wheel, the better its ability will be to pump the water uphill. A spiral pump is constituted of a pipe wrapped around a horizontal axle, generating a spiral tube that is fastened to a water wheel.
The water wheel sits in flowing water, so that the river provides the energy necessary for the rotation of the wheel. Hence, the spiral tube also rotates.
When the inlet of the tube (the tube's external extremity) passes into the river, water enters the tube. This volume of water moves toward the outlet of the tube at the center of the wheel, where a straight tube connects to the end user.
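To get a feel for the claim that a bigger wheel pumps higher, here is a back-of-envelope sketch. The rule of thumb it encodes (each coil contributing a pressure head on the order of its own diameter) is an assumption made for illustration, not a figure from the source:

```python
# Back-of-envelope estimate of deliverable head versus wheel size.
# Assumption for illustration: each coil of the spiral contributes a
# pressure head roughly equal to its own diameter, so the total head
# is roughly the sum of the coil diameters.

def estimated_head_m(outer_diameter_m, n_coils, radial_pitch_m):
    """Sum coil diameters for a spiral whose coils shrink toward the
    axle by `radial_pitch_m` of radius per turn."""
    head = 0.0
    diameter = outer_diameter_m
    for _ in range(n_coils):
        if diameter <= 0:
            break
        head += diameter
        diameter -= 2 * radial_pitch_m  # diameter drops twice the pitch
    return head

# A 2 m wheel with 8 coils spaced 10 cm apart radially:
print(f"~{estimated_head_m(2.0, 8, 0.10):.1f} m of head")
```

Increasing the outer diameter increases every term in the sum, which is the intuition behind the statement that a bigger wheel pumps water higher uphill.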
There are plenty of things that determine whether a plant survives or wilts away, and many of those factors are out of a plant's control. But as Tia Ghose reports for Live Science, plants may actually make a decision about one key to their survival: when to germinate.
A new study, published in The Proceedings of the National Academies of Science, suggests that plant seeds use tiny “brains” to help them decide whether it’s the right time to break dormancy. As Ghose reports, the “brains” aren’t physically similar to human or other vertebrate grey matter. Instead the seeds' control center processes information much like brains do. They use bundles of specialized cells to process hormone signals that tell them when it's prime time and they should sprout.
“Plants are just like humans in the sense that they have to think and make decisions the same way we do,” George Bassel, plant biologist at the University of Birmingham and an author on the study, tells Ghose.
The researchers examined seeds from Arabidopsis, otherwise known as thale cress, a plant commonly used in studies due to its short life cycle. Seeds need to balance two important factors when germinating: temperature and competition. If they sprout too soon, they could face cold temperatures and potentially freeze to death. If they wait too long, earlier-sprouting plants can outcompete them.
The seed has two hormones: abscisic acid (ABA), which sends the signal to stay dormant, and gibberellin (GA), which initiates germination. The push and pull between those two hormones helps the seed determine just the right time to start growing.
According to Ghose, some 3,000 to 4,000 cells make up the Arabidopsis seeds. So the researchers cataloged these cells in an atlas to study this system. They then monitored where the two hormones were found within the seed. It turned out that the hormones clustered in two sections of cells near the tip of the seed—a region the researchers propose makes up the “brain.” The two clumps of cells produce the hormones, which they exchange as signals. When ABA, produced by one clump, is the dominant hormone in this decision center, the seed stays dormant. But as GA increases, the “brain” begins telling the seed it’s time to sprout.
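As a purely illustrative caricature of that push and pull (the update rules and rate constants below are invented for demonstration and are not the model from the paper), the decision can be sketched as a tiny simulation:

```python
# Toy two-cluster hormone switch. The update rules and rate constants
# are invented for demonstration; they are NOT the published model.
# One cluster produces ABA (stay dormant), the other GA (germinate);
# warmth erodes ABA and drives GA, and the seed "decides" once GA wins.

def simulate_decision(aba=1.0, ga=0.1, warmth=0.05, steps=200):
    """Return the first step at which GA exceeds ABA, or None."""
    for step in range(steps):
        aba += 0.02 * aba - 0.03 * ga - warmth * aba
        ga += 0.02 * ga + warmth
        aba = max(aba, 0.0)
        if ga > aba:
            return step   # germination signal dominates
    return None           # stays dormant for the whole run

print(simulate_decision(warmth=0.05))   # warm: sprouts sooner
print(simulate_decision(warmth=0.005))  # cold: sprouts much later
print(simulate_decision(warmth=0.0))    # no warmth signal: stays dormant
```

Even this crude version shows the behavior the researchers describe: a stronger environmental "go" signal tips the balance toward GA earlier, and without it the dormancy signal keeps winning.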
This splitting of the command center helps the seed make more accurate decisions, says biomathematician Iain Johnston, who was also an author on the study. “The separation of circuit elements allows a wider palette of responses to environmental stimuli,” he says in a press release. “It’s like the difference between reading one critic's review of a film four times over, or amalgamating four different critics' views before deciding to go to the cinema.”
The latest study adds to the growing body of evidence that plant complexity has been underestimated in the past. Mounting evidence suggests that plants may have some means of rudimentary communication. Just last year, researchers discovered that a type of fungus can serve as an underground forest "internet" capable of transporting carbon, nutrients and signal chemicals between trees. There is even some evidence that plants can send signals using electrical pulses, vaguely akin to how the human nervous system works (but with many, many important distinctions).
The idea of seed "brains" not only adds to this vegetative capacity but could also have big impacts on agriculture, leading scientists to control seed germination and increase efficiency of plant growth. |
Babies who are born very prematurely or who have respiratory problems shortly after
birth are at risk for bronchopulmonary dysplasia (BPD), sometimes called chronic lung
disease. Although most infants fully recover with few long-term health problems, BPD
can be serious and need intensive medical care.
Babies aren't born with BPD. It develops when premature
infants with respiratory distress syndrome (RDS) need help
to breathe for an extended period, which can lead to inflammation
(swelling) and scarring in the lungs.
Bronchopulmonary dysplasia (brahn-ko-PUL-moh-nair-ee dis-PLAY-zhee-uh) involves
abnormal development of lung tissue. It most often affects premature babies, who are
born with underdeveloped lungs.
"Dysplasia" means abnormal changes in the structure or organization of a group
of cells. The cell changes in BPD take place in the smaller airways and lung alveoli,
making breathing difficult and causing problems with lung function.
Along with asthma and cystic fibrosis,
BPD is one of the most common chronic lung diseases in children. According to the
National Heart, Lung, and Blood Institute (NHLBI), there are between 5,000 and 10,000
cases of BPD every year in the United States.
Babies with extremely low birth weight (less than 2.2 pounds or 1,000 grams) are
most at risk for developing BPD. Although most of these infants eventually outgrow
the more serious symptoms, in rare cases BPD — in combination with other complications
of prematurity — can be fatal.
Causes of BPD
Most BPD cases affect premature infants (preemies), usually those who are born
more than 10 weeks early and weigh less than 4.5 pounds (2,000 grams). These babies
are more likely to develop RDS (also called hyaline membrane disease), which is a
result of tissue damage to the lungs from being on a mechanical ventilator for a long time.
Mechanical ventilators do the breathing for babies whose lungs are too immature
to let them breathe on their own. Oxygen is delivered to the lungs through a tube
inserted into the baby's trachea (windpipe) and is given under pressure from the machine
to properly move air into stiff, underdeveloped lungs.
Sometimes, for these babies to survive, the amount of oxygen given must be higher
than the oxygen concentration in the air we commonly breathe. This mechanical ventilation
is essential to their survival. But over time, the pressure from the ventilation and
excess oxygen intake can injure a newborn's delicate lungs, leading to RDS.
Almost half of all extremely low birth weight infants will develop some form of
RDS. RDS is considered BPD when preemies still need oxygen therapy at their
original due dates (past 36 weeks' postconceptional age).
BPD also can be due to other problems that can affect a newborn's fragile
lungs, such as trauma, pneumonia,
and other infections. All of these can cause the inflammation and scarring associated
with BPD, even in a full-term newborn or, very rarely, in older infants and children.
Among premature babies who have a low birth weight, white male infants seem
to be at greater risk for developing BPD, for reasons unknown to doctors. Genetics
may play a role in some cases of BPD, too.
Important factors in diagnosing BPD are prematurity, infection, mechanical ventilator
dependence, and oxygen exposure.
BPD is usually diagnosed if an infant still needs additional oxygen and continues
to show signs of respiratory problems after 28 days of age (or past 36 weeks' postconceptional
age). Chest X-rays may be helpful in making the diagnosis. In babies with RDS, the
X-rays may show lungs that look like ground glass. In babies with BPD, the X-rays
may show lungs that appear spongy.
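The timing arithmetic in that definition is simple to make concrete. The sketch below is for illustration only (not a diagnostic tool), and the function names are our own:

```python
# Illustrative arithmetic only; not a diagnostic tool. The thresholds
# follow the timing stated in the text: supplemental oxygen at 28 days
# of age, or past 36 weeks' postconceptional (postmenstrual) age.

def postmenstrual_age_weeks(gestation_at_birth_weeks, days_old):
    """Age counted from the start of the pregnancy, in weeks."""
    return gestation_at_birth_weeks + days_old / 7.0

def meets_bpd_timing(gestation_at_birth_weeks, days_old, on_oxygen):
    """True when oxygen need persists past either timing threshold."""
    if not on_oxygen:
        return False
    pma = postmenstrual_age_weeks(gestation_at_birth_weeks, days_old)
    return days_old >= 28 or pma > 36.0

# A baby born at 26 weeks' gestation, now 70 days old and on oxygen:
print(postmenstrual_age_weeks(26, 70))           # 36.0 weeks
print(meets_bpd_timing(26, 70, on_oxygen=True))  # True
```

A baby born at 26 weeks reaches 36 weeks' postmenstrual age at 70 days old, which is why very premature infants can meet the timing criterion on either count.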
Treatment of BPD
No available medical treatment can immediately cure bronchopulmonary dysplasia.
Treatment is focused on supporting the breathing and oxygen needs of infants with
BPD and to help them grow and thrive.
Babies first diagnosed with BPD receive intense supportive care in the hospital,
usually in a neonatal intensive care
unit (NICU) until they can breathe well on their own, without the support of a mechanical ventilator.
Some babies also may get jet ventilation, a continuous low-pressure ventilation
that helps minimize the lung damage from ventilation that contributes to BPD. Not
all hospitals use this procedure to treat BPD, but some with large NICUs do.
Infants with BPD are also treated with different kinds of medicines that help to
support lung function. These include bronchodilators (such as albuterol) to help keep
the airways open, and diuretics (such as furosemide) to reduce fluid buildup in the lungs.
Severe cases of BPD might be treated with a short course of steroids. This strong
anti-inflammation medicine has some serious short-term and long-term side effects.
Doctors would only use it after a discussion with a baby's parents, informing them
of the potential benefits and risks of the drug.
Antibiotics are sometimes needed to fight bacterial infections because babies with
BPD are more likely to develop pneumonia. Part of a baby's treatment may involve the
administration of surfactant, a natural lubricant that improves breathing function.
Surfactant production may be affected in babies with RDS who have not yet developed
BPD, so they might be given natural or synthetic surfactant to help protect against BPD.
Also, babies sick enough to be hospitalized with BPD may need feedings of high-calorie
formulas through a gastrostomy tube (G-tube).
This tube is inserted through the abdomen and delivers nutrition directly to the stomach
so that babies get enough calories and start to grow.
In severe cases, babies with BPD cannot use their gastrointestinal systems to digest
food. These babies require intravenous (IV) feedings — called TPN, or total
parenteral nutrition — made up of fats, proteins, sugars, and nutrients. These
are given through a small tube inserted into a large vein through the baby's skin.
The time spent in the NICU for infants with BPD can range from several weeks to
a few months. The average length of intensive in-hospital care for babies with BPD
is 120 days. Even after leaving the hospital, a baby might need continued medication,
breathing treatments, or even oxygen at home.
Most babies are weaned from supplemental oxygen by the end of their first year,
but a few with serious cases may need a ventilator for several years or, rarely, even
their entire lives.
Improvement for any baby with BPD is gradual. Many babies diagnosed with BPD will
recover close to normal lung function, but this takes time. Scarred, stiffened lung
tissue will never work as well as it should. But as infants with BPD grow, new
healthy lung tissue can form and grow, and might eventually take over much of the
work of breathing for damaged lung tissue.
Complications of BPD
After coming through the more critical stages of BPD, some infants still have longer-term
complications. They are often more at risk for respiratory infections, such as
influenza (the flu), respiratory
syncytial virus (RSV), and pneumonia. And when they get an infection, they
tend to get sicker than most children do.
Another respiratory complication of BPD includes excess fluid buildup in the lungs,
known as pulmonary edema, which makes it more difficult for air to travel through the lungs.
Occasionally, kids with a history of BPD also may develop complications of the
circulatory system, such as pulmonary hypertension in which the pulmonary arteries
— the vessels that carry blood from the heart to the lungs — become narrowed
and cause high blood pressure. But this is not common.
Side effects from being given diuretics to prevent fluid buildup can include kidney
stones; hearing problems; and low potassium, sodium, and calcium levels.
Infants with BPD often grow more slowly than other babies, have problems gaining weight,
and tend to lose weight when they're sick. Premature infants with severe BPD also
have a higher incidence of cerebral palsy.
Overall, though, the risk of serious permanent complications from BPD is fairly small.
Caring for Your Baby
Parents play a critical role in caring for an infant with BPD. An important
precaution is to reduce your baby's exposure to potential respiratory infections.
Limit visits from people who are sick, and if your baby needs childcare, pick a small
center, where there will be less exposure to sick kids.
Making sure that your baby receives all recommended
vaccinations is another important way to help prevent problems. And keep
your child away from tobacco smoke, particularly in your home, as it is a serious respiratory irritant.
If your baby requires oxygen at home, the doctors will show you how to work
the tube and check oxygen levels.
Children with asthma-type symptoms may need bronchodilators to relieve asthma-like
attacks. You can give this medicine to your child with a puffer or nebulizer, which
produces a fine spray of medicine that your child then breathes in.
Because infants with BPD sometimes have trouble growing, you might need to feed
your baby a high-calorie formula. Formula feedings may be given alone or as a supplement
to breastfeeding. Sometimes, babies with BPD who are slower to gain weight will go
home from the NICU on G-tube feedings.
When to Call the Doctor
Once a baby comes home from the hospital, parents still need to watch for signs
of respiratory distress or BPD emergencies (when a child has serious trouble breathing).
Signs that an infant might need immediate care include:
faster breathing than normal
working much harder than usual to breathe:
belly sinking in with breathing
pulling in of the skin between the ribs with each breath
growing tired or lethargic from working to breathe
more coughing than usual
panting or grunting
pale, darker, or bluish skin color that may start around the lips or fingernails
trouble feeding or excess spitting up or vomiting of feedings
If you notice any of these symptoms in your baby, call your doctor or get emergency
medical care right away. |
How To Tell Stink Bugs & Kissing Bugs Apart
There are about 10 species of kissing bugs found in the U.S., with two of the more common being the conenose bug, Triatoma sanguisuga (LeConte), and the western bloodsucking conenose bug, Triatoma protracta (Uhler).
There are many species of stink bugs, but the one most likely to become a nuisance is the brown marmorated stink bug (BMSB).
Do They Bite?
Both BMSBs and kissing bugs have piercing/sucking mouthparts, but only kissing bugs bite people, pets, and other animals.
Both of these insects develop by changing from eggs, to nymphs, and then to adults. Adults have well developed wings and are strong fliers, but the immature nymphs of both insects are wingless.
- BMSBs have a shield shaped body that is about ½ - ⅝ inches long. They are mottled brown, grey and light black in color, and they have white segments on their antennae. They have two pairs of wings that are held flat over their back, and the outer edge of their abdomen is exposed since their wings do not completely cover their body. Their head is blunt and less elongated than the kissing bug’s head.
- Kissing bugs are commonly called conenose bugs because of the shape of their cone-shaped heads. These insects are about ¾ - 1 inch long, are dark brown or black in color, and some species have red, yellow or tan markings on the abdomen. Kissing bug legs are long and thin, and their mouthparts extend well beyond their heads.
BMSBs prefer to feed on soybeans and fruit, and have an affinity for apple, citrus, and peach fruit trees. Stink bugs also feed on the leaves of many ornamental plants.
Kissing Bugs
Kissing bugs feed exclusively on the blood they get from their vertebrate host animals. Some of their food sources are wild and domestic animals such as:
- Domestic dogs
- Sometimes people
Biology & Life Cycle
As mentioned above, BMSBs develop through three life stages – eggs, nymphs and adults.
The life cycle of BMSBs generally involves adults actively mating, reproducing and feeding during the months of spring through late Fall. However, this insect is also very active prior to the onset of cold winter weather as they seek shelter to spend the winter in a dormant phase known as diapause.
BMSBs may overwinter in many places, some of which include outdoor debris piles, dead trees and protected areas such as:
- Storage areas
- Behind siding
It is important to note that entering into diapause may not end their season of activity. If the weather warms up for long enough, indoor overwintering stink bugs might be misled into thinking it’s time to become active again.
When this happens, homeowners are likely to see BMSBs flying around windows, doors, and other sources of light in hopes of making their way outdoors.
Kissing bugs like to live near nests or resting areas of their hosts. These bugs may reside indoors, but usually live outdoors.
Typical outdoor locations include:
- Dog houses
- Pet kennels
- Underneath piles of rocks, wood, brush and tree bark.
These insects emerge from their daytime locations to take a blood meal during the night. They prefer to bite exposed skin, and some species of kissing bugs favor biting around a person’s face or close to the lips.
Bites are more or less painless and usually do not wake a person who is asleep. Other common bite sites are the hands, arms, and feet.
BMSBs are found in approximately 41 states, wherever their preferred foods are grown.
Kissing bugs are generally found in the southern, southeastern, and southwestern states.
Damage & Medical Importance for Stink Bugs vs. Kissing Bugs
Stink Bugs
Stink bugs that feed on fruit cause a distortion of the fruit known as “cat facing.” This renders the damaged fruit worthless, or worth much less than standard market prices.
As mentioned above, BMSBs also feed on soybeans and their feeding can dramatically reduce the yield of that crop. For home and building owners, BMSBs become nuisances when they begin to seek shelter.
Kissing Bugs & Chagas Disease
Kissing bugs can transmit Chagas disease, an emerging vector-borne disease in the U.S. and parts of Central and South America.
During or soon after a kissing bug takes a blood meal, the insect also defecates. A person who rubs the feces into a break in the skin, swallows kissing bug feces, or rubs feces into the eyes may become infected with the disease.
Prevention and Control
Prevention of BMSBs and kissing bugs is quite similar. Seal all entry points that might allow them to get inside, and minimize their available outdoor habitat. Also, removing clutter indoors is helpful, as it reduces the areas where they can go unseen.
While building construction and removal of habitat is valuable, sometimes that is not enough for your pest issues.
In such circumstances, contact Orkin for a free inspection. They will form a science-based prevention and control program for your property.
When I was writing about the crop fires in northern India last fall, it was obvious that 2016 was a pretty severe burning season. For several weeks, large plumes of smoke from Punjab and Haryana blotted out towns and cities along the Indo-Gangetic plain in satellite images.
But I didn’t realize just how severe the fires were until Hiren Jethva, an atmospheric scientist at NASA Goddard Space Flight Center, crunched the numbers. By analyzing satellite records of fire activity, he found that the 2016 fires were the most severe the region has seen since 2002 in terms of the number of fire hot spots satellites detected. In terms of the amount of smoke detected, the 2016 burning was the most severe observed since 2004. He used data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor on Aqua and the Ozone Monitoring Instrument (OMI) on Aura to reach his conclusions.
Smoke and fire in northern India have become common in October and November during the last three decades because farmers increasingly use combines to harvest rice and wheat. Since these machines leave stems and other plant residue behind, farmers have started to use fire to clear the leftover debris away in preparation for the next planting.
For more details about how 2016 compared to past years, see the charts below, which Jethva prepared. His explanation for each chart is in italics.
Aqua Detected More Fires in 2016 Than During Any Year Since 2002
The satellite-based sensor MODIS can detect the signal of fire hot spots, also called thermal anomalies, because the signal measured by the sensor in space in the thermal infrared bands appears to be an anomaly compared to the signal emanated from the background land. Since its launch in 2002, the MODIS on NASA’s Aqua satellite has detected thermal anomalies such as wildfires, agricultural fires, and gas flares on a daily basis.
The yearly evolution of total number of fires and Fire Radiative Power (FRP) — the heat energy produced from these fires — detected over Punjab and Haryana showed 2016 to be an anomalous year, with the highest number of crop residue fires (18,707) and the highest FRP in relation to the fires in all other years over the region. In comparison to 2015, the total number of fire hot spots detected over the region in 2016 was 43 percent higher; the difference is 25 percent if the hot spot counts are averaged over the last five years, i.e., 2011-2015. A careful look at the time-evolution of fire counts also reveals an increasing trend in the total number of fires over the region.
Punjab Skies Were Unusually Smoky
These fires produced huge amounts of fine aerosol particles and trace gases, which can potentially impact the climate and degrade air quality drastically at ground level. NASA’s A-train sensors such as the Ozone Monitoring Instrument (OMI) on the Aura satellite and the MODIS on Aqua offer capabilities to measure the total amounts of airborne particles. The UV Aerosol Index (UV-AI), which is an excellent indicator of the column amounts of light-absorbing particles in clear as well as cloudy atmospheres, showed 2016 was the smokiest season on record since 2004.
Greener Fields and Larger Harvests Lead to More Fires
Many studies have shown that satellite measurements of the “greenness” of crop fields prior to harvest and crop yield after the harvest are strongly correlated. The normalized difference vegetation index (NDVI), which is derived from satellite measurements of red and near-infrared light, is one useful measure of greenness. As seen in the charts above, there seems to be a one-to-one relationship between NDVI measured by the MODIS sensor on Aqua prior to harvest (September) and the total number of fire hot spots observed during harvest season (Oct-Nov). This suggests that the increase in the number of fires is likely related to increasing crop yields.
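The NDVI described above has a simple closed form: the difference between near-infrared and red reflectance, divided by their sum. A minimal sketch, with reflectance values invented purely for illustration:

```python
# NDVI = (NIR - Red) / (NIR + Red): the standard formula behind the
# "greenness" measure described above. Values range from -1 to 1;
# dense, healthy vegetation reflects strongly in the near-infrared,
# pushing NDVI toward 1.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances, invented for illustration:
print(round(ndvi(0.50, 0.10), 2))  # 0.67 (dense green crop canopy)
print(round(ndvi(0.30, 0.25), 2))  # 0.09 (bare or sparsely vegetated soil)
```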
Annual herb or perennial sub-shrub up to 20 cm tall, often forms a mat up to 2 m in diameter. Pale four-petalled yellow flowers.
Young plants initially have large bronze coloured foliage, but as the plants mature the leaves reduce in size and change to green. Depending on environmental conditions, plants can be annual or perennial. Growth is usually low and spreading but shrubs can grow up to 40cm.
Leaves: Distinctive leaves made up of two fleshy Y-shaped leaflets looking like butterfly wings. Leaves are ovate with a narrow end at their base. Leaves grow between 1-4cm long and are fleshy, a dull grey-green or green.
Flowers: Bright yellow in colour with four petals growing between 8-15 mm.
Fruits/seeds: Fleshy seed capsules which are ovoid-oblong shape with 4 angled edges.
What to Observe
- First fully open single flower
- Full flowering (record all days)
- End of flowering (when 95% of the flowers have faded)
- No flowering
- Fruits/seeds (record all days)
ClimateWatch Science Advisor
We expect plants to start shooting and flowering earlier in the year as a result of climate change warming the Earth. They may also start appearing in new areas, as warmer temperatures enable them to live in environments that were previously too cold for them. Help scientists answer the question: "How are our animals, plants and ecosystems responding to climate change?"
When To Look
Flowering occurs May-December
Note: ClimateWatch is looking for any changes in the timing of these events so remember to keep a lookout all year!
Where To Look
Grows on a variety of soils, predominantly loamy sands, often in mallee communities; widespread west of the Great Divide. Also in SE Queensland, NW Victoria, South Australia, southern Northern Territory and southern Western Australia.
Note: ClimateWatch is looking for any changes outside of their known ranges so remember to keep a lookout beyond these regions too! |
The “law of diminishing returns” is one of the best-known principles outside the field of economics. It was first developed in 1767 by the French economist Turgot in relation to agricultural production, but it is most often associated with Thomas Malthus and David Ricardo. They believed human population would eventually outpace food production, since land is an integral factor of production that exists in limited supply. In order to increase production to feed the population, farmers would have to use less fertile land and/or increase production intensity on land currently under production. In both cases, there would be diminishing returns.
The law of diminishing returns, which is related to the concept of marginal return or marginal benefit, states that if one factor of production is increased while the others remain constant, the marginal benefits will decline and, after a certain point, overall production will also decline. While initially there may be an increase in production as more of the variable factor is used, eventually it will suffer diminishing returns as more and more of the variable factor is applied to the same level of fixed factors, increasing the costs in order to get the same output. Diminishing returns reflect the point at which the marginal benefit begins to decline for a given production process. For example, the table below sets the following conditions on a farm producing corn:
[Table not recovered: corn output and marginal benefit by number of workers]
It is with three workers that the farm production is most efficient because the marginal benefit is at its highest. Beyond this point, the farm begins to experience diminishing returns and, at the level of 6 workers, the farm actually begins to see decreasing returns as production levels decline, even though costs continue to increase. In this example, the number of workers changed, while the land used, seeds planted, water consumed, and all other inputs remained the same. If more than one input were to change, the production results would vary and the law of diminishing returns may not apply if all inputs could be increased. If this were to lead to increased production at lower average costs, economies of scale would be realized.
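The corn-farm example can be sketched numerically. All figures below are hypothetical, chosen only to match the discussion in the text (marginal benefit peaks with the third worker, and total output falls once a sixth worker is added):

```python
# Hypothetical total corn output as workers are added, with land, seed,
# water, and all other inputs held fixed (numbers invented for illustration).
# Index n holds the total output with n workers.
output = [0, 10, 24, 40, 50, 56, 54]

# Marginal product of the n-th worker: the extra output that worker adds.
marginal = [output[n] - output[n - 1] for n in range(1, len(output))]
print(marginal)  # [10, 14, 16, 10, 6, -2]

# Diminishing returns set in after the 3rd worker (peak marginal product);
# with the 6th worker, returns are not just diminishing but negative.
peak = marginal.index(max(marginal)) + 1
print(peak)  # 3
```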
The concept of diminishing returns is as important for individuals and society as it is for businesses because it can have far-reaching effects on a wide variety of things, including the environment. This principle, although first thought to apply only to agriculture, is now widely accepted as an economic law that underlies all productive endeavors, including resource use and the cleanup of pollution.
Garrett Hardin effectively applied the theory in his 1968 article on the “tragedy of the commons,” in which he described the use of many common property resources, such as air, water, and forests, as being subject to diminishing returns. In this case, individuals acting in their own self-interest may “overuse” a resource because they do not take into consideration the impact it will have on a larger, societal scale. Economists can also expand the theory to include limitations on common resources. The services that fixed natural resources are able to provide (for example, acting as natural filtration systems) begin to diminish as contaminants and pollutants in the environment increase. Externalities such as these can lead to the depletion of resources and/or create other environmental problems.
However, the point at which diminishing returns can be illustrated is often very difficult to pinpoint because it varies with improved production techniques and other factors. In agriculture, for example, the debate about adequate supply remains unclear due to the uneven distribution of population and agricultural production around the globe and continued improvements in agricultural technology over time.
The challenge, whether it be local, regional, national, or global, is how best to manage the problem of declining resource-to-people ratios that could lead to a reduced standard of living. Widely used 'solutions' for internalizing potential externalities include taxes, subsidies, and quotas. Often, there are attempts to find “bigger picture” solutions that focus on what many see as the primary causes, namely population growth and resource scarcity. Reducing population growth, along with increased technological innovation, may slow the growth in resource use and possibly offset the impact of diminishing returns. These potential benefits are a key reason why population growth and technological innovation are most often used in analyzing sustainable development possibilities.
Updated by Dawn Anderson
Diminishing Returns: World Fisheries Under Pressure This article, by the World Resources Institute, shows the problems fisheries have been experiencing over the past fifty years. Declining catch rates have threatened the industry, which knows all too well the problems with diminishing returns and overfishing.
Diminishing Returns Dr. Roger A. McCain, professor of economics at Drexel University, explains diminishing returns on his website and provides a further, in-depth look at the key concepts related to diminishing returns.
Law of Diminishing Returns Dr. Paul M. Johnson, from Auburn University, provides a thorough definition of the law of diminishing returns. He even includes garden and factory examples to illustrate his point.
Finding Energy Resources In this exercise, students learn about concepts of scarcity and energy resources. By dividing into teams and looking for and collecting beads representing energy resources, students learn how their value increases as the resources become scarce. [Grades 5-8]
EcEdWeb: Production and Costs This University of Nebraska at Omaha lesson allows students to learn about diminishing returns in the production process. A hands-on activity makes the concepts concrete by demonstrating how production factors influence output. [Grades 9-12] |
Welcome back to our JOINs course! In this part, we'll focus on joining tables with themselves!
Let's consider the following situation: we have information about employees and their supervisors in a single table, like this:
As you can see, there are four people. John is a supervisor to both Casper and Kate, and Casper is Peter's supervisor.
The employee table stores data in a hierarchical structure: employees and their supervisors. Storing a structure like this in a table is quite common. Imagine you want to list each employee's name along with the name of their supervisor. That's where JOINing a table with itself comes in handy:
SELECT
  emp.name as employee_name,
supervisor.name as supervisor_name
FROM employee as emp
JOIN employee as supervisor
ON emp.supervisor_id = supervisor.id
When you join a table with itself, you must alias both occurrences of the table name. Moreover, the column names you refer to must be preceded by the alias of the table you want. This way, the database can distinguish which copy of the table you want to select a particular column from. |
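The self-join above can be run end-to-end with SQLite. Here is a runnable sketch using Python's built-in sqlite3 module; the ids and rows are invented to mirror the hierarchy described in the text (John supervises Casper and Kate, and Casper supervises Peter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, supervisor_id INTEGER)"
)
conn.executemany(
    "INSERT INTO employee VALUES (?, ?, ?)",
    [(1, "John", None), (2, "Casper", 1), (3, "Kate", 1), (4, "Peter", 2)],
)

# Both occurrences of the table get an alias, so each column reference
# says which copy of the table it comes from.
rows = conn.execute("""
    SELECT emp.name AS employee_name,
           supervisor.name AS supervisor_name
    FROM employee AS emp
    JOIN employee AS supervisor
      ON emp.supervisor_id = supervisor.id
    ORDER BY emp.id
""").fetchall()
print(rows)  # [('Casper', 'John'), ('Kate', 'John'), ('Peter', 'Casper')]
```

Note that John has a NULL supervisor_id, so the inner JOIN leaves him out of the result; a LEFT JOIN would keep him with a NULL supervisor_name.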
Broadly speaking, phonological awareness (PA), is a sensitivity to the sounds and sound patterns of language. PA can be measured and can help form accurate predictions about reading ability.
PA is the ability to perceive and manipulate the following word parts:
In this self-paced mini-course, Lyn will define and demonstrate the above word parts, linking them to David Kilpatrick’s recent work on PA and to the underlying research that has gone into our understanding of the link between PA and literacy success. There will be some practical demonstrations of PA lessons using concrete manipulatives.
Course access is for six weeks from first login.
Participants will receive:
- A course handout including theoretical and practical materials
- Access to a video presentation on the subject of phonological awareness
- Access to forums and quizzes for collaboration and consolidation
- Access to a range of low or no-cost resources to support understanding and implementation of high quality phonological awareness activities in a classroom and clinical setting
COURSE LAUNCH: November 5th 2020 |
Workbooks and activity sheets developed to cover key learning
competencies aligned with DepEd’s K to 12 Curriculum.
Designed in five (5) different subjects: Science, Math,
English, Filipino, and Social Studies, these workbooks
feature age-appropriate activities to challenge learners’
understanding of the lessons and help them put their
learnings into practice. |
Let's play counting…
Teach young children about numbers to 10 and lay the foundations of mathematical understanding using our wooden ten frames.
Ten Frames are perfect to use at home or in the classroom and offer an effective and simple way to learn about numbers. They help children “see” what they are learning and are ideal for exploring early maths concepts such as counting, number recognition, number bonds and so much more.
Our Ten Frame is made from sustainably sourced wood and has no oils, varnishes or paints added, just simple natural beechwood. A wonderful natural maths resource which can be passed down through generations to aid children's maths learning.
Ten Frame learning and activities
- Learn to count 1 -10
- Develop number recognition
- Visualise numbers
- Explore number bonds to 10
- Broaden maths vocabulary
- Understand Place Value
- Develop addition and subtraction skills
- Teach children to subitize - an essential beginning step towards addition.
Dimensions: 17.5 x 7.5 x 2cm 7 x 3 x 0.75 inches
Toy Safety: Suitable for ages 3 years and over. |
The Highway, Brighton, East Sussex, BN2 4PA
A Compassionate Learning Community
Phonics is the term for the teaching and learning that develops reading and writing skills. We use the Primary National Strategy 'Letters and Sounds' to plan our phonics lessons. In this scheme, children progress through 6 phases of learning.
This begins in Nursery and contains lots of activities to develop children's speaking and listening skills. As it is so important for reading and writing that you can hear and speak the language well, this learning carries on alongside all the 5 following phases.
How can I help my child with Phase 1?
I spy: Help your child to listen to words and begin to know the sound they start with. You can say the words really slowly to help them, like this: "mmmmm-u-m". You could then begin to play games such as 'I Spy' using the letter sounds to make it fun.
Sound out words: Once your child has become more familiar with the beginning sounds in words, you can begin to sound out short words and see if they can tell you what they are, such as "m-o-p, c-a-t, p-i-n, d-a-d". You can make it like a game with points, and when they get really sure of themselves, they can sound out words for you to guess.
Magnetic letters: Magnetic letters on the fridge are so helpful for young children to get to know the letters of the alphabet. If you get a set for your fridge, you can start to get to know them by playing a matching game, such as giving them a letter 'm' and asking them to find 3 the same in 1 minute.
Some Nursery children dip into this phase if they are ready to, but the main teaching begins in Reception year. Children learn 18 sounds and letters to begin with, and a set of 'tricky' words that can't be sounded out. Children do lots of work with the first 18 sounds, using them to spell out words and practise writing them.
How can I help my child with phase 2?
Throughout the Reception year you will get lots of support from the Reception team to help your child with phonics. You will be given resources to use at home and homework to practise new sounds.
Try out the phonics play website for some fun phonics games to play.
Download your own phase 3 frieze to put up on the wall at home. |
Highlights a crucial cornerstone of character, perseverance, by showing real kids who achieved amazing results for themselves and their communities. Children need perseverance to master new skills in school, extracurricular activities, and life in general, but like any other character trait, it requires practice. Kids can develop perseverance by studying hard for tests, practicing a musical instrument at home, and refusing to give up when obstacles get in the way of their goals. Teaches that it's okay to fail. Both successful and unsuccessful people fail, but the successful ones learn from their mistakes and keep on trying.
Published at Wednesday, 16 September 2020. Addition Worksheets. By Rosamonde Lacroix.
Most volumes begin with an explanation of basic arithmetic operations, namely addition, subtraction, multiplication, and division. Reference tables are supplied to provide clues for quick mental arithmetic and mastery of math facts. When ready to be tested, the student can select a drill, each of which has 10 questions selected from a database of number pairs for calculation. The Basic Level volumes use simple single-digit numbers and the interactive math software at the Advanced Level uses mostly double-digit numbers for math practice problems. Each drill is then scored and timed, with the results saved. With the test records, students can follow their own progress, and adults who may be supervising can monitor progress and assess if there are any learning issues that require intervention.
To illustrate: if you shape clay while it is still soft, it will be easier and more flexible; similarly, a child's young brain is easy to mold. We can start training them while they are still inside the mother's womb. And when the child is born, his brain is ready to learn. A child's fastest development takes place during the first year after birth. He starts to recognize movements, sounds, shapes, colors, and even counting. So if you develop your child earlier, the result will be better. Kindergarten will be too late.
In all stages above, it is imperative to do oral and mental math. Without this skill, your child will be forever stuck with a pencil and paper. And the more work done on paper with a pencil, the more there is a chance for an error. And, your child will be stuck following steps instead of "just doing math." Doing oral and mental math makes a person very comfortable with math. Many adults have math phobia, due in no small part to not being able to do mental math. How to do it? While driving, cooking, shopping, sightseeing, almost any situation, you can drill your child on math. If a box costs $2, how much do 2 boxes cost? How many horses do you see? Count the blue cars. Are there more boys than girls? Anything! Be creative. You can even get them to recite the times tables. This will also set the stage for an important skill they must master. Word problems! How many times have you heard people say they cannot do word problems? The oral problems you make up are just another form of word problems. If your child is used to doing math, without a problem written on paper, your child will not fear word problems. If you adamantly do the above, there is one last step. Sometimes it is out of your control, but do your best! Put your child in a class where there is an effective algebra teacher, and all math classes beyond sixth grade. You may find this hard, but the only one fighting for your child is you!
“A team of researchers from MIT has developed an artificial intelligence system that can fool human judges into thinking it’s a person when it comes to drawing unfamiliar letter-like characters.
You can think of the experiment, detailed in the new issue of Science, as a kind of visual Turing test. The software and a human are shown a new character, something that looks like a letter but isn't quite. (You can see some examples in the image above.) Then, they were both asked to produce subtle variations on the character. In other tests, the human and computer were instead supplied with a series of unfamiliar characters and asked to produce a new one that fit with the batch.” said gizmodo.com
“A team of human judges was then asked to work out which results were produced by computer, and which by humans. Across all the tasks, the judges could only identify the AI’s efforts with about 50 percent accuracy, which is the same as chance.
(Think you can do better than the judges? In the image at the top, a panel of nine shapes was produced by either the AI or a human for each character. Can you identify which panel was generated by a machine? The answers are at the bottom of the page.)
It may seem like a strange experiment, but it has some profound implications. Usually, you see, AI systems have to be trained on massive data sets before they can perform a task. Unlike computers, humans can carry out what the researchers refer to as “one-shot learning” with comparative ease.” said gizmodo.com
The researchers suggest that they’ve created an AI that can do the same, using a technique called Bayesian Program Learning. That identifies and learns characters with an approach that’s similar to the way humans understand concepts. The team explains how the software works:
Whereas a conventional computer program systematically decomposes a high-level task into its most basic computations, a probabilistic program requires only a very sketchy model of the data it will operate on. Inference algorithms then fill in the details of the model by analyzing a host of examples.
Here, the researchers’ model specified that characters in human writing systems consist of strokes, demarcated by the lifting of the pen, and that the strokes consist of substrokes, demarcated by points at which the pen’s velocity is zero.
Armed with that model, the system then analyzed hundreds of motion-capture recordings of humans drawing characters in several different writing systems, learning statistics on the relationships between consecutive strokes and substrokes as well as on the variation tolerated in the execution of a single stroke.
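To make the stroke/substroke idea concrete, here is a toy generative sketch. Everything in it (the function names, the uniform distributions, the canvas coordinates) is our own illustration; the actual model learns its stroke statistics from motion-capture data rather than drawing them uniformly at random.

```python
import random

def sample_character(max_strokes=4, max_substrokes=3):
    """Toy generative sketch: a character is a list of strokes,
    each stroke a list of substrokes (here, random line segments).
    A new stroke begins where the pen is lifted; a substroke ends
    where the pen's velocity would hit zero."""
    num_strokes = random.randint(1, max_strokes)
    character = []
    for _ in range(num_strokes):
        # pen lift: a new stroke may start anywhere on the canvas
        x, y = random.random(), random.random()
        stroke = []
        for _ in range(random.randint(1, max_substrokes)):
            # each substroke continues from where the last one ended
            nx, ny = random.random(), random.random()
            stroke.append(((x, y), (nx, ny)))
            x, y = nx, ny
        character.append(stroke)
    return character

char = sample_character()
print(f"{len(char)} stroke(s), "
      f"{sum(len(s) for s in char)} substroke(s) in total")
```

In the real system, inference runs this kind of generative process "in reverse" to explain a new character with the most plausible stroke program.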
The results seems to speak for themselves—and the researchers aren’t too shy about hiding their excitement. “In the current AI landscape, there’s been a lot of focus on classifying patterns,” says Josh Tenenbaum, one of the researchers, in a press release. “But what’s been lost is that intelligence isn’t just about classifying or recognizing; it’s about thinking. This is partly why, even though we’re studying hand-written characters, we’re not shy about using a word like ‘concept.’ Because there are a bunch of things that we do with even much richer, more complex concepts that we can do with these characters. We can understand what they’re built out of. We can understand the parts.”
Answer: The grids produced by the AI were, by row: 1, 2, 1; 2, 1, 1.
Sleep deprivation impairs many physiological functions, including immune regulation and metabolic control. Rats that are kept awake artificially die after two weeks. Upon the onset of total deprivation, the body temperature rises, followed by a fall. Sleep-deprived rats get skin lesions. The rats "eat voraciously, but lose weight and develop malnutrition-like symptoms". Hormone levels change to mimic a state of extreme stress.
These changes are linked to eating and body weight, and as the time of deprivation continues, appetite increases, weight loss continues, and body temperature continues to fall even as the animals try to keep themselves warm. By the time they die the rats have lost a lot of weight.
Humans deprived of sleep in laboratory conditions report, even after one night, feeling cold and hungry. It is worth noting that during the REM stage of normal sleep, the body does not thermally regulate. Scientists have determined that in human sleep deprivation the decline in body temperature is 0.5° C. There is an increase in white blood cell counts and a general slowing of bodily functions.
Sleep deprivation also leads to systemic inflammation at a low level. This type of low-level inflammation is similar to that found in cardiovascular disease, diabetes, etc. and there could be a connection.
Sleep and emotion interact, as most psychiatric conditions are associated with sleep disorders. There are suggestions that even mild sleep deprivation makes emotionally healthy people cranky.
Does sleep deprivation make you crazy? No; in a clinical sense, sleep deprivation does not lead to schizophrenia or mental illness. Visual misperceptions are common among overly sleepy people, but these are not hallucinations or waking dreams, as commonly believed, and auditory hallucinations do not occur in sleep-deprived people any more than in rested people. (An article in New Scientist magazine suggests that bad sleep habits can indeed cause mental illness, or something like it. If this is true, sleep problems would be a cause, in addition to a symptom, of mental illness.)
Some researchers think that even short-term sleep loss causes glucose intolerance and hence has an effect on the body similar to a pre-diabetic state.
A study published in the Journal of Neuroscience described the reversal of sleep deprivation effects in sleep-deprived monkeys by administration of the brain chemical orexin. Scientists gave the monkeys orexin either by injection into the bloodstream or through a nasal spray. The monkeys' cognitive skills improved. It is not clear whether this will help lead to a treatment for humans.
Scientists have also found that flies with extra dopamine receptors can better withstand sleep deprivation.
Evidence for the brain's need to sleep comes from work in sleep-deprived rats in which scientists found sections of the brain went into a temporary sleep-like state. Sections of the cortex entered a state showing brainwaves like those seen in Stage 1 and 2 sleep, "seemingly at random" according to a report. (http://www.nih.gov/researchmatters/may2011/05022011sleep.htm)
One problem with sleep deprivation experiments is that the subjects are well protected, made comfortable, and kept at a low stress level, while real people with sleep deprivation lead daily lives that may contain stress. Although total sleep deprivation (no sleep) happens in extreme circumstances, chronic, partial deprivation is much more common in day-to-day life.
Migrating birds flying for weeks at a time and newborn whales (and their nursing mothers) can forgo sleep altogether without negative effects or need for catching up. While there is room for flexibility in humans, this type of suspension of the need to sleep has not been observed. |
DCN - Data-link Layer Introduction
The data link layer is the second layer of the OSI layered model. It is one of the most complicated layers, with complex functionalities and responsibilities. The data link layer hides the details of the underlying hardware and presents itself to the upper layer as the medium to communicate.
The data link layer works between two hosts which are directly connected in some sense. This direct connection could be point-to-point or broadcast. Systems on a broadcast network are said to be on the same link. The work of the data link layer tends to get more complex when it is dealing with multiple hosts on a single collision domain.
The data link layer is responsible for converting the data stream into signals bit by bit and sending it over the underlying hardware. At the receiving end, the data link layer picks up data from the hardware in the form of electrical signals, assembles it into a recognizable frame format, and hands it over to the upper layer.
Data link layer has two sub-layers:
Logical Link Control: It deals with protocols, flow-control, and error control
Media Access Control: It deals with actual control of media
Functionality of Data-link Layer
Data link layer does many tasks on behalf of upper layer. These are:
The data-link layer takes packets from the network layer and encapsulates them into frames. Then, it sends each frame bit by bit over the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
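As a rough illustration of this encapsulation step, here is a minimal sketch. The field layout below is invented for the example and does not match any real protocol such as Ethernet:

```python
def encapsulate(packet: bytes, dest: bytes, src: bytes) -> bytes:
    """Wrap a network-layer packet in a toy layer-2 frame:
    [dest addr (6)] [src addr (6)] [length (2)] [payload]."""
    assert len(dest) == 6 and len(src) == 6
    return dest + src + len(packet).to_bytes(2, "big") + packet

def decapsulate(frame: bytes):
    """At the receiving end: split the frame back into its fields."""
    dest, src = frame[:6], frame[6:12]
    length = int.from_bytes(frame[12:14], "big")
    return dest, src, frame[14:14 + length]

frame = encapsulate(b"IP packet bytes", b"\xaa" * 6, b"\xbb" * 6)
print(decapsulate(frame)[2])  # b'IP packet bytes'
```

The receiver performs exactly the inverse operation, which is why both sides must agree on the frame format in advance.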
Data-link layer provides layer-2 hardware addressing mechanism. Hardware address is assumed to be unique on the link. It is encoded into hardware at the time of manufacturing.
When data frames are sent on the link, both machines must be synchronized in order for the transfer to take place.
Sometimes signals encounter problems in transit and bits get flipped. The data link layer detects such errors and attempts to recover the actual data bits. It also provides an error-reporting mechanism to the sender.
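A minimal sketch of one such error-detection scheme is a single even-parity bit. Real links use stronger codes such as CRCs; this toy version only catches an odd number of flipped bits:

```python
def add_parity_bit(bits: list) -> list:
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_parity: list) -> bool:
    """True if no error is detected (even number of 1s overall)."""
    return sum(bits_with_parity) % 2 == 0

frame = add_parity_bit([1, 0, 1, 1, 0, 1, 0])
print(check_parity(frame))   # True: frame arrived intact
frame[2] ^= 1                # a single bit flips in transit
print(check_parity(frame))   # False: error detected
```

On detecting an error the receiver typically discards the frame and, depending on the protocol, asks the sender to retransmit it.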
Stations on the same link may have different speeds or capacities. The data-link layer provides flow control, which enables both machines to exchange data at the same speed.
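One classic flow-control discipline that keeps a fast sender from overrunning a slow receiver is stop-and-wait: send one frame, wait for its acknowledgement, then send the next. The sketch below is a deliberately simplified model; the `channel_ok` callback stands in for the physical link and its acknowledgements and is not part of any real API:

```python
def stop_and_wait_send(frames, channel_ok):
    """Toy stop-and-wait: send one frame, wait for its ACK before
    sending the next; retransmit when the ACK does not arrive.
    `channel_ok(attempt)` returns True when transmission attempt
    number `attempt` (and its ACK) got through."""
    delivered, attempts = [], 0
    for frame in frames:
        while True:
            attempts += 1
            if channel_ok(attempts):
                delivered.append(frame)  # ACK received, move on
                break
            # no ACK: sender times out and retransmits the same frame
    return delivered, attempts

# A lossy channel that drops every third transmission:
got, tries = stop_and_wait_send(list("abc"), lambda n: n % 3 != 0)
print(got, tries)  # ['a', 'b', 'c'] 4
```

The price of this simplicity is throughput: the link sits idle for a full round trip per frame, which is why sliding-window schemes were developed.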
When hosts on a shared link try to transfer data, there is a high probability of collision. The data-link layer provides mechanisms such as CSMA/CD to let multiple systems access the shared medium.
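The collision-recovery half of CSMA/CD can be sketched as binary exponential backoff: after each successive collision a station waits a random number of slot times drawn from a doubling range. The function below is our own simplification; classic Ethernet caps the exponent at 10 and abandons the frame after 16 collisions.

```python
import random

def backoff_slots(collision_count: int, max_exponent: int = 10) -> int:
    """Binary exponential backoff: after the n-th collision, wait a
    random number of slot times chosen uniformly from
    0 .. 2**min(n, max_exponent) - 1."""
    k = min(collision_count, max_exponent)
    return random.randrange(2 ** k)

# Stations that keep colliding back off over increasingly wide,
# randomly chosen intervals, making a repeat collision less likely.
for n in range(1, 5):
    print(f"after collision {n}: wait up to {2**n - 1} slot(s), "
          f"e.g. {backoff_slots(n)}")
```

Because each station picks its delay independently at random, the chance that two stations choose the same slot shrinks as the range doubles.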
Point Pelee National Park of Canada
Bird Numbers at Point Pelee
Arrival of Migrants
The total number of bird species recorded at Point Pelee is 372, of which at least 340 have been recorded during the spring migration period. The stream of birds in the spring is not a steady flow from the south. The birds usually arrive in intermittent waves, a pattern unique to eastern North America. In some years these are well marked, but in others the fluctuations in numbers and variety are so meagre that a wave is difficult to detect. A "wave" occurs as a result of a warm weather front advancing from the south or southeast meeting a cold weather front from the north or northwest. Two situations will cause the birds to descend. One is when the two fronts meet at ground level. The other is when a warm front in which migrating birds are flying overrides a cold front. The rising warm air becomes cooler with increasing altitude until it is finally too cold for the birds and they descend.
If these nocturnal (night-time) migrants find themselves over Lake Erie near sunrise they must continue onwards or drown. After flying perhaps hundreds of kilometres in one night, it is this extra 30 to 40 kilometres across the lake that really demands their last strength. This explains why exhausted birds are sometimes found at the tip of the Point. A similar situation, but on a larger scale, occurs when migrants cross the 800 to 1000 kilometres of the Gulf of Mexico. If the weather is good they continue inland in one continuous flight without stopping, but with a north wind and rain they descend on the coast in great numbers, often in an exhausted state.
What everyone hopes for in the spring is a major wave with a "grounding" of migrants. An incredible grounding of migrants occurred on May 9 to 12, 1952.
Estimates of some of the birds present included 1 000 black-and-white warblers and 20 000 white-throated sparrows. Another wave occurred when 3 000 northern orioles were engaged in visible reverse migration off the Tip, while the day's tally for chimney swifts was 900. On May 15, 1978, in just the Tip area of the park, there were 80 yellow-billed cuckoos, 70 eastern wood-pewees, 250 scarlet tanagers and much more.
Other "big days" for certain species are tundra swan (2500), red-breasted merganser (100 000), whimbrel (500), northern flicker (250), bank swallow (12 000), white-eyed vireo (50), hooded warbler (18) and kentucky warbler (13). |
A robotic arm, sometimes referred to as an industrial robot, is often described as a ‘mechanical’ arm. It is a device that operates in a similar way to a human arm, with a number of joints that either move along an axis or rotate in certain directions. In fact, some robotic arms are anthropomorphic and try to imitate the exact movements of human arms. They are, in most cases, programmable and used to perform specific tasks, most commonly for manufacturing, fabrication, and industrial applications. They can be small devices that perform intricate, detailed tasks, small enough to be held in one hand, or so big that their reach is large enough to construct entire buildings.
Robotic arms were originally designed to assist in mass production factories, most famously in the manufacturing of cars. They were also implemented to mitigate the risk of injury for workers, and to undertake monotonous tasks, so as to free workers to concentrate on the more complex elements of production. These early robotic arms were mostly employed to undertake simple, repetitive welding tasks. As technologies develop, in particular robotic vision and sensor technology, the role of robotic arms is changing. This article provides a brief overview of Robotic Arms in manufacturing.
History of Robotic Arms in Manufacturing
It is widely understood that the first programmable robotic arm was designed by George Devol in 1954. Collaborating with Joseph Engelberger, Devol established the first robot company, Unimation, in the USA in 1956. Then in 1962 General Motors implemented the Unimate robotic arm in its assembly line for the production of cars. In 1969, Victor Scheinman, a mechanical engineer at Stanford University, developed a robotic arm that was one of the first to be completely controlled by a computer. This industrial robot, known as the Stanford Arm, was the first six-axis robotic arm and influenced a number of commercial robots that followed. The Japanese company Nachi developed its first hydraulic industrial robotic arm in 1969, and after this a German firm, Kuka, pioneered the first commercial six-axis robotic arm, called Famulus, in 1973.
Predominantly, these robots were utilised for spot-welding tasks in manufacturing plants, but as technology developed, the range of tasks that robotic arms could perform also expanded. These advances include the increasing variety of end-of-arm tooling that has become available, meaning that robotic arms can perform a wide range of tasks beyond welding depending on the tools attached to the end of their arms. Current innovations in end-of-arm tools include 3D-printing tool heads, heating devices to mould and bend materials, and suction devices to fold sheet metal. You can read more about advances in end-of-arm tooling in the article on designrobotics.net, Design Robotics in Architectural Fabrication.
Advancements in Sensors and Vision Robotics
A very important advancement in the use of robotic arms is the development of sensors. Victor Scheinman developed the Silver Arm in 1974, which performed small-parts assembly using feedback from touch and pressure sensors. Although early robots had sensors to measure the joint angles of the robot, advances in robotic sensors have had a significant impact on the work that robots can safely undertake. Here is a summary of some of these sensors and the affordances they provide.
- 2D Vision sensors incorporate a video camera which allows the robot to detect movement over a specific location. This lets the robot adapt its movements or actions in reference to the data it obtains from the camera.
- 3D Vision sensors are a new and emerging technology that has the potential to assist the robot in making more complex decisions. This can be achieved by using two cameras at different angles, or by using a laser scanner to provide three-dimensional views for the robot.
- A Force Torque sensor, helps the robotic arm to understand the amount of force it is applying and allows it to change the force accordingly.
- Collision Detection sensors provide the robot an awareness of its surroundings.
- Safety Sensors are used to ensure people working around the robot are safe. The safety sensors alert the robot if it needs to move or stop operating if it senses a person within a certain range.
There are many other sensors available, including tactile sensors and heat sensors. The benefit of these different types of sensors for robotic arms is that they provide the robot with detailed and varied information from which it can make decisions. The more information the robot has available, the more complex decisions it can make. Ultimately the purpose of these sensors is to help make working environments around robots safe for people.
Design Robotics Research Project
Vision technology makes working with and alongside robots safer, but it also assists robotic arms in making complex decisions during manufacturing. This enables mass-customisation manufacturing: creating high volumes of bespoke, customisable items for mass consumption while keeping fabrication costs low.
The Design Robotics project is researching how vision technology and robotic arms can improve manufacturing outcomes for small to medium enterprises fabricating bespoke, one-off items. Working with Urban Art Projects, this research is being tested through the manufacture of large-scale, unique public art projects.
Evolution of Robotic Arms: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4247431/
History of the Kuka Arm: https://www.kuka.com/en-au/about-kuka/history
History of Nachi: http://www.nachirobotics.com/company-information/natchi-history/
Robots and their Arms: http://infolab.stanford.edu/pub/voy/museum/pictures/display/1-Robot.htm
History of the Robotic Arm: source: http://iptmajorprojectjacobheffernan.weebly.com/history-of-the-robotic-arm.html
Seven Types of Industrial Robot Sensors: https://blog.robotiq.com/bid/72633/7-Types-of-Industrial-Robot-Sensors
Working alongside robotics (interview with Peter Corke) http://media.theaustralian.com.au/poweringaustralia/robotics/index.html |
Mahatma Gandhi was one of the most famous men in India. He was born on October 2, 1869 in Porbandar, India. At age thirteen, Gandhi married Kasturba; they later had four children. Gandhi then went to London, England to study law. In 1891 he returned to India to practice law in the courts but failed when he was unable to speak in front of a judge. In 1893 Gandhi went to South Africa on a year's contract. During this period, the British had South Africa under their control. Gandhi stayed in Africa twenty-one years trying to liberate the abused Africans using nonviolent methods.
In 1915, Gandhi moved back to India, where the British had taken over, just as in Africa. He led campaigns that started the Indian Nationalist Movement. Gandhi thought that for a just cause it was honorable to go to jail, and he did. For seven years he was in jail for his political campaigning, two of them spent leading the revolt from inside the jail. In 1947 India gained its independence from the British Empire by nonviolent methods.
After the British left, the country of India turned against itself as two major religious communities split into rioting groups. The land of India soon split and Pakistan was made. In 1948 Mahatma Gandhi went on a fast to stop the fighting, and in less than five days it stopped. Twelve days later, a young man bowed to Gandhi and shot him three times. "Ram", an Indian name for God, was the last word Gandhi said before he died. Gandhi's body was cremated and the ashes were scattered at sea. People say that the man did not like Gandhi's methods and wanted the violence. Gandhi was one of the most unique people in the world; he stood out when no one else would, and that is what makes him a hero.
The students aged 8-9 created a comic using ICT. For this activity, countries joined in pairs (for instance Greece-Spain, Italy-Portugal…) and one group of three. Each group of countries thought of a common topic, and students researched the topic on the Internet. We then agreed on which country would start the comic story and which would finish it, on the format, and on the approximate number of panels. Once completed, it was sent to our colleagues in Poland, who made the final product: a digital book with all the comics. Our aims were:
- To do easy stories using comic format.
- To know the characteristics of the comics.
- To work cooperatively on a comic.
In Spain this activity was carried out by students 8-9 years old. It allowed us to work on all competences. Our students have understood what a comic is, designed stories in comic format (writing, scripting and image creation) and worked together with students from another country. They also practised reading and writing skills in both languages, English and Spanish. The experience was very rewarding and motivating for them. They enjoyed the activity, used their imagination and creativity, learned a little about each other's culture, and worked on the values of respect, tolerance, friendship and cooperation. The aims were achieved 100%.
Eczema is a generalized term used for various inflammatory skin conditions. It is also called "dermatitis", which means superficial inflammation of the skin (epidermis), and it can be acute, chronic or recurring. Eczema is characterized by various reactive patterns of the skin, as discussed below.
Symptoms of Eczema
• The rash appears later and is red and bumpy.
• The rash itches or burns.
• If it is scratched, it may ooze and become crusty.
• In adults, chronic rubbing produces thickened plaques of skin.
• Some people develop red bumps or clear fluid-filled bumps that, when scratched, add wetness to the overall appearance.
• Painful cracks can develop over time.
• Although the rash can be located anywhere on the body, in adults it is most often found on the neck, flexures of the arms (opposite the elbow) and flexures of legs (opposite the knee). Infants may exhibit the rash on the torso and face. As the child begins to crawl, the rash involves the skin of the elbows and knees. The diaper area is often spared.
• The itching may be so intense that it interferes with sleep.
• Dryness, flakiness, heat are associate symptoms.
• In infants, eczema typically occurs on the forehead, cheeks, forearms, legs, scalp, and neck. Affected areas usually appear very dry, thickened, or scaly. Sometimes there will be hyperpigmentation. In children and adults, eczema typically occurs on the face, neck, and the insides of the elbows, knees, and ankles.
• Chronic scratching causes the skin to take on a leathery texture because the skin has thickened (lichenification).
Role of Homoeopathy in Eczema:
Homoeopathy can help in eczema by decreasing the susceptibility to various allergens and irritants. The extent of results depends upon the type of eczema and the kind of lifestyle improvements the patient is able to make. A strong family history of atopy, asthma or allergy may become a hurdle in the response to treatment in the initial phase of the illness, but these influences can be reduced with homoeopathic medicines over a period of time. A constitutional homoeopathic treatment approach is considered the best way to treat eczema permanently. Homoeopathy has been found useful in all types of eczema.
The Engineer’s Kitchen: Molecular Gastronomy
Engineering The Way We Cook
Most people in developed countries still prepare and cook their food just as their ancestors did. While it seems like there is a new must-have kitchen gadget coming out every time the television comes on, the core of our cooking tools has been around for centuries. One's kitchen still contains a variety of pots and pans, whisks and colanders, to name a few of these tools. So if the ingredients and the tools have not changed all that much, why is the food we create today so different from that of days past?
Molecular gastronomy, or the science behind the cooking, is very much a large contributor to how we have changed the way we prepare food today. By understanding the chemistry and physics of how food is cooked, we can manipulate and alter food preparation to create new tastes and textures. While it is not necessarily a new and exciting discipline, molecular gastronomy has continued to become increasingly popular among many of the foodies today. The term itself originally referred only to the science, but has since been used as a blanket term to include new cooking styles.
What we see happening today in relation to cooking techniques, is a new wave of chefs applying science and physics to the repertoire. By using new tools and techniques, as well as a slew of new ingredients, they can create almost anything imaginable. Listed below are some of the techniques used today.
- Flash Freezing
By utilizing liquid nitrogen, chefs are able to quickly freeze the outsides of certain foods while sometimes leaving a liquid center. Another tool is known as the anti-griddle: a metal surface kept at about -30°F by pumping refrigerant through a compressor, which can almost instantly turn liquids into solids.
Spheres are created by making “liquid foods”, such as purées from peas or fruits. The purées are then mixed with sodium alginate and dropped into a bath of calcium chloride to create spheres that look and feel like caviar.
- Foams / Froths
These are simply sauces that are turned into a froth utilizing a whipped cream style canister with a stabilizer such as lecithin. Lecithin is a fatty substance that occurs in animal or plant tissue.
- Edible papers
Homaro Cantu of Moto restaurant was the innovator behind making edible paper from soybean and potato starch. He then uses an ink jet printer that has been adapted to use inks made from fruit and vegetables.
As more attention has been paid to the sciences behind cooking, many of the rules have been proven to be false. There are many “do’s and don’ts” that are not applicable anymore, but still show up in just about every recipe or cookbook. A lot of these myths came from pointless tasks that get added into recipes and handed down, generation after generation. While some did have a purpose at one time, others have just been considered the norm, and been kept alive. A couple of examples of how the science has debunked the myth are:[ii]
- Adding oil when cooking pasta to keep it from sticking.
Since oil is less dense than water, it will simply float along the top before it gets anywhere near the pasta. On the other hand, by adding a weak acid such as lemon juice, the breakdown of the starches is slowed allowing for slightly firmer pasta that will not have the tendency to stick.
- Adding salt to the water when cooking green vegetables will help them maintain a bright color.
Some of the divalent salts that people used about 150 years ago might have had this effect by fixing chlorophyll’s bright green color, but the salts used today are monovalent and would not make such a difference.
As the cooking craze continues to grow throughout the kitchens of the world, I believe the science and technology behind molecular gastronomy will only become more relevant as we move forward into the 21st century. With so many advances, it will be amazing to see what is yet to come.
Introduction
L01: Principles of basic economy
L02: Applying economic principles
L03: Applying analytic skills
L04: Development of Independent Learning and Group Work Skills
L05: Developing communication skills
Conclusion
References
Macroeconomics covers the economy of a nation and also the global economy. Moreover, macroeconomics is a wide area of study and research, with the aim of understanding changes and variations in national income. This research is organised around several learning outcomes. The first is based on the principles of basic economics and how they are illustrated by the movie The Wolf of Wall Street. The second covers the principles used to solve problems in the economy. The third concerns the development of analytical skills to solve economic problems. Furthermore, the development of independent learning and teamwork to solve problems, as shown in The Wolf of Wall Street, is discussed, and lastly, the use of verbal and non-verbal skills to communicate a basic understanding of economic principles.
Economics is the social science that analyses and explains the principles of production, consumption and distribution of goods and services. The principle of basic economics is to teach people to make the right choices in times of scarcity and to cope with pressure (Rios, McConnell & Brue, 2013). It is the study of the proper use of resources under scarcity. Basic economic principles divide into two branches: microeconomics and macroeconomics. Microeconomics deals with individual choices, such as those of businesses and households in times of scarcity, and the economic consequences of these decisions. Macroeconomics, on the other hand, examines the economy as a whole, including employment, national economic conditions and inflation. The principles of basic economics state that every commodity comes with a cost and that the choices people make for good reasons are what allow the economy of a country to grow. Incentives matter because they create a socio-economic system in a country that influences people's choices (Fisher, Kelman & Nan, 2013). Economic principles also explain that not only the country but also its people benefit from voluntary trade. The economic condition of a country, for example the value of goods and services, is directly affected by the choices people make. The point of examining economic principles is to understand the decision process behind allocating the available resources, since wants are unlimited while resources are limited.
To understand the basic principles of the economy and solve the problems residing within the economy application of economic principles are explained below.
Adopt different policies: Macroeconomics affects everyone in a country in one way or another. In times of scarcity people lose jobs and become unemployed, and inflation reduces the value of money (Demyanov & Pallaschke, 2013). Mass unemployment causes social fragmentation, so it is very necessary for people to adopt different policies to protect themselves from unemployment.
Economic problems arise from the scarcity of resources, which are insufficient and finite relative to the wants and needs of the people in society. The problem is to allocate the scarce resources so as to produce the goods most useful to the people living in the society ("Principles of Economics |", 2017). By applying the analytic skills shown in The Wolf of Wall Street, economic problems can be addressed in ways such as the following.
Examine and find potential solutions: Finding a possible solution is very necessary because when a macroeconomic crisis happens, not only the poor but the middle class and the rich too get squeezed. So during such times it is important for everyone to think and buy products carefully, without spending all their savings on goods that are scarce in the market.
Independent learning helps in solving economic problems, as knowledge is gained through a person's own efforts and skills are developed to analyse and critically evaluate that knowledge. It also helps in keeping control over the steps needed to achieve the desired goals (Taylor, 2013). Group-work skills, on the other hand, can bring both advantages and disadvantages while solving problems. Group work involves different people, and this means different types of information. The general problems in macroeconomics deal with production, and groups must work together to overcome these problems. Group work clarifies the different roles of the people in the group along with their behaviour towards the problem (Nicholson & Snyder, 2014). Task-related roles are to be maintained, while negative or poorly executed roles are to be identified and discussed so they can be improved. This situation does not arise in independent learning, because the person carries out the research alone, gains knowledge, and creates small objectives to achieve in order to reduce economic problems. Additionally, it can be stated that developing both skills, independent learning as well as group work, is a better way to solve economic problems. Group work also has some disadvantages, such as minor or major conflicts, which can at some point prove harmful to the group's achievement of its goal. These issues were presented in The Wolf of Wall Street movie ("The Wolf of Wall Street Official Trailer", 2017).
Verbal and non-verbal communication skills are an important part of the development of the economy, as the film The Wolf of Wall Street illustrates well. Improving communication builds confidence, since questions must be answered for ideas to circulate (Dahl, 2017). Communication also demands full attention for understanding. The film encourages listening rather than merely hearing, which supports and motivates the speaker. Judgements about, or reactions to, improper behaviour should be reported to the team leader or the organisation as early as possible. Other important skills are staying calm in pressure situations and checking for accuracy, which shows attentiveness to the speaker. Non-verbal skills are likewise an important way to encourage understanding of economic principles: written letters or bulletin boards can display the principles necessary for developing communication skills, which is essential for explaining economic principles. There are many different methods of encouraging and explaining economic principles that fall under verbal and non-verbal communication (Menon, 2014). The Wolf of Wall Street was built largely on verbal communication and shared the principles that helped move the company forward economically.
It can be concluded from the above report that macroeconomics is an important part of the study of the economics of a country or an organisation. With the help of the case study, the film The Wolf of Wall Street, it was shown how problems in the economy can be overcome. Different methods were provided to understand and explain the problems and their solutions, and the learning outcomes were designed to relate these situations to real-life experience, each serving a distinct purpose of understanding.
1. Acoustic Poem - Level 1
2. Advice Letter - Level 1
An Acrostic Poem is a poem where the first letter of each line spells a word and each line gives details and helps explain the chosen word.
Your Task: Use your notes and the textbook to create an acrostic poem for the term your teacher assigns. If you are choosing your own term it must be no less than 7 letters.
Poems should show
Explain something from the unit, and
How do we learn about the past?
Investigating ancient ruins,
Translating foreign languages,
Observing human behavior, and
Reading primary sources, but we can’t time travel…
Your task: Write a personal letter to someone in history giving them advice on how to deal with a historical situation. Your letter should include any key terms of people involved with the event in some way.
1. Address your letter properly. “Dear Charlemagne,”
2. Briefly explain the situation. (1 paragraph)
3. Give advice on how the person can deal with the problem. (1 paragraph)
4. You are writing this as if you were giving a friend advice, try to be helpful!
5. Close the letter properly. “Sincerely, your friend”
3. Character Clash - Level 2
4. Dear Diary - Level 2
Your Task: Complete the following sentences for each person or group of people assigned by your teacher (you will be making more than one poem). When writing these poems pay special attention to the different points of view held by each person or group.
Person/Group 1 Person/Group 2
I am / We are… I am / We are...
I / We believe… I / We believe...
I / We wonder... I / We wonder...
I / We see… I / We see...
I / We hear… I / We hear...
I / We feel… I / We feel...
I / We touch… I / We touch...
I am / We are… I / We are...
I / We worry about… I / We worry about...
I / We cry because… I / We cry because...
I / We understand… I / We understand...
I / We say… I / We say...
I / We dream… I / We dream...
I / We hope… I / We hope...
I am / We are… I / We are...
Your task: Write a series of diary entries as if you were a person living through a historical event. Keep in mind this is not a report on the event but the thoughts and feelings of someone living through it. Your diary should include any key terms or people involved with the event in some way.
1. If one has not been assigned, choose what type of person you will be (rich, poor, king, peasant, soldier, etc)
2. Write a one-paragraph entry about the beginning of the event. Include the date. Briefly described how the event began (if your person would know).
3. Write a one-paragraph entry about the middle of the event. Include the date. Mention whether things are better than the beginning or not.
4. Write a one-paragraph entry about the end of the event. Include the date. Describe your character’s feelings about the event now that he/she has lived through the whole thing.
5. Game On - Level 1
Your task: Create a game that could be played in the civilization we just studied. Be sure to think about what materials would be available and the geography of where you would play the game. There’s a reason ice hockey is popular in Canada!
1. At the top of your paper give your game a name and write where it will be played.
2. Next, write out a basic description of the game and the basic rules. Make sure to include all the rules needed to play the game. (Who goes first? How do you keep score? When does it end? Etc.)
3. Describe the equipment you would need and what it would be made out of.
4. Draw a quick sketch of your game being played. This can be just a very simple drawing.
Lizard Darts (African Desert)
A very simple game you can play with your friends. Just find a lizard in the sand and throw it tail-first at a tree. If it sticks in the tree you get a point!
· 10 minute time limit (or quit when it gets too hot)
· Everyone plays at the same time, as soon as you catch a lizard you can throw it. No need to take turns.
· Each lizard that sticks is worth 1 point.
· If you throw it head-first you are disqualified (that would just be mean!)
· No special equipment is required. You can make gloves out of… well, nothing I guess, you’re in the desert. Just hope the lizard doesn’t bite!
6. HiStory - Level 3
Your task: Write a fictional short story that includes vocabulary and historical events from a given unit. This story can be set in any time period and be about anything (if you want to write about time traveling vampire-robot-ninja-pirates, go for it!) but must be a complete story with setting, characters and conflict.
Your HiStory must include:
o At least 5 paragraphs including a beginning, middle and end.
o At least 10 key terms or ideas from the historical unit.
o References to these terms that help your reader understand what they are
and what they mean. (Naming one of your characters “Constantine” does
not count as using the key term!)
o All the key elements of a story (characters, setting, conflict, detail)
o A general storyline that makes sense (fantasy is fine but the story itself
should have a solid plot)
o A rough draft
o A final draft done in ink with proper spelling, grammar and punctuation.
7. Instant Messages - Level 2
8. Introduction Speech - Level 2
Your task: Write out an instant message (IM) conversation between two historical figures about a given topic. It should be written in netspeak (lol, brb, ttyl, and smilies for example) and should show at least to some degree the opinions of each figure.
Your IM conversation must:
-include online names for each figure
-have at least 10 lines from each figure. (numbered)
-discuss a historical event or topic in detail (“omg that’s 2 bad” isn’t detail!)
-give a sense of each figure’s opinions about the event. (It should read like a conversation not like one voice arguing with itself.)
-use netspeak when appropriate.
-use smilies when appropriate.
1. ImTheMan (Cassius): Yo Brutus, so are you in with us or not?
1. EtTuBrute (Brutus): I dunno C. Caesar’s like a dad 2 me. :-(
2. ImTheMan: IDC if he really iz ur dad. We’ve been over this. He’s gotta go before he takes over all of Rome. ><
2. EtTuBrute: Ya but do you rlly think he’ll do that? He’s been so good at fighting off Rome’s enemies.
3. ImTheMan: Dude that’s the prob. The people are all like crazy in luv with him and they’ll do anything he sez.
3. EtTuBrute: Ya so wut? That doesn’t mean we have 2 kill him rite? o.O
4. ImTheMan: Of course we do. We cant just like vote him out or something the peeps will be all like “no you didn’t!” and they’ll be mad at us.
4. EtTuBrute: Ya, I guess ur right we gotta do this. This aint about us though right C? This is about the people of Rome. =D
5. ImTheMan: Sure w/e Brutus this is about keeping the Senate in power wich helps the people. They need us!
5. EtTuBrute: k, so the plan is to all get him when he stands up ya?
6 ImTheMan: Yep, right there in the Senate so the people know this was about keeping the Republic. Well all do it too so they won’t be all mad at just one of us.
Continue until you have 10 lines for each!
Your task: Often when a famous person is about to give a speech they are first introduced by someone else with a short speech. Write and recite a short introduction speech for a famous figure.
Your speech must:
-Be about 1 minute in length (approximately 2 paragraphs written should do it).
-Include key details and information about the figure.
-Be inspiring (if you are introducing Constantine you’d likely want to leave out the fact that he was suspected of killing his wife.)
-Make the person sound important.
Ladies and gentlemen of Rome, it is my pleasure to introduce to you today a man who rose up in our time of need. This is a man who can and will save the Roman Empire. He is not just a squabbling senator; no, he is a warrior! What other man do you know who killed an elephant?! This is the man who led the Roman army to victory time and time again. He conquered Gaul and grew our mighty land all the way to the Atlantic Ocean! This is the man who out of the goodness of his heart adopted his nephew Octavius as his own son.
Today I am proud to introduce the only Roman leader who truly cares about you – the people. He got rid of the awful, wasteful Republic and replaced it with the new and mighty Empire and the one person who could hold it all together. Ladies and gentleman, here he is, JULIUS CAESAR!!!
9. Judgment - Level 1
Your task: Create a T-chart to evaluate whether a historical figure is good or bad.
1. Make a T-chart listing at least 7 things the person said or did and list them as either good or bad. You must have at least 1 thing in both columns (do not just list 7 good things or bad things.)
2. Write one paragraph explaining whether this person did more good or more bad. Give reasons from your chart and be sure to explain.
10. Letters Home - Level 3
Your task: In the past people rarely left the village in which they were born. They would live, work and die without ever travelling more than a couple miles. Imagine you were one of the lucky few people from the unit who was able to travel away from home. Maybe you were going off to war, to visit a major city or were going on a trade journey. Write a series of letters back to your family to tell them about what you see and what happens to you over a period of time.
1. Write six letters to your family about different things you see and experience on your journey.
a. Each letter must be at least a complete paragraph and each on their own paper.
b. Each should be formatted like a proper letter (dear X, sincerely Y, etc.)
c. Each of the six letters should cover a different topic from the culture, here are some ideas you might use:
i. A great building you see.
ii. An invention you might come across.
iii. A religious festival or ritual you get to participate in.
iv. A class of person you’ve never seen before (like a noble or warrior.)
v. An interesting geographic feature like a specific mountain or river.
vi. A battle you participate in.
vii. A game or activity you witness or participate in.
d. Remember, you are seeing these things for the first time and your family has never seen them so you’ll need to use great detail.
2. Create a cover sheet that includes your name and include it with your letters.
11. Play - Level 3
Your task: Write and act out a 3-5 minute play about a specific historic event. You will include a background for scenery (either drawn or projected) and any necessary props.
1. Write a script that includes all lines of dialogue, narration, scene descriptions and stage directions.
a. Your play must be about a specific event in history - get approval of your idea from your teacher before you begin writing.
b. It cannot include any action scenes. You may set up your scene to take place immediately before or after a battle but do not waste my time running around the room with paper swords.
c. Some jokes are ok but your focus should be on providing information and feeling.
d. When you perform it must be 3 to 5 minutes long - that usually means 3-5 pages of a written script.
2. Create the props, costumes and background for your play.
a. Props should be period accurate. I don't want to see cannons in a play about Rome.
b. The background can be created on butcher paper or projected on the board from the computer.
c. Costumes should at least make it seems like I'm not looking at 3 kids from the 21st century.
3. Perform your play.
a. You may not have your scripts with you - memorize your lines!
b. Plays less than 3 minutes will not receive credit.
12. Remote Control - Level 2
Your task: Create a 3-night Prime-Time TV schedule for a channel about the current unit by coming up with TV shows that would reflect their culture.
1. Create a TV Guide chart on your own paper using the template below.
a. Create a title for your channel in the top section.
b. Create six TV shows that would be on your channel.
c. For each show indicate what type of show it would be (game show, reality, comedy, etc.)
2. Write show descriptions for 4 of your shows you created above. These descriptions must be at least three sentences and include historical information and facts related to the show. An example is provided below.
Ridezz: Chariotz (Reality): Join our host Ben Hur as he travels Rome looking for the hottest Chariots. This week Ben finds a tricked out racing chariot actually used in the Circus Maximus. You won’t believe the amazing (and maybe illegal) additions made to this ride!
13. Social Network - Level 3
Your task: Create the layout of a social network page for a historical figure. You do not actually have to create the page online, just a model of what it would look like drawn out on a piece of paper.
Create each part listed below as a rough draft. Number them on your paper.
1. The person’s name and a nickname that shows what they are known for.
2. The person’s picture.
3. The time period and location the person lived in.
4. An “about me” section that summarizes the person’s life in at least 2 paragraphs.
5. 1 “blog” where the person writes his/her opinion about an event that happened during his/her lifetime in at least 2 paragraphs.
6. Comments from at least 2 “friends” talking about this person’s life.
7. A status update showing the most important thing the person has done. “Charlemagne is uniting Europe”
8. A “likes” section detailing what things this person would enjoy (books, music, activities, etc.)
9. Show your paper to your teacher for approval. Make sure each piece is numbered.
10. Create a final draft on a clean sheet of white paper or on the computer. It should be neatly laid out, organized and colored.
15. Time Machine - Level 2
Your task: Imagine you (yes you) were transported back in time to the unit we are studying. Read the questions below and write your answers as if you were really there. You will have to do some research either in the books or on the computer to answer many of these.
Instructions: Answer the questions below on your own paper in 3-5 sentences each after completing your research. Some suggestions on what to talk about are given in parentheses after each question.
1. What would a person like you spend most of his or her time doing in this civilization? (Would you be in school? What would you study there? Working? What job would you have? Playing? What games?)
2. Draw or describe in words what clothing you would wear. (What would it be made out of? Do you think it would be comfortable?)
3. Describe some of the foods you would eat.
4. Describe where would a person like you be living? (Would you be living with your family? With co-workers? With a husband or wife? What would your house be like?)
5. Based on your answers and research would you want to go back in time to live in this civilization? Why or why not?
6. Where did you get your information? (List the books or websites used.)
14. Song Rewrite - Level 2
Your task: Rewrite the lyrics to a popular song to make it about an event or civilization from history.
1. Print or write out the lyrics to a popular song that is appropriate for school.
2. Rewrite the song with new lyrics that about all about one event, person or civilization from history. It must rhyme!
3. You must change the entire song and it should not be repetitive (the chorus, of course, can repeat).
4. Turn in both the original lyrics and your rewritten ones.
16. Perspective - Level 2
Your task: Imagine you were a specific person living in the time of the current unit. Imagine how you might respond to seeing the items below.
1.Write what person or type of person you are. (For example, Gladiator, Knight, Charlemagne, A Mayan Warrior, Japanese Court Lady, etc.)
2.Choose five of the items/things below that your person would have an opinion on.
-Nike shoes -An iPod with modern music -A female president -Your weekly chores
-Disneyland -All-You-Can-Eat Buffet -A history Textbook -Public school
-A tank -A helicopter -Your Shirt -A Laptop
-Army Soldier -Statue of Liberty -Cell phone -Stater Brothers
3.You may choose other modern objects or ideas that are not on this list if you wish.
4.Write three or four sentences for each object you choose that explain how the person would view the object or what they would think about it.
5.Write these as first-person sentences that show some information about the person and their civilization. |
Lesson 4 - Banking/Investing
Banks are in the business of credit and serve as financial intermediaries, bringing borrowers and depositors together. Banks also make bill paying easy and keep depositors' money safe. Earning enough money is a problem for many people, but imagine that time in history before banks, when keeping money safe was an enormous problem. Without banks, people had to decide to carry their money with them at all times or find a place to hide it. Stories abound of unfortunate situations arising from money kept under mattresses, buried in the yard, or stashed in tin cups.
Although money has been used for thousands of years, banks only came into being around the 15th or 16th century. In the United States, the early history of banking up to the 20th century included bank panics, failures, and runs on the banks (when people ran to take out all their money). In response, Congress established the Federal Reserve System in 1913 and, in 1934, the Federal Deposit Insurance Corporation (FDIC) to protect the money of depositors and to restore confidence in the banking system. Today people can know their FDIC-insured deposits are safe.
This lesson helps students become more aware of the stereotypes associated with portrayals of students and teachers on television and on film.
This is the second of three lessons that address gender stereotypes. The objective of these lessons is to encourage students to develop their own critical intelligence with regard to culturally inherited stereotypes, and to the images presented in the media - film and television, rock music, newspapers and magazines. The lesson begins with a review of stereotypes that are associated with men and women and their possible sources - including the role of the media. Students deconstruct a series of advertisements based on gender representation and answer questions about gender stereotyping in articles they have read.
In this lesson students develop awareness of the ways in which public perceptions of law enforcement have been both reflected in and influenced by film and television depictions of police over the past eighty years.
This lesson introduces students to some of the myth-building techniques of television by comparing super heroes and super villains from television to heroes and villains in the real world and by conveying how violence and action are used to give power to characters.
In this lesson students consider how well their favourite TV shows, movies and video games reflect the diversity of Canadian society.
To introduce students to the rating systems for films, videos and television and to the issues that surround these classifications.
In this lesson students learn about the history of blackface and other examples of majority-group actors playing minority-group characters such as White actors playing Asian and Aboriginal characters and non-disabled actors playing disabled characters.
This lesson examines the movie The Hunger Games: Catching Fire, some of its promotions, and social justice activists’ responses.
In this lesson, students are introduced to concepts of gender identity and gender expression and learn about common portrayals of trans people in movies and TV shows.
In this lesson, students learn to question media representations of gender, relationships and sexuality. After a brief “myth busting” quiz about relationships in the media and a reminder of the constructed nature of media products, the teacher leads the class in an analysis of the messages about gender, sex and relationships communicated by beer and alcohol ads. Students analyze the messages communicated by their favourite media types and then contrast it with their own experience. |
San José State University
The heat capacity of a substance is related to the question of how much energy it takes to raise the temperature of that substance by one unit. That depends upon how much of the substance is being considered, so the answer should be in terms of the amount of energy per standardized unit of the substance. The standardized unit could be a unit of mass, but the standardized unit that makes comparison between different substances easiest is a mole; i.e., the amount containing Avogadro's number (6.025×10²³) of molecules (or atoms as single-unit molecules).
The heat capacity of a substance is defined in the reverse direction from what was referred to above. The heat capacity per unit substance, C, is the increase in internal energy of a substance U per unit increase in temperature T:

C = dU/dT
If the substance is a gas then it is important to specify whether the gas is being held at constant volume or constant pressure. For solids the difference is negligible.
A good deal of insight may be obtained from a very simple model of a solid. Consider the solid to be a three dimensional lattice of atoms in which the atoms are held near equilibrium positions by forces. If the force on an atom is proportional to its deviation from its equilibrium position then it is called a harmonic oscillator. For zero deviation the force is zero so the force for small deviations is proportional to the deviation. Thus any such solid can be considered to be composed of harmonic oscillators. In a cubic lattice the atoms can oscillate in three directions.
The average energy E of a harmonic oscillator in one dimension is kT, where k is Boltzmann's constant. In three dimensions the average energy is 3kT. If there are N atoms in the lattice then the internal energy is U = N(3kT). Let A be Avogadro's number (6.025×10²³). Then dividing and multiplying the equation for U by A gives

U = (N/A)(3AkT)
The ratio (N/A) is the number of moles of the substance, n, and Ak is denoted as R. Thus

U = 3nRT
The heat capacity per unit mole of a substance at constant pressure is then defined as

Cp = (1/n)(dU/dT) = 3R
The value for Cp of 3R is about 6 calories per degree Kelvin. This is known as the Dulong and Petit value. It is a good approximation for the measured values for solids at room temperatures (300°K). At low temperatures the Dulong and Petit value is not a good approximation. Below is shown the heat capacity of metallic silver as a function of temperature.
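The Dulong and Petit figure quoted above can be checked arithmetically. This sketch assumes the modern value R ≈ 8.314 J/(mol·K) and the conversion 1 cal = 4.184 J, neither of which is stated in the text:

```python
R_J = 8.314          # gas constant, J/(mol*K)
CAL = 4.184          # joules per calorie
cp = 3 * R_J / CAL   # Dulong and Petit molar heat capacity in cal/(mol*K)
print(round(cp, 2))  # -> 5.96, i.e. about 6 cal/(mol*K)
```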
The shape of the curve for T near zero is of interest. It appears to be proportional to a power of T, say T² or T³.
According to Planck's law for the distribution of energy for an ensemble (collection) of harmonic oscillators the average energy E is given by

E = ℏω/[exp(ℏω/kT) − 1]

where ℏ is Planck's constant h divided by 2π, and ω is the characteristic (circular) frequency of the oscillators. This frequency is in the nature of a parameter for the substance and has to be determined empirically.
For temperatures such that kT is much larger than ℏω, the denominator exp(ℏω/kT) − 1 is approximately ℏω/kT, so E is approximately kT.
In general, however,

Cp = (1/n)(dU/dT) with U = 3N·ℏω/[exp(ℏω/kT) − 1]

With a little rearrangement this can be put into the form

Cp = 3R·(ℏω/kT)²·exp(ℏω/kT)/[exp(ℏω/kT) − 1]²

Now if the numerator and denominator are divided by [exp(ℏω/kT)]² the result is

Cp = 3R·(ℏω/kT)²·exp(−ℏω/kT)/[1 − exp(−ℏω/kT)]²
The attempt to obtain the limit of this expression as T→0 produces the ambiguous result of ∞/∞. The application of l'Hospital's Rule two times finally produces the result that the limit of Cp is zero as T→0.
As mentioned above the empirical heat capacity curve for silver seems to be proportional to T² or T³ for small values of T. The graph of the quantum mechanical heat capacity function derived above indicates that for values of T near zero the heat capacity function is zero and flat.
To investigate the behavior of the heat capacity function for small T, it simplifies matters to let ℏω/k, which has the dimensions of temperature, be denoted as θ. This parameter is sometimes called the Einstein temperature because it was Einstein who first formulated this line of analysis. (Einstein was far more adept at realizing the implications of the quantization of energy found by Planck than Planck himself.)
The heat capacity function is then:

Cp = 3R·(θ/T)²·exp(−θ/T)/[1 − exp(−θ/T)]²
Matters can be made even simpler by replacing θ/T with z. Then

Cp = 3R·z²·exp(−z)/[1 − exp(−z)]²
and hence ln(Cp) = ln(3R) + 2·ln(z) − z − 2·ln(1 − exp(−z))

This means that ln(Cp)→−∞ as z→+∞; i.e., Cp→0 as T→0.
(To be continued.)
The term [1 − exp(−ℏω/kT)]² approaches the limit of 1 as T→0, so let us ignore that factor and consider only the limit of 3R·(ℏω/kT)²·exp(−ℏω/kT). When T goes to zero this expression goes to ∞/∞. By l'Hospital's Rule we should consider the limit of the derivatives of the numerator and denominator. Thus, with z = ℏω/kT,

lim z²/exp(z) = lim 2z/exp(z)

which is again ∞/∞. Thus l'Hospital's rule must be applied again, which gives

lim 2z/exp(z) = lim 2/exp(z) = 0

Thus Cp → 0 as T→0.
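The limiting behavior derived above can be checked numerically. The sketch below evaluates Cp/3R in the Einstein model at high and low temperatures; the Einstein temperature θ = 160 K used here is an illustrative assumption of the right order of magnitude, not a value fitted to silver:

```python
import math

def einstein_cp_over_3R(T, theta=160.0):
    """Einstein-model molar heat capacity divided by 3R.

    theta = (h-bar * omega) / k is the Einstein temperature in kelvin;
    160 K is an illustrative choice, not a fitted value.
    """
    z = theta / T
    # Cp/3R = z^2 * exp(-z) / (1 - exp(-z))^2
    return z * z * math.exp(-z) / (1.0 - math.exp(-z)) ** 2

# High T: approaches the Dulong and Petit limit, Cp/3R -> 1
print(round(einstein_cp_over_3R(3000.0), 3))  # -> 1.0
# Low T: Cp -> 0 as T -> 0
print(round(einstein_cp_over_3R(5.0), 6))     # -> 0.0
```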
HOME PAGE OF Thayer Watkins |
What is malaria? How can it be controlled?
Malaria is a disease caused by a single-celled parasite called Plasmodium. There are four species that regularly infect humans: P. falciparum (which causes the most severe form of the disease, and is responsible for 90% of the annual 700,000 fatalities caused by malaria, mainly in Africa), P. vivax, P. ovale and P. malariae. A fifth species, P. knowlesi, has recently also been reported in a small number of cases in south-east Asia, where prevalence appears to be increasing.
Despite its wide geographic range and potentially severe consequences, there are actually several effective strategies for controlling malaria, many of which have succeeded in reducing the burden of the disease, and especially the number of deaths, in various regions. The first step towards control is prevention. This has largely been achieved through the distribution of long-lasting insecticide-treated bednets, which prevent people from being bitten by infected mosquitoes as they sleep at night. While this has drastically reduced the number of cases of malaria in some settings, particularly in high-risk groups such as children under five and pregnant women, some worrying new data were recently published suggesting that in high-transmission zones bednets may actually exacerbate re-infection rates for older children and adults, and lead to insecticide resistance in mosquitoes. As such, while bednets clearly remain a key prevention strategy, their effect should be closely monitored.
Secondly, there is diagnosis and treatment. These go hand in hand, as they usually require the availability of health services or health professionals. If malaria infections are rapidly and accurately diagnosed, appropriate treatment can be swiftly given, preventing the progression of the disease and allowing the patient to recover. Appropriate administration of medication, as well as adherence to the full course of the drugs, can also help to prevent drug-resistance from emerging.
Finally, there are on-going research initiatives looking to find new ways to tackle malaria. For example, many scientists are involved in the search for a malaria vaccine, which, if safe, effective, and sufficiently cheap, could transform the way we think about fighting malaria. Similarly, due to the unfortunate circumstance of ever-increasing drug-resistance, particularly in Plasmodium falciparum, new types of medication are constantly being tested and trialled. The combination of all these efforts has managed to reduce the mortality of malaria greatly over the past few years; the aim now, espoused by organisations such as Malaria No More, is to get to a point where deaths from malaria are eliminated by the year 2015. |
Aluminum oxide is a common, naturally occurring compound that’s employed in various industries, most particularly in the production of aluminum. The compound is used in production of industrial ceramics. Its most common crystalline form, corundum, has several gem-quality variants, as well.
There are many different forms of aluminum oxide, including both crystalline and non-crystalline forms. It’s an electrical insulator, which means it doesn’t conduct electricity, and it also has relatively high thermal conductivity. In addition, in its crystalline form, corundum, its hardness makes it suitable as an abrasive. The high melting point of aluminum oxide makes it a good refractory material for lining high-temperature appliances like kilns, furnaces, incinerators, reactors of various sorts, and crucibles.
Use in Production of Aluminum
The most common use of aluminum oxide is in the production of metal aluminum. Metallic aluminum is reactive with oxygen, which could cause corrosion to build up. However, when aluminum bonds with oxygen to form aluminum oxide, it creates a thin coating that protects it from oxidation. This keeps the aluminum from corroding and losing strength. The thickness and other properties of the oxide layer can be changed by using the anodizing process. Aluminum oxide is also a product of the aluminum smelting process.
The most common crystalline form of aluminum oxide is corundum. Both rubies and sapphires are gem-quality forms of corundum. They owe their distinctive coloring to trace impurities. Rubies get their deep red color and laser qualities from traces of chromium. Sapphires come in a variety of colors, which come from other impurities like iron and titanium. The hardness of different kinds of corundum makes them suitable for use as abrasives and as components in cutting tools.
Uses in Ceramics
Aluminum oxide, also called alumina, is used in engineering ceramics. It’s hard and wear-resistant, resists attacks by both acid and alkali substances, has high strength and stiffness, and has good thermal conductivity, which makes it valuable in manufacturing a variety of different ceramic products. These include things like high-temperature electrical and voltage insulators, instrumentation parts for thermal test machines, seal rings, gas laser tubes, and other laboratory equipment. Aluminum oxide is also used in the production of ballistic armor.
Because aluminum oxide is fairly inert chemically, white, and relatively non-toxic, it serves as filler in plastics. It’s also a common ingredient in sunscreen. Because of its hardness and strength, it’s used as an abrasive, including in sandpaper and as a less expensive substitute for industrial diamonds. Some CD and DVD polishing kits contain aluminum oxide. The same qualities make it a good ingredient in toothpaste. Dentists use aluminum oxide as a polishing agent to remove dental stains.
Aluminum oxide has a variety of purposes. The most important is the manufacture of metallic aluminum, but this is certainly not the only one. Though you might not know it, both rubies and sapphires are composed of aluminum oxide, making it a very valuable compound!
Formula for Aluminum Oxide: Al2O3
Properties for Aluminum Oxide
Molar mass: 101.96 g·mol−1
Melting point: 2,072 °C (3,762 °F; 2,345 K)
Boiling point: 2,977 °C (5,391 °F; 3,250 K)
Density: 3.95–4.1 g/cm3 |
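As a quick sanity check on the molar mass listed above, one can total the standard atomic masses for the formula Al2O3. A small Python sketch follows; the atomic masses used are rounded textbook values, not authoritative data:

```python
# Check the listed molar mass of Al2O3 from rounded atomic masses (g/mol).
ATOMIC_MASS = {"Al": 26.98, "O": 16.00}

def molar_mass(formula):
    """Compute molar mass from a {symbol: count} composition."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

al2o3 = {"Al": 2, "O": 3}
print(round(molar_mass(al2o3), 2))  # → 101.96, matching the value above
```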
Balancing Redox Reactions Problem Set
This problem set was developed by S.E. Van Bramer for Chemistry 145 at Widener University.
- Assign oxidation numbers to each element in the reactants and in the products. Identify what is being oxidized and what is being reduced. Balance the following redox reactions using both the half-reaction and the oxidation-number methods.
- In an acidic solution, potassium dichromate reacts with ethyl alcohol to produce aqueous chromium (III) ions, carbon dioxide and water.
- In a basic solution solid silver reacts with aqueous cyanide and oxygen gas to produce silver (I) cyanide.
- Solid aluminum reacts with solid iodine to produce solid aluminum iodide.
- Solid zinc metal reacts with aqueous hydrochloric acid to produce aqueous zinc (II) ions and hydrogen gas.
- Aqueous arsenous acid reacts with solid zinc metal to produce gaseous arsenic (III) hydride and aqueous zinc (II) ions.
- Iron (III) oxide reacts with oxalic acid to produce aqueous iron (III) trioxalate ions.
- Aqueous silver nitrate reacts with solid copper metal to produce solid silver and aqueous copper nitrate.
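As a sketch of the half-reaction method applied to the simplest problem above (zinc metal in hydrochloric acid), split the reaction into oxidation and reduction halves, balance the electrons, and add:

```latex
% Oxidation half-reaction: zinc loses two electrons
\mathrm{Zn(s)} \rightarrow \mathrm{Zn^{2+}(aq)} + 2e^{-}

% Reduction half-reaction: hydrogen ions gain those electrons
2\,\mathrm{H^{+}(aq)} + 2e^{-} \rightarrow \mathrm{H_{2}(g)}

% The electrons already balance, so the halves add directly:
\mathrm{Zn(s)} + 2\,\mathrm{H^{+}(aq)} \rightarrow \mathrm{Zn^{2+}(aq)} + \mathrm{H_{2}(g)}
```

The remaining problems work the same way, though acidic or basic conditions require adding H+, OH-, and water to balance oxygen and hydrogen.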
Please send comments or suggestions to [email protected]
Scott Van Bramer
Department of Chemistry
Chester, PA 19013
© copyright 1996, S.E. Van Bramer
Last Updated: Tuesday, April 20, 1999 8:21:18 AM |
Depression is a disturbance in mood characterized by varying degrees of sadness, disappointment, loneliness, hopelessness, self-doubt, and guilt. Most people tend to feel depressed at one time or another, but some people may experience these feelings more frequently or with deeper, more lasting, effects. In some cases, depression can last for months or even years. The most common type of depression is what is referred to as “feeling blue” or “being in a bad mood.” These feelings are usually brief in duration and have minimal or slight effects on normal everyday activities.
In the next level of depression, symptoms become more intense and last for a longer period of time. Daily activities may become more difficult…but the individual is still able to cope with them. It is at this level, however, that feelings of hopelessness can become so intense that suicide may seem the only solution.
A person experiencing severe depression may experience extreme fluctuations in moods or even a desire for complete withdrawal from daily routine and/or the outside world.
Symptoms of Depression
Depression may affect one’s life in any of the following ways:
Crying spells or, at the other extreme, lack of emotional responsiveness.
Changes in Feelings and/or Perceptions
- Inability to find pleasure in anything.
- Feelings of hopelessness and/or worthlessness.
- Exaggerated sense of guilt or self-blame.
- Loss of sexual desire.
- Loss of warm feelings toward family or friends.
Changes in Behavior and Attitudes
- Lack of interest in prior activities and withdrawal from others.
- Neglect of responsibilities and appearance.
- Irritability, complaints about matters previously taken in stride.
- Dissatisfaction about life in general.
- Impaired memory, inability to concentrate, indecisiveness, and confusion.
- Reduced ability to cope on a daily basis.
- Chronic fatigue and lack of energy.
- Complete loss of appetite, or at the other extreme, compulsive eating.
- Insomnia, early morning wakefulness, or excessive sleeping.
- Unexplained headaches, backaches, and similar complaints.
- Digestive problems including stomach pain, nausea, indigestion, and/or change in bowel habits.
Causes of Depression
Depression is often the result of an unhappy event such as the death of a loved one. When the source of depression is readily apparent and the person is fully aware of it, the individual can expect the reaction to moderate and then fade away within a reasonable amount of time. In cases where feelings of depression exist with no apparent source or the source is unclear, the depression may get worse because the person is unable to understand it. This sense of loss of control may add to the actual feelings of depression.
Any number of stressors may be involved in depression. These can include personality, environmental, or biomedical factors. Shortages or chemical imbalances in the brain may play a significant role in some cases of depression. Such imbalances may be created by illness, infections, certain drugs (including alcohol and even prescribed medications) and improper diet and nutrition. In general, depression may be viewed as a withdrawal from physical or psychological stress. Identifying and understanding the underlying causes of such stress is a necessary step in learning to cope with depression.
Being honest with yourself about changes in mood or the intensity of negative feelings as they occur will help you identify possible sources of depression or stress. You should examine your feelings and try to determine what is troubling you — relationships with family or friends, financial responsibilities, and so forth. Discussing problems with the people involved or with an understanding friend can sometimes bring about a resolution before a critical stage of stress is reached. Even mild depression should be dealt with if it interferes with your effectiveness. You might also try to:
- Change your normal routine by taking a break for a favorite activity or something new — even if you don’t feel like it;
- Exercise to work off tension, improve digestion, help you relax, and perhaps improve your ability to sleep;
- Avoid known stressors;
- Avoid making long-term commitments, decisions, or changes that make you feel trapped or confined — it is better to put them off until you feel you are better able to cope; and
- See a physician, especially if physical complaints persist.
Helping a Depressed Friend
Since severely depressed individuals can be very withdrawn, lethargic, self-ruminating, and possibly suicidal, a concerned friend can provide a valuable and possibly life-saving service. Talking candidly with the individual regarding your concern for his or her well being will often bring the problems out into the open.
As you talk with your friend, the American College Health Association advises the following:
- Do not try to “cheer up” the individual.
- Do not criticize or shame, as feelings of depression cannot be helped.
- Do not sympathize and claim that you feel the same way as he or she does.
- Try not to get angry with the depressed individual.
Your primary objective is to let the person know you are concerned and willing to help.
If feelings of depression appear to turn to thoughts of suicide, urge the individual to seek professional help. If the person resists such a suggestion and you feel that suicide is likely — seek professional help yourself, so you will know how to best handle the situation.
When Professional Help is Necessary
Depression is treatable and needless suffering of those who experience it can be alleviated. A mental health professional should be consulted when an individual experiences any of the following circumstances:
- When pain or problems outweigh pleasures much of the time;
- When symptoms are so severe and persistent that day-to-day functioning is impaired; and/or
- When stress seems so overwhelming that suicide seems to be a viable option.
Qualified mental health professionals can help identify the causes and sources of depression and can help the individual find ways to overcome them. For further assistance call the Counseling Center at 405-574-1326 for an appointment. |
Bile is the digestive liquid created by the liver and stored in the gallbladder. During digestion, bile is secreted to help promote digestion. When bile builds up in the digestive system, you can wind up vomiting bile. You can tell you are throwing up bile if your vomit begins to take on a faint brown color. If you continue to vomit bile, it will turn a greenish yellow due to the saturation of the liquid, and there may be pieces of undigested food alongside the liquid. Those who are excessively secreting bile will often experience stomach pain, and may not experience nausea before they begin to vomit. Throwing up bile is most common approximately 20-40 minutes after consuming a meal.
Surgery. Those who have just recently undergone gallbladder surgery may be more susceptible to vomiting up bile. Patients are at risk for throwing up bile for up to 4-5 months after their gallbladder surgery. If vomiting is excessive or lasts for a long period of time, then you may have developed an infection after your surgery, or there may have been complications during your surgery. Your doctor should go over this risk and inform you what you can do to help prevent discomfort during your recovery.
Food allergies can cause the person to vomit excessively. In order to rid the body of the item which is causing the irritation, the body will trigger an immune response. This will cause quick and forceful vomiting. Since the food is still being digested when this vomiting begins, you will likely have bile in the stomach which will be excreted along with the food. Those suffering from a food allergy may also experience runny nose, stomach cramps or breathing difficulties.
Food poisoning will similarly cause the body to experience sudden, forceful vomiting. Food poisoning can be triggered when the body comes into contact with bacteria or virus from food that was not prepared properly. If the body is in the process of digesting the food, bile will usually be thrown up with the contaminated food. The bile will usually take on a yellowish color.
Gastroenteritis, commonly known as the stomach flu, can cause the body to vomit bile as the body is irritated by the virus. Frequent vomiting caused by a stomach virus can cause the body to become dehydrated, which increases the risk that bile will be included in the vomit. Patients suffering from gastroenteritis may also suffer from diarrhea, and may have trouble digesting solid food for 2-3 days.
Alcohol intolerance. If you frequently vomit bile after consuming alcohol, you may suffer from alcohol intolerance. This means your body cannot handle heavy drinking where you consume several alcoholic beverages in one sitting. When your body becomes overwhelmed by the toxic nature of the beverages, it will induce vomiting to rid itself of the poison. If the body is in the process of digesting the sugars in the beverage, there may be bile in the vomit.
An intestinal blockage preventing food from properly entering the intestinal tract can cause irritation that may lead to vomiting. The intestines can become twisted and sore as they attempt to pass the food causing the blockage. Vomiting will become progressively forceful as the body attempts to relieve this condition. As food intake becomes limited, the body may begin to expel bile until the blockage is relieved. Those suffering from an intestinal blockage will experience severe abdominal pain in between sessions of vomiting.
Take bile acid sequestrants - To remove bile from the system, bile acid sequestrants can be taken. These will disrupt the bile circulation so the amount of bile in the system is lessened. Those who frequently suffer from bile reflux or are in the midst of dealing with a digestive disorder can take these medications to limit their symptoms. Ursodeoxycholic acid, prokinetic agents or proton pump inhibitors can also be used to limit the amount of bile in the system. These are often given to those with frequent digestive distress to help protect the esophagus from the corrosive properties of the bile.
Keep body hydrated - If you are suffering from food poisoning or the flu, it is important to keep the body hydrated to prevent an excessive loss of bile. Consume electrolyte solutions slowly to keep your condition stable without introducing more liquid than your body can manage until your symptoms pass.
Limit diet - Restrict your diet to broths or bland foods that will limit the stress on your system until the vomiting and nausea pass.
Make notes - Those who are suffering from food allergies or alcohol intolerance should make note of what substances cause this reaction. Alcohol intolerance cannot be treated, so you will need to limit your alcohol intake to avoid becoming sick. If you consume more alcohol than your body can manage, antihistamines may be able to alleviate your symptoms. Talk with your doctor about what type of medication would be appropriate. When suffering from a food allergy, it is important to avoid the ingredients which cause your reaction. Allergic reactions can become progressively more dangerous over time.
See a doctor - If you are suffering from an intestinal blockage, you will need to seek medical attention. Medication can be given to help break down the blockage so it can be digested properly. If this is ineffective, it may take surgery to remove the blockage from your system. Those who frequently suffer from intestinal blockages may have a birth defect that causes their intestines to dip in an irregular way. Consult with your doctor about what can be done to prevent these symptoms like vomiting bile. |
(Phys.org) —Astronomers think that many galaxies, including our own Milky Way, have undergone similar collisions during their lifetimes. Although galaxy collisions are important and common, what happens during these encounters is not very well understood. For example, it seems likely that massive black hole(s) will form during the interactions, as the two galaxies' nuclei approach each other. Galaxy-galaxy interactions also stimulate vigorous star formation as gravitational effects during the encounters induce interstellar gas to condense into stars. The starbursts in turn light up the galaxies, especially at infrared wavelengths, making some systems hundreds or even thousands of times brighter than the Milky Way while the starbursts are underway. Studying these luminous galaxies not only sheds light on how galaxies evolve and form stars, since they act as lanterns over cosmological distances it also helps scientists study the early universe.
All this impressive progress, however, hinges on an accurate understanding of mergers and how they work. The general approach is to study many local examples to categorize their behaviors, and then model these cases with computational codes that simulate mergers. The combination of precise observations and detailed modeling, iteratively applied, helps scientists improve both their understanding of the galaxies and the physical parameters and processes included in the modeling codes. With these in hand, astronomers can start to probe the more distant universe where the objects are not as easy to measure.
CfA astronomer Lars Hernquist and five of his colleagues (many of them his former students) have now shown that feedback processes from bursts of star formation play a key role in determining how merging galaxies develop, at least when two massive galaxies collide. Prior models did not fully account for the role played by gas that is driven away by the radiation from a starburst but can sometimes fall back onto the galaxy. The new paper is particularly effective in describing star formation in the tails and bridges of interacting systems, something that had previously been lacking.
Explore further: Image: Hubble looks at light and dark in the universe |
Portable Operating System Interface, or POSIX, is a standardized application programming interface that maintains compatibility between applications and the operating system. The name POSIX was proposed by Richard Stallman in response to the IEEE's request for a memorable name; the standard had previously been designated IEEE-IX.
Brief Description of POSIX
The specification of the operating system's user and software interfaces is divided into four parts that make up the POSIX standard: a list of the standard conventions used, including definitions and concepts; the system interfaces; the command-line interpreter and utilities; and declarations. We are not elaborating further simply because this webpage is not a textbook reference page. Most of us need to know about POSIX for other uses, related mainly to Linux servers.
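To make "system interface" concrete, here is a small illustration using Python's os module, which on POSIX-compliant systems wraps the underlying C-level calls. This is a sketch only; Python itself is not part of the POSIX standard:

```python
# Three POSIX system interfaces, reached through Python's os module.
import os

pid = os.getpid()    # wraps POSIX getpid()
cwd = os.getcwd()    # wraps POSIX getcwd()
info = os.uname()    # wraps POSIX uname(); fields: sysname, nodename, ...

print(pid, info.sysname)
```

On a non-POSIX platform such as plain Windows, os.uname() is unavailable, which is itself a small demonstration of what compliance buys you.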
The standard POSIX shell is the Unix shell. Other utilities such as awk , vi or echo are also part of the POSIX standards.
POSIX compliant operating systems
The following operating systems are POSIX compliant, they adhere to the entire standard:
A/UX, AIX, BlagOS, BSD/OS, Darwin (Mac OS X), HP-UX, INTEGRITY, IRIX, LynxOS, MINIX, OpenVMS, penOS, QNX, RTEMS, Solaris and OpenSolaris, UnixWare, velOSity, VxWorks
The following operating systems are largely POSIX compliant; they have not been officially certified as POSIX-compliant, but largely keep to the standard:
BeOS and its open-source successor to Haiku, Nucleus RTOS, FreeBSD, All Linux distributions, NetBSD, OpenBSD, PikeOS, SkyOS, SuperUX, Syllable, VSTA
The following operating systems are compliant through compatibility extensions. They are not officially certified as POSIX-compliant, but are largely compliant with POSIX through some kind of compatibility extension (usually translation libraries) or an intermediate kernel layer. Without this extension, they are usually not POSIX compliant:
The NT kernel of Microsoft Windows when using the Microsoft Windows Services for UNIX, eCos, Symbian OS, AmigaOS.
Written by Robert Niles
I'll be honest. Standard deviation is a more difficult concept than the others we've covered. And unless you are writing for a specialized, professional audience, you'll probably never use the words "standard deviation" in a story. But that doesn't mean you should ignore this concept.
The standard deviation is kind of the "mean of the mean," and often can help you find the story behind the data. To understand this concept, it can help to learn about what statisticians call "normal distribution" of data.
A normal distribution of data means that most of the examples in a set of data are close to the "average," while relatively few examples tend to one extreme or the other.
Let's say you are writing a story about nutrition. You need to look at people's typical daily calorie consumption. Like most data, the numbers for people's typical consumption probably will turn out to be normally distributed. That is, for most people, their consumption will be close to the mean, while fewer people eat a lot more or a lot less than the mean.
When you think about it, that's just common sense. Not that many people are getting by on a single serving of kelp and rice. Or on eight meals of steak and milkshakes. Most people lie somewhere in between.
If you looked at normally distributed data on a graph, it would look something like this:
The x-axis (the horizontal one) is the value in question... calories consumed, dollars earned or crimes committed, for example. And the y-axis (the vertical one) is the number of datapoints for each value on the x-axis... in other words, the number of people who eat x calories, the number of households that earn x dollars, or the number of cities with x crimes committed.
Now, not all sets of data will have graphs that look this perfect. Some will have relatively flat curves, others will be pretty steep. Sometimes the mean will lean a little bit to one side or the other. But all normally distributed data will have something like this same "bell curve" shape.
The standard deviation is a statistic that tells you how tightly all the various examples are clustered around the mean in a set of data. When the examples are pretty tightly bunched together and the bell-shaped curve is steep, the standard deviation is small. When the examples are spread apart and the bell curve is relatively flat, that tells you you have a relatively large standard deviation.
Computing the value of a standard deviation is complicated. But let me show you graphically what a standard deviation represents...
One standard deviation away from the mean in either direction on the horizontal axis (the two shaded areas closest to the center axis on the above graph) accounts for somewhere around 68 percent of the people in this group. Two standard deviations away from the mean (the four areas closest to the center areas) account for roughly 95 percent of the people. And three standard deviations (all the shaded areas) account for about 99.7 percent of the people.
If this curve were flatter and more spread out, the standard deviation would have to be larger in order to account for those 68 percent or so of the people. So that's why the standard deviation can tell you how spread out the examples in a set are from the mean.
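This clustering can be checked numerically. The sketch below draws simulated, normally distributed values (the mean, spread, and sample size are arbitrary choices for illustration) and counts how many land within one and two standard deviations:

```python
# Empirical check of the "68 percent within one standard deviation" rule,
# using simulated normal data (seeded so the run is reproducible).
import random
import statistics

random.seed(0)
data = [random.gauss(mu=100, sigma=15) for _ in range(100_000)]

mean = statistics.fmean(data)
sd = statistics.stdev(data)

within_1sd = sum(1 for x in data if abs(x - mean) <= sd) / len(data)
within_2sd = sum(1 for x in data if abs(x - mean) <= 2 * sd) / len(data)

print(f"{within_1sd:.3f} within 1 sd, {within_2sd:.3f} within 2 sd")
# roughly 0.68 and 0.95, as the graph description suggests
```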
Why is this useful? Here's an example: If you are comparing test scores for different schools, the standard deviation will tell you how diverse the test scores are for each school.
Let's say Springfield Elementary has a higher mean test score than Shelbyville Elementary. Your first reaction might be to say that the kids at Springfield are smarter.
But a bigger standard deviation for one school tells you that there are relatively more kids at that school scoring toward one extreme or the other. By asking a few follow-up questions you might find that, say, Springfield's mean was skewed up because the school district sends all of the gifted education kids to Springfield. Or that Shelbyville's scores were dragged down because students who recently have been "mainstreamed" from special education classes have all been sent to Shelbyville.
In this way, looking at the standard deviation can help point you in the right direction when asking why information is the way it is.
Of course, you'll want to seek the advice of a trained statistician whenever you try to evaluate the worth of any scientific research. But if you know at least a little about standard deviation going in, that will make your talk with him or her much more productive.
Okay, because so many of you have asked for it...
Here is one formula for computing the standard deviation. A warning, this is for math geeks only! Writers and others seeking only a basic understanding of stats don't need to read any more in this chapter. Remember, a decent calculator or a stats program will calculate this for you...
Terms you'll need to know
x = one value in your set of data
avg (x) = the mean (average) of all values x in your set of data
n = the number of values x in your set of data
For each value x, subtract the overall avg (x) from x, then multiply that result by itself (otherwise known as determining the square of that value). Sum up all those squared values. Then divide that result by (n-1). Got it? Then, there's one more step... find the square root of that last number. That's the standard deviation of your set of data.
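Written out symbolically, the recipe in the preceding paragraph (using the "n-1" method, and keeping the avg(x) notation defined above) is:

```latex
s = \sqrt{\frac{\sum \bigl(x - \mathrm{avg}(x)\bigr)^{2}}{n-1}}
```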
Now, remember how I told you this was one way of computing this? Sometimes, you divide by (n) instead of (n-1). It's too complex to explain here. So don't try to go figuring out a standard deviation if you just learned about it on this page. Just be satisfied that you've now got a grasp on the basic concept.
The more practical way to compute it...
In Microsoft Excel, type the following code into the cell where you want the Standard Deviation result, using the "unbiased," or "n-1" method:
=STDEV(A1:Z99) (substitute the cell name of the first value in your dataset for A1, and the cell name of the last value for Z99.)
=STDEVP(A1:Z99) if you want to use the "biased" or "n" method. |
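For those working in Python rather than Excel, the standard library offers the same two calculations. In the sketch below (with made-up data), statistics.stdev is the "unbiased" n-1 method like STDEV, and statistics.pstdev is the "biased" n method like STDEVP:

```python
# stdev -> the "n-1" method (Excel STDEV); pstdev -> the "n" method (STDEVP).
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up example values; mean is 5

sample_sd = statistics.stdev(data)       # divides the sum of squares by n-1
population_sd = statistics.pstdev(data)  # divides the sum of squares by n

print(sample_sd, population_sd)  # population_sd works out to exactly 2.0 here
```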
The planets Venus and Jupiter have a few things in common. They’re both members of our solar system, for example, and both are quite bright as seen from Earth — brighter than anything else in the night sky except the Moon. And you can see just how bright they are this evening, as they pose side by side quite low in the western sky shortly after sunset. Venus is the brighter of the two, with Jupiter just to the left.
For the most part, though, Venus and Jupiter are about as different as two worlds can be.
Venus is small and rocky, like Earth. It’s blanketed by a dense atmosphere of carbon dioxide that makes Venus the hottest world in the solar system — temperatures average about 860 degrees Fahrenheit. An unbroken layer of clouds made of sulfuric acid tops the atmosphere. These clouds reflect most of the sunlight that strikes them, which is one reason Venus looks so bright.
Jupiter, on the other hand, is a giant — the largest planet in the solar system. It’s about 12 times Venus’s diameter. It has a dense, rocky core, but most of the planet is made of hydrogen and helium, the two lightest chemical elements.
Like Venus, Jupiter is also blanketed by clouds — in Jupiter’s case, a mixture of water vapor, ammonia, and other compounds. They also reflect a lot of sunlight, helping the giant planet shine brightly.
A third planet stands above these two bright lights: Mercury, the smallest planet in the solar system. We’ll talk about it tomorrow.
Script by Damond Benningfield, Copyright 2013
States of matter
There are four common states of matter (or phases) in the universe: solid, liquid, gas, and plasma. The state of matter affects a substance's properties, such as density, viscosity (how well it flows), malleability (how easy it is to bend), and conductivity.
Common states of matter
Solids
In a solid, the positions of atoms are fixed relative to each other over long times. That is due to the cohesion or "friction" between molecules, which is provided by metallic, covalent, or ionic bonds. Only solids can be pushed on by a force without changing shape, which means that they are resistant to deformation. Solids also tend to be strong enough to hold their own shape in a container. Solids are generally denser than liquids. A solid becoming a gas is called sublimation.
Liquids
In a liquid, molecules are attracted to other molecules strong enough to keep molecules in contact, but not strong enough to fix a particular structure. The molecules can continually move with respect to each other. This means that liquids can flow smoothly, but not as smoothly as gases. Liquids will tend to take the shape of a container that they are in. Liquids are generally less dense than solids, but denser than gases.
Gases
In a gas, the chemical bonds are not strong enough to hold atoms or molecules together, so a gas is a collection of independent, unbonded molecules that interact mainly by collision. Gases tend to take the shape of their container, and are less dense than both solids and liquids. Gases have smaller forces of attraction than solids and liquids. A gas becoming a solid is called deposition.
Plasmas
In a plasma, atoms have gained so much energy that their electrons break free, leaving a mix of free electrons and positively charged ions. Because the positive and negative charged particles are not stuck together, plasma is a good conductor of electricity. For example, air is not good at conducting electricity. However, in a bolt of lightning, the atoms in air get so much energy that they can no longer hold on to their electrons, and become a plasma for a brief time. Then an electric current is able to flow through the plasma, making the lightning.
Phase changes
When a solid becomes a liquid, it is called melting. When a solid becomes a gas, it is called sublimation. When a liquid becomes a gas, it is called evaporation. When a gas becomes a liquid, it is called condensation. When a liquid becomes a solid, it is called freezing. The freezing point and the melting point are said to be the same, because any increase in temperature will cause it to melt and any drop in temperature will cause it to freeze. This is also the reason that the vaporizing and condensation point are the same.
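The six transition names can be collected into a small lookup table. The Python sketch below simply restates the paragraph above, together with the sublimation and deposition definitions from the earlier sections:

```python
# Phase-change names, keyed by (starting state, ending state).
PHASE_CHANGE = {
    ("solid", "liquid"): "melting",
    ("liquid", "solid"): "freezing",
    ("liquid", "gas"): "evaporation",
    ("gas", "liquid"): "condensation",
    ("solid", "gas"): "sublimation",
    ("gas", "solid"): "deposition",
}

print(PHASE_CHANGE[("solid", "gas")])  # → sublimation
```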
Other states
Many other states of matter can exist under special conditions, including strange matter, superfluids, and supersolids, and possibly string-net liquids. Scientists work on new experiments at higher temperatures and energy levels than have ever been reached before, and also on experiments at very low temperatures. Such experiments help scientists learn more about phases of matter.
Quark-gluon plasmas
Quark-gluon plasmas are a relatively newly discovered phase of matter that occurs at about 2 trillion kelvins. Scientists believe that protons and neutrons are made of tiny particles called quarks, which are "glued" together by particles called gluons. At incredibly high temperatures, such as those reached in collisions at the Large Hadron Collider at CERN, quarks and gluons begin to separate into a new state of matter. Little is known about quark-gluon plasmas because of the huge amount of energy needed to make them.
Bose-Einstein condensates
Bose-Einstein condensates and fermionic condensates are phases of matter that apply to particles called bosons and fermions, respectively. (More than one boson can exist in the same spot at the same time; only one fermion can exist in the same quantum state at the same time.) Bose-Einstein condensates and fermionic condensates occur at incredibly low temperatures, within millionths of a degree of absolute zero (which is -459.67° Fahrenheit). Little is known about either state because of the sheer amount of energy that must be removed to create them. Inside of them, all of the particles begin to act like one big quantum state: they flow with almost no friction and show near-zero electrical resistance.
A century ago, the state of the art in timekeeping was the pendulum clock. Refinements were made in the areas of obvious problems, such as the mechanical escapement, which robs the system of energy; the vulnerability to changes in length from temperature and humidity; and vibrations. The culmination of this was the clock of W. H. Shortt, which had two pendulums, a master and a slave. The master oscillator was a free pendulum, and as it did no work to drive any mechanism, it was able to keep very precise time. The pendulum was made of invar, a material with a very low thermal coefficient of expansion, and was encased in a chamber evacuated to a pressure of a few millitorr. The chamber was bolted to a wall that typically rested on a massive platform of the type used for telescopes, which minimized effects from vibrations. The pendulum was given an occasional boost to keep its amplitude roughly constant. The slave pendulum, which did the mechanical work of the system, received periodic electronic impulses from the master clock to correct its motion. This type of clock could keep time to better than a millisecond a day. A shortcoming (as it were) was in the measurement of the time; as Loomis notes
This remarkable result is accomplished through the possibility of averaging a large number of observations. A single impulse from a master Shortt clock has an uncertainty of 1 or 2 milli-seconds. The master pendulum carries a small wheel. The impulse arm rests on this wheel, and as the pendulum swings out the pallet on this arm travels down the edge of the wheel, finally falling clear. It then trips an arm which falls, making the electric contact. If the small wheel is not exactly circular the arm will fall at slightly different times as the wheel is given a small turn with each fall. These variations are entirely smoothed out when a series of sparks are averaged.
So while the clock is precise in the long-term, the system of measuring it (described below) is limited at shorter durations.
By the late 1920’s true electronic timing had begun with quartz oscillators. Quartz is piezoelectric, meaning that an external pressure will induce a voltage in the crystal, and likewise, an applied voltage will cause a stress or strain in the material, and the size and shape of the material will dictate its resonance frequency. The crystal used in these measurements oscillated at 100,000 Hz, which allowed for much more precise comparisons between these types of clocks. This was necessary, as crystal oscillators tend to drift, and pair-wise measurement of at least three clocks is required to begin to properly characterize any single device.
What was then needed was a way to compare clocks and record the timing differences, and this was the Loomis chronograph. It was basically a spark chart recorder whose motion was tied to the master crystal oscillator located at Bell Telephone Labs in New York City; the frequency was divided down to 1 kHz (division can be done cleanly, without the addition of noise, while multiplying always adds noise) and sent over a dedicated phone line. This was fed into a mechanical device that further reduced the frequency through gears to turn a rotor at 10 revolutions per second. Also connected was a ring (which rotated with the device) carrying 100 equally spaced steel phonograph needles, each wired to a corresponding needle above the paper of the chart recorder, like a giant comb. This made each needle correspond to a time difference of a millisecond (100 needles at 0.1 second per revolution), which translated directly into a measurable distance on the paper. Impulses from external clocks were fed in and triggered a 240 V discharge, sufficient to leave a visible mark on the paper. If the external clock was advancing at the same rate as the crystal oscillator, there would be a straight line of spark marks, but if they were at slightly different frequencies, the line would have a slope that could be measured. Multiple clocks could be recorded and compared not only to the crystal, but to each other; since the crystal was common to both measurements, if they were differenced, the crystal’s performance would drop out of the data, leaving only the comparison of the two clocks.
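That last trick — differencing two clock-versus-crystal records so the common crystal error cancels — can be sketched numerically. All the numbers below are invented for illustration; only the cancellation idea comes from the text:

```python
import numpy as np

# Two clocks, each recorded against the same (imperfect) crystal oscillator.
t = np.arange(0.0, 100.0)            # measurement epochs, seconds
crystal_error = 1e-4 * t**2          # common-mode error from the crystal, s

clock_a = 2e-3 * t + crystal_error   # clock A offset vs. the crystal
clock_b = 1e-3 * t + crystal_error   # clock B offset vs. the crystal

# Differencing the two records cancels the common crystal term,
# leaving only the relative rate of the two clocks.
diff = clock_a - clock_b             # ideally (2e-3 - 1e-3) * t
rate = np.polyfit(t, diff, 1)[0]     # slope of the residual line, s/s
```

The fitted slope recovers the 1 millisecond-per-second rate difference between the two clocks, even though the crystal's quadratic drift dwarfs it — exactly why the differenced chart lines were the useful quantity.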
Three of the Shortt clocks were installed at Tuxedo Park:
The three clocks at Tuxedo are mounted on independent massive masonry piers. The piers are arranged in a triangle so that the planes of the three pendulums form an equilateral triangle. The piers are built directly on the solid rock which makes up the mountain on which the laboratory is situated. The neighbourhood is remarkably free from vibration, as the nearest railway is more than two miles distant and there is no heavy road traffic within a mile. The vault in which the piers are located is kept at a constant temperature of 21°’00 ± ’02 centigrade.
Loomis ends the paper with a few tidbits about operating the clocks, noting that there’s an optimum pressure of 15-25 millitorr; higher pressures degrade the performance through air resistance, while lower pressures add errors from larger oscillations of the pendulum. The system works best when pumped out as low as possible and backfilled with nitrogen: with oxygen present, the sparks at the electrical contacts caused oxidation, and the consumption of oxygen reduced the pressure in the chamber, affecting the clock’s rate. If a pendulum were to stop for some reason, it could be restarted by briefly opening one of the valves to let in a puff of air, which would disturb the pendulum. By repeating this several times at multiples of two seconds*, the impulses will be in phase and the amplitude can be built up, at which point the chamber can be pumped down again to its operating value. Finally, a note to avoid using all four mounting bolts, since that overconstrains the chassis (there being only three rotational degrees of freedom) and applies stress that can cause a vacuum leak.
*Two seconds is the period of a pendulum approximately one meter in length; the square root of g is almost exactly equal to pi (they agree to better than half a percent). I managed to go completely through school (undergrad and grad) before noticing that, though I’m convinced that had I been born a few years earlier and had been required to do my physics calculations with a slide rule rather than an electronic calculator, I would have picked up on this shortcut.
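The footnote's shortcut is easy to check numerically (a minimal sketch, taking the standard value of g):

```python
import math

def pendulum_period(length_m, g=9.80665):
    """Small-amplitude period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A one-metre pendulum has a period of almost exactly two seconds,
# because sqrt(g) ~= pi (3.1316 vs 3.1416 -- within half a percent).
T = pendulum_period(1.0)         # ~2.006 s
shortcut = 2 * math.sqrt(1.0)    # the slide-rule shortcut: T ~= 2*sqrt(L)
```

Substituting pi for sqrt(g) turns the period formula into T ≈ 2·sqrt(L), which is the kind of simplification that would have jumped out in the slide-rule era.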
It had once been proposed that the meter be defined this way, with a two-second period defining a meter, but there would have been considerable difficulty in realizing that standard due to variations in g. That quadrant of the earth representing 10,000 km was chosen instead.
Part III will discuss results obtained using these clocks and the measurement system.
Climate change protest sign written on a disused advertising billboard in front of the main smokestack at a coal-fired power station. The burning of coal at power stations increases the levels of carbon dioxide in the atmosphere. Carbon dioxide is a greenhouse gas, and is considered to be a major cause of human-induced global warming, an example of climate change. This is Didcot A Power Station near Didcot, Oxfordshire, UK. The power station began generating electricity in 1968, mostly burning coal, but also using some natural gas. The main smokestack is 198 metres tall. Didcot A has the capacity to produce 2000 MW (megawatts) of power.
What is Credential Stuffing? (How it Works, How to Prevent)
What is Credential Stuffing?
Credential stuffing is a type of attack in which a hacker obtains user credentials by breaching one system and then attempts to use those credentials on other systems. Like other related forms of hacking, credential stuffing attacks rely on hackers breaking into a network and stealing sensitive user information such as passwords and usernames.
Credential stuffing occurs when hackers take stolen information from one site or system and use it in a brute force attempt to gain access to multiple other systems. Hackers will sometimes check whether a stolen password or username also works on another website, or on one related to the original.
Hackers may, for example, get access to a list of usernames and passwords for a specific merchant and attempt to use them on a banking website. The assumption is that, across many such attempts, hackers will find users who have reused the same usernames and passwords, allowing access to multiple systems with the stolen login data. Credential stuffing can lead to identity theft in some cases.
How Does Credential Stuffing Work?
Extensive lists of username/password pairings that have been disclosed are used in credential stuffing attacks. In some data breaches, incorrect credential storage leads to the exposure of the whole password database. In other cases, thieves utilize password guessing attempts to breach some users' credentials. Credential stuffers can also use phishing and other similar assaults to access usernames and passwords.
These lists of users and passwords are given to a botnet, which tries to log into specific target sites with them. For example, the credentials stolen from a travel website may be checked against a vast banking institution. If any users used the same credentials on both sites, the attackers might be able to get into their accounts successfully.
Fraudsters may utilize good username/password pairs for various purposes depending on the account in question after detecting them. Some credentials allow attackers to get access to corporate networks and systems, while others allow them to make use of the account owner's bank account. This access could be used by a credential stuffing organization or sold to another party.
What Makes Credential Stuffing So Effective?
Credential stuffing assaults have a relatively low success rate, according to statistics. According to many estimates, this rate is around 0.1 percent, which means that for every thousand accounts an attacker tries to hack, they will only succeed once. Despite the low success rate, the sheer volume of credential collections traded by attackers makes credential stuffing worthwhile.
These databases include millions, if not billions, of login credentials. If an attacker possesses one million sets of credentials, they may be able to breach around 1,000 accounts.
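The arithmetic behind that estimate is simply an expected-value calculation:

```python
# Expected number of compromised accounts from a credential stuffing run,
# using the ~0.1% success rate quoted above. Numbers are illustrative.
success_rate = 0.001           # roughly 1 success per 1,000 attempts
credentials_tried = 1_000_000  # one million stolen username/password pairs

expected_breaches = round(credentials_tried * success_rate)
```

At scale, even a 0.1% hit rate yields on the order of a thousand compromised accounts per million credentials, which is why the attack stays profitable.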
The assault is worthwhile if even a tiny percentage of the broken accounts deliver valuable data (typically in credit card information or sensitive data that can be exploited in phishing attacks). Furthermore, the attacker can repeat the operation on several services using the same sets of credentials. Credential stuffing has also become a potential assault because of advancements in bot technology.
Deliberate time delays and blocking users' IP addresses who make many failed login attempts are common security mechanisms integrated into web application login forms. Modern credential stuffing software works around these safeguards by simultaneously deploying bots to attempt multiple logins from various device types and IP addresses.
The malicious bot's purpose is to blend the attacker's login attempts with regular login activity, which succeeds admirably. The increase in the overall volume of login attempts is frequently the only indicator that the targeted firm is being attacked. Even then, the targeted organization will have difficulty thwarting these attempts without jeopardizing legitimate users' ability to access the service.
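Since raw login volume is, as noted above, often the only signal that survives IP rotation, volume-based anomaly detection is one fallback. The sketch below is deliberately simplistic — the window size and threshold are arbitrary illustrative choices, and real systems would combine per-IP, per-device, and behavioral signals:

```python
from collections import deque

def spike_detector(baseline_window=7, threshold=3.0):
    """Flag an hourly login-attempt count far above the recent average.

    Returns a closure that consumes one count per call and reports
    whether that count looks like a volume spike.
    """
    history = deque(maxlen=baseline_window)

    def check(attempts_this_hour):
        if len(history) < baseline_window:
            # Still building a baseline; record and stay quiet.
            history.append(attempts_this_hour)
            return False
        mean = sum(history) / len(history)
        is_spike = attempts_this_hour > threshold * mean
        history.append(attempts_this_hour)
        return is_spike

    return check

check = spike_detector()
for n in [100, 110, 95, 105, 98, 102, 107]:  # normal baseline traffic
    check(n)
alarm = check(2500)  # a stuffing campaign shows up as raw volume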
Credential stuffing attacks are successful primarily because people reuse passwords. According to studies, most users reuse their login credentials for several services, with some estimations as high as 85 percent. Credential stuffing will continue to be profitable as long as this practice is practiced.
Brute Force Attacks vs. Credential Stuffing Attacks
Credential stuffing is a form of brute force cyberattack. However, the two differ significantly in practice, as do the best approaches to protecting your systems against them. Brute force attacks attempt to guess passwords by iterating through combinations of characters and numbers.
You can use brute force protection, a CAPTCHA, or ask your users to use a stronger password to protect themselves from failed login attempts. However, because the password is already known, a strong password will not prevent a cybercriminal from accessing an account via credential stuffing.
Even CAPTCHA or brute force defense is limited in its ability to protect users because users change their passwords in predictable patterns, and attackers have a compromised password to iterate from.
How Can Credential Stuffing Be Prevented?
Both personal and corporate security is jeopardized by credential stuffing. When a credential stuffing assault succeeds, the attacker has access to the user's account, which may contain sensitive information or the ability to conduct financial transactions or perform other privileged actions on the user's behalf. Despite the well-publicized dangers of password reuse, most users do not change their password habits.
If passwords are overused across personal and commercial accounts, credential stuffing can endanger the corporation. To reduce the danger of credential stuffing attacks, businesses can take the following steps −
Multi-Factor Authentication (MFA) − Credential stuffing attacks rely on the attacker's ability to log into an account with simply a username and password. MFA or 2FA makes these assaults more challenging because the attacker requires a one-time code to log in successfully.
CAPTCHA − The majority of credential stuffing assaults are automated. CAPTCHA on login pages can prevent some automated traffic from accessing the site and testing possible passwords.
Anti-Bot Solutions − Organizations can use anti-bot solutions in addition to CAPTCHA to prevent credential stuffing traffic. These tools employ behavioral anomalies to distinguish between human and automated site users and restrict suspect traffic.
Monitoring Website Traffic − A credential stuffing attack entails many failed login attempts. An organization's ability to stop or limit these assaults may be determined by monitoring traffic to login pages.
Checking Against Lists of Breached Credentials − Credential stuffing bots usually work from lists of credentials disclosed in data breaches. User passwords can be checked against lists of weak or breached passwords, or against services like "HaveIBeenPwned", to see if they're vulnerable to credential stuffing.
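As an illustration of that last point, here is a sketch of the k-anonymity lookup scheme popularized by HaveIBeenPwned, in which only a five-character SHA-1 prefix would ever be sent to the server. The "breach corpus" below is a local stand-in for illustration; a real check would query the service's range API:

```python
import hashlib

# Local stand-in for a breached-password corpus (stored as SHA-1 hashes).
BREACHED_PASSWORDS = {"password123", "letmein", "qwerty"}
BREACH_CORPUS = {
    hashlib.sha1(p.encode()).hexdigest().upper() for p in BREACHED_PASSWORDS
}

def is_breached(password):
    """Check a password against the corpus without revealing its full hash.

    Only the 5-character hash prefix identifies the query; the server
    side would return every suffix under that prefix, and the final
    comparison happens on the client.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Simulated server response: all suffixes whose hash shares the prefix.
    candidates = {h[5:] for h in BREACH_CORPUS if h.startswith(prefix)}
    return suffix in candidates
```

A signup or password-change form can call `is_breached()` and reject (or warn about) any password already circulating in breach dumps, cutting off the reuse that credential stuffing depends on.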
LKS2 Science Long Term Map - Knowledge
The national curriculum for science aims to ensure that all pupils:
- develop scientific knowledge and conceptual understanding through the specific disciplines of biology, chemistry and physics
- develop understanding of the nature, processes and methods of science through different types of science enquiries that help them to answer scientific questions about the world around them
- are equipped with the scientific knowledge required to understand the uses and implications of science, today and for the future.
During years 3 and 4, pupils will be taught to use the following practical scientific methods, processes and skills through the teaching of every programme of study content:
- asking relevant questions and using different types of scientific enquiries to answer them
- setting up simple practical enquiries, comparative and fair tests
- making systematic and careful observations and, where appropriate, taking accurate measurements using standard units, using a range of equipment, including thermometers and data loggers
- gathering, recording, classifying and presenting data in a variety of ways to help in answering questions
- recording findings using simple scientific language, drawings, labelled diagrams, keys, bar charts, and tables
- reporting on findings from enquiries, including oral and written explanations, displays or presentations of results and conclusions
- using results to draw simple conclusions, make predictions for new values, suggest improvements and raise further questions
- identifying differences, similarities or changes related to simple scientific ideas and processes using straightforward scientific evidence to answer questions or to support their findings. |
Leaf miners cause damage to plants both directly and indirectly. The most direct damage is caused by the larvae mining the leaf tissue, leading to desiccation, premature leaf-fall and cosmetic damage. In tropical and subtropical areas this can lead to burning in fruit such as tomato and melon. Loss of leaves also reduces yield. In full-grown plants of fruiting vegetable crops, however, a considerable quantity of foliage can get damaged before the harvest is affected.
The older larvae make wider tunnels. Feeding spots made by adult females can also reduce yield, although except with ornamental crops, this is usually of less significance. Seedlings and young plants can be completely destroyed as a result of the direct damage caused by leaf miners.
In gerbera, the larva of the American serpentine leaf miner (Liriomyza trifolii) eats its way outwards from its egg, so that its mines join to form small plates. In various other crops one finds intermediate forms of tunnelling between these ‘plate mines’ and normal mines, making it an unreliable criterion for the identification of the species.
Indirect damage arises when disease causing fungi or bacteria enter the plant tissue via the feeding spots. |
Hearing loss isn’t confined to older adults: children of all ages can experience a loss of hearing. Roughly three out of 1000 babies are born with hearing loss, and its prevalence is increasing in adolescents. Noise-induced hearing loss is largely responsible for this increase. If you suspect your child is having difficulty hearing, seek medical attention as soon as possible. Delaying can have a strong effect on a child’s learning and development.
What Causes Hearing Loss?
There are three main causes of hearing loss in children. Congenital factors contribute to children who are born with hearing problems because of genetic issues, prenatal problems, or premature birth. Otitis media (ear infection) is a very common childhood ailment that occurs when fluid accumulates in the middle ear. This can cause difficulty hearing and, in severe cases, may lead to permanent hearing damage. Acquired hearing loss is triggered by illnesses, physical trauma, exposure to loud noises, and medications.
What Are the Symptoms of Hearing Loss?
How can you tell if your child might have a hearing loss? There are a number of signs that should prompt you to have your child’s hearing tested ASAP. These include:
- A delay in speech and language.
- Failure to respond to loud noises or your voice.
- Poor academic performance.
- Frequent ear infections.
- Disorders associated with hearing loss (i.e. Down syndrome or autism).
- Family history of hearing loss.
How Is Hearing Loss Treated?
There are numerous options for treating hearing loss in children, depending upon the type and severity of their condition. Your child’s doctor may take a wait-and-see approach when it comes to otitis media; chronic cases may be treated with medications or ear tubes that are inserted surgically and allow fluid to drain from the ears.
Permanent hearing loss can be treated with hearing aids, cochlear implants, and other hearing devices that enable a child to communicate.
The earlier you act, the less chance of your child experiencing speech or learning difficulties as the result of a hearing impairment.
Call Sound Health Services for more information or to schedule an appointment. |
Summary: Two months after injecting alpha-synuclein into the intestines of rats, researchers discovered the proteins had traveled to the brain via peripheral nerves. Four months later, the pathology was greater. Additionally, the protein had traveled to the heart. The study supports the hypothesis that Parkinson’s disease may begin in the intestinal system before migrating to the brain.
Source: Aarhus University
In 2003, a German neuropathologist proposed that Parkinson’s disease, which attacks the brain, actually might originate from the gut of the patients. Researchers from Aarhus have now delivered decisive supportive evidence after seeing the disease migrate from the gut to the brain and heart of laboratory rats. The scientific journal Acta Neuropathologica has just published the results, which have grabbed the attention of neuroscientific researchers and doctors internationally.
Harmful proteins on the move
Parkinson’s disease slowly destroys the brain through the accumulation of the protein alpha-synuclein and the subsequent damage to nerve cells. The disease leads to shaking, muscle stiffness, and the characteristic slow movements of sufferers. In the new research project, the researchers used genetically modified laboratory rats that overexpress the alpha-synuclein protein. These rats have an increased propensity to accumulate harmful varieties of alpha-synuclein and to develop symptoms similar to those seen in Parkinson’s patients. The researchers initiated the process by injecting alpha-synuclein into the small intestines of the rats. According to professor Per Borghammer and postdoc Nathalie Van Den Berge, the experiment was intended to demonstrate that the protein would subsequently spread in a predictable fashion to the brain.
“After two months, we saw that the alpha-synuclein had traveled to the brain via the peripheral nerves with the involvement of precisely those structures known to be affected in connection with Parkinson’s disease in humans. After four months, the magnitude of the pathology was even greater. It was actually pretty striking to see how quickly it happened,” says Per Borghammer, who is a professor at the Department of Clinical Medicine at Aarhus University.
Symptoms in the intestine twenty years before the diagnosis
Per Borghammer explains that patients with Parkinson’s disease often already have significant damage to their nervous system at the time of diagnosis, but that it is actually possible to detect pathological alpha-synuclein in the gut up to twenty years before diagnosis.
“With this new study, we’ve uncovered exactly how the disease is likely to spread from the intestines of people. We probably cannot develop effective medical treatments that halt the disease without knowing where it starts and how it spreads – so this is an important step in our research,” says Per Borghammer, adding:
“Parkinson’s is a complex disease that we’re still trying to understand. However, with this study and a similar study in the USA that has recently arrived at the same result using mice, the suspicion that the disease begins in the gut of some patients has gained considerable support.”
The research project at Aarhus University also showed that the harmful alpha-synuclein not only travel from the intestines to the brain but also to the heart.
“For many years, we have known that Parkinson patients have extensive damage to the nervous system of the heart and that the damage occurs early on. We’ve just never been able to understand why. The present study shows that the heart is damaged very fast, even though the pathology started in the intestine, and we can continue to build on this knowledge in our coming research,” says Per Borghammer.
Evidence for bidirectional and trans-synaptic parasympathetic and sympathetic propagation of alpha-synuclein in rats
The conversion of endogenous alpha-synuclein (asyn) to pathological asyn-enriched aggregates is a hallmark of Parkinson’s disease (PD). These inclusions can be detected in the central and enteric nervous system (ENS). Moreover, gastrointestinal symptoms can appear up to 20 years before the diagnosis of PD. The dual-hit hypothesis posits that pathological asyn aggregation starts in the ENS, and retrogradely spreads to the brain. In this study, we tested this hypothesis by directly injecting preformed asyn fibrils into the duodenum wall of wild-type rats and transgenic rats with excess levels of human asyn. We provide a meticulous characterization of the bacterial artificial chromosome (BAC) transgenic rat model with respect to initial propagation of pathological asyn along the parasympathetic and sympathetic pathways to the brainstem, by performing immunohistochemistry at early time points post-injection. Induced pathology was observed in all key structures along the sympathetic and parasympathetic pathways (ENS, autonomic ganglia, intermediolateral nucleus of the spinal cord (IML), heart, dorsal motor nucleus of the vagus, and locus coeruleus (LC)) and persisted for at least 4 months post-injection. In contrast, asyn propagation was not detected in wild-type rats, nor in vehicle-injected BAC rats. The presence of pathology in the IML, LC, and heart indicate trans-synaptic spread of the pathology. Additionally, the observed asyn inclusions in the stomach and heart may indicate secondary anterograde propagation after initial retrograde spreading. In summary, trans-synaptic propagation of asyn in the BAC rat model is fully compatible with the “body-first hypothesis” of PD etiopathogenesis. To our knowledge, this is the first animal model evidence of asyn propagation to the heart, and the first indication of bidirectional asyn propagation via the vagus nerve, i.e., duodenum-to-brainstem-to-stomach. 
The BAC rat model could be very valuable for detailed mechanistic studies of the dual-hit hypothesis, and for studies of disease modifying therapies targeting early pathology in the gastrointestinal tract. |
PNG images: Recycle bin
A recycling bin (or recycle bin) is a container used to hold recyclables before they are taken to recycling centers. Recycling bins exist in various sizes for use in homes, offices, and large public facilities. Separate containers are often provided for paper, tin or aluminum cans, and glass or plastic bottles.
Did you know
Many recycling bins are designed to be easily recognisable, and are marked with slogans promoting recycling on a blue or green background along with the universal recycling symbol. Others are intentionally unobtrusive. Bins are sometimes different colors so that users may differentiate between the types of materials to be placed in them. While there is no universal standard, the color blue is commonly used to indicate a bin is for recycling in public settings.
Recycling bins are a common element of municipal kerbside collection programs, which frequently distribute the bins to encourage participation.
Recycling is the process of converting waste materials into new materials and objects. It is an alternative to "conventional" waste disposal that can save material and help lower greenhouse gas emissions (compared to plastic production, for example). Recycling can prevent the waste of potentially useful materials and reduce the consumption of fresh raw materials, thereby reducing: energy usage, air pollution (from incineration), and water pollution (from landfilling).
Recycling is a key component of modern waste reduction and is the third component of the "Reduce, Reuse, and Recycle" waste hierarchy.
There are some ISO standards related to recycling such as ISO 15270:2008 for plastics waste and ISO 14001:2004 for environmental management control of recycling practice.
Recyclable materials include many kinds of glass, paper, and cardboard, metal, plastic, tires, textiles, and electronics. The composting or other reuse of biodegradable waste—such as food or garden waste—is also considered recycling. Materials to be recycled are either brought to a collection centre or picked up from the curbside, then sorted, cleaned, and reprocessed into new materials destined for manufacturing.
In the strictest sense, recycling of a material would produce a fresh supply of the same material—for example, used office paper would be converted into new office paper or used polystyrene foam into new polystyrene. However, this is often difficult or too expensive (compared with producing the same product from raw materials or other sources), so "recycling" of many products or materials involves their reuse in producing different materials (for example, paperboard) instead. Another form of recycling is the salvage of certain materials from complex products, either due to their intrinsic value (such as lead from car batteries, or gold from circuit boards), or due to their hazardous nature (e.g., removal and reuse of mercury from thermometers and thermostats). |
Chapter Two: The Structure of Canadian Schooling
In each province, the Department or Ministry of Education, headed by the minister of education, is the central educational authority. In some provinces postsecondary education and training is assigned to a separate minister and department, while in others both portfolios are included under one minister. The minister of education, an elected member of the provincial legislature, is appointed to the education portfolio by the premier; they are also a member of Cabinet. In the Canadian parliamentary system, the Cabinet responsible to the legislature is the key planning and directing agency of government. It approves all legislation brought forward by the government and formulates policy in education and all other areas of provincial jurisdiction. Although the drafting and passing of new laws often receives the greatest attention, it is only one of the ways in which government affects education. Since most provinces have only a few basic laws governing education, and these are not revised significantly very often, most government work in education lies outside the area of legislation. The distinctions among these various avenues of government activity are explained more fully in Chapters 3 and 4.
The role played by a minister of education at any particular period of time depends on the overall priorities of the premier and the government, and on the ability of the minister to influence these priorities. Nonetheless, because ultimate legal authority over education rests with provincial governments, ministers do play a critical role in determining how a province sets long-term educational policy and in influencing the level of funding provided to schools (see Chapter 5). They make, or approve, decisions about all sorts of educational issues, from new curricula to be introduced, to rules governing the certification of teachers, to the number of credits required for high-school graduation. The minister must defend before the public the government’s policies on education, even if they personally opposed a given policy. And when parties to a local dispute at the school board or district level cannot come to an agreement, they will often call on the minister to intervene and settle the matter.
Being a government minister is an extremely demanding job. Leading a large and complex department is, in itself, a complicated task. But ministers must also participate in the work of the Cabinet as a whole, which means making decisions about all policy issues facing the province. Ministers are under constant pressure from various individuals and groups who wish to meet with them in order to influence what the government does. Ministers receive hundreds of such requests each year, just as they are asked to speak or appear at hundreds of public events. As politicians, ministers also have a responsibility to be in their constituency and available to the voters who elected them. Nor should we forget that ministers have personal lives and may reasonably want to spend time with family and friends.
Given all of this, no minister can possibly know all the details or activities under her or his authority. Most of the work of the department of education is done by civil servants within the broad guidelines set by the minister, or within agreements established by past practice. A great deal of this work is fairly routine or formalized. For example, the development of most new policies and procedures for schools normally proceeds through the work of committees, with the minister being involved, if at all, only at the end of the process in approving the final result. The issuing of certificates to new teachers (or teachers new to the province), the ongoing provision of money to school divisions, the approval of plans for new schools, the operation of distance education courses ‒ all of these activities are usually performed under the supervision of Department of Education staff. The direct involvement of ministers is usually reserved for items of great long-term importance or for those having to do with important policy directions, politically sensitive issues, or crises.
The department’s civil service is headed by the deputy minister, who is a civil servant appointed by the Cabinet. At one time, provincial deputy ministers were almost always career educators, many of whom had previously been teachers, principals, and school superintendents. In recent years, however, many provincial governments have brought deputy ministers into education from other areas of government. Unlike most other civil servants, deputy ministers serve at the pleasure of the Cabinet, which means that they can be dismissed by a government at any time. It is common in Canada when a new political party takes office after an election for the new government to replace some deputy ministers with people who are more sympathetic to its changed policy directions.
The deputy minister coordinates the work of the department in all its multiple functions. A typical department of education will have units dealing with areas such as planning, school finance, curriculum development and assessment, special education, language programs, and renovation or construction of school buildings. All of these tasks require full-time attention and some technical expertise; thus, departments of education today tend to be large organizations (although smaller than they were a decade ago) employing hundreds of people, many of whom are professional educators. The enormous range of issues dealt with in a department of education includes highly complex financial, legal, and technical questions, as well as issues typically thought of as educational, such as curriculum or school regulations.
The Department of Education is a mix of political and professional authority, embodying the tension between professional and lay control mentioned earlier in the chapter. The civil servants are generally guided by their professional training and background. Their views of the needs of education are influenced by their own background and training. They may be quite resistant to what they see as a partisan political direction taken by a government that wants public schooling to move in a certain way. The minister, on the other hand, is primarily oriented toward the political agenda of the government and to his or her own personal views and interests. The deputy minister and senior officials are caught in the middle; they are guided by professional values, but their job is to serve the duly elected minister and government. Under these circumstances, a sort of tug-of-war may occur in which ministers try to push their departments to move in particular directions, and civil servants try to convince ministers to see issues in the same ways that the civil service does. Usually, neither party feels entirely satisfied. Ministers feel that though they are elected to bring in certain policies, their will is often frustrated by unelected civil servants. Civil servants, on the other hand, feel that ministers do not always understand the subtleties of education, and may be guided by short-term political considerations at the expense of long-term educational needs. These tensions are part of the process of government and can contribute toward developing policies that are sensitive to both professional skills and public wants (Levin, 2005).
Ontario Ministry of Education Organizational Chart
Source. Ontario Ministry of Education Organizational Chart. Available at http://www.edu.gov.on.ca/eng/general/edu_chart_eng.pdf |
Poetry: Learning about Poetry
Elements of Poetry

What is poetry? Poetry is not:
1. Prose chopped up into lines
2. Sweet, fluffy descriptions
3. Proverbs that end in rhymes
4. Grand, stuffy language that sounds like something from the 16th century
Elements of Poetry (cont.)

Poetry is language that's alive.

Question: What are the 3 most important ingredients in a poem?
- Music
- Emotion
- Magic
Elements of Poetry

- Music is about rhythm and the way sounds rub together.
- Everyone has emotions. Just think of how many emotions you have felt today. Now imagine all the emotions you have ever experienced in your life. Tap into one of these feelings and make the reader feel what you felt, experience that same emotion.
- Magic: not hocus pocus, abracadabra magic, but the ability to see things around you in a whole new way.
Elements of Poetry (cont.)

To do this, poets use a variety of specific elements and techniques:
1. Sound Devices
2. Figurative Language
3. Sensory Language
Sound Devices

- Sound devices add a musical quality to poetry.
- Poets use these devices to enhance a poem's mood and meaning.
- There are five common sound devices that poets use.
Sound Devices (cont.)

1. Rhyme: the repetition of sounds at the ends of words, such as pool, rule and fool. (ﺍﻟﻘﺎﻓﻴﺔ)
2. Rhythm: the beat created by the pattern of stressed and unstressed syllables: The cat sat on the mat. (ﺍﻹﻳﻘﺎﻉ)
3. Repetition: the use of any element of language – a sound, word, phrase, clause or sentence – more than once. (ﺍﻟﺘﻜﺮﺍﺭ)
Sound Devices (cont.)

4. Onomatopoeia: the use of words that imitate sounds: crash, bang, hiss, splat. (ﺍﻟﻤﺤﺎﻛﺎﺓ ﺍﻟﺼﻮﺗﻴﺔ)
5. Alliteration: the repetition of consonant sounds at the beginning of words: lovely lonely lights. (ﺟﻨﺎﺱ)
Figurative Language

- Figurative language is writing or speech that is not meant to be taken literally.
- The many types of figurative language are called figures of speech.
- Writers use three common figures of speech to state ideas in a vivid and imaginative way.
Figurative Language (cont.)

1. Metaphors describe one thing as if it were something else. They often point out a similarity between two unlike things: The snow was a white blanket over the town.
2. Similes use like or as to compare two apparently unlike things and show similarities between the two: She is as slow as a turtle.
Figurative Language (cont.)

3. Personification gives human qualities to something that is nonhuman: The ocean crashed angrily during the storm.
Sensory Language

- Sensory language is writing or speech that appeals to one or more of the five senses – sight, sound, smell, taste, and touch.
- This language creates word pictures, or images.
Forms of Poetry

- Poems can tell stories, describe natural events, and express feelings.
- Some poems are shaped to look like their subjects, and others follow strict patterns of rhyme, rhythm or syllables.
- By reading poems, you can learn a new way to see something that you have looked at hundreds of times before.
- There are many different kinds – or forms – of poems.
Forms of Poetry (cont.)

Narrative: Poetry that tells a story in verse. Narrative poems often have elements similar to those in a short story, such as plot and characters.

The Little Boy and the Old Man
by Shel Silverstein

Said the little boy, "Sometimes I drop my spoon."
Said the old man, "I do that too."
The little boy whispered, "I wet my pants."
"I do that too," laughed the little old man.
Said the little boy, "I often cry."
The old man nodded, "So do I."
"But worst of all," said the boy, "it seems
Grown-ups don't pay attention to me."
And he felt the warmth of a wrinkled old hand.
"I know what you mean," said the little old man.
Forms of Poetry (cont.)

Lyric: Poetry that expresses the thoughts and feelings of a single speaker, often in highly musical verse.

"I Wandered Lonely as a Cloud"
by William Wordsworth

I wandered lonely as a cloud
That floats on high o'er vales and hills,
When all at once I saw a crowd,
A host, of golden daffodils;
Beside the lake, beneath the trees,
Fluttering and dancing in the breeze.
Forms of Poetry (cont.)

Concrete: Poems that are shaped to look like their subjects. The poet arranges the lines to create a picture on the page.

Triangle

I am a very special shape,
I have three points and three lines straight.
Look through my words and you will see
the shape that I am meant to be.
I'm just not words caught in a tangle.
Look close to see a small triangle.
My angles add to one hundred and eighty degrees,
you learn this at school with your abc's.
Practice your maths and you will see
some other fine examples of me.
Forms of Poetry (cont.)

A sonnet is a fixed form of lyric poetry that consists of fourteen lines, usually written in iambic pentameter. Traditional subjects include love and faith.

Iambic pentameter: a metrical pattern in poetry which consists of five iambic feet per line. (An iamb, or iambic foot, consists of one unstressed syllable followed by a stressed syllable.)
Types of Sonnets

The Petrarchan sonnet, also known as the Italian sonnet, is divided into an octave (8 lines), which typically rhymes abba abba, and a sestet (6 lines), which may have varying rhyme schemes. Common rhyme patterns in the sestet are cdecde, cdcdcd, and cdccdc. Very often the octave presents a situation, attitude, or problem that the sestet comments upon or resolves.

The English or Shakespearean sonnet: the form was introduced to England by Sir Thomas Wyatt in the 16th century and came to maturity with Shakespeare, who wrote 154 sonnets. The rhyme scheme used is abab cdcd efef gg (7 rhymes). The epigrammatic force of the last couplet is very strong; it sums up the message or gives it a twist.

The Spenserian sonnet has an interlocking rhyme scheme abab bcbc cdcd ee.
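Rhyme-scheme notation can be checked mechanically: each distinct letter marks one rhyme sound, so counting distinct letters recovers the "(7 rhymes)" figure for the Shakespearean scheme. A small Python sketch (the function name is our own, not from the slides):

```python
def rhyme_count(scheme):
    # Each distinct letter in a rhyme scheme marks one rhyme sound.
    return len(set(scheme.replace(" ", "").lower()))

assert rhyme_count("abab cdcd efef gg") == 7   # Shakespearean
assert rhyme_count("abba abba cde cde") == 5   # one Petrarchan variant
assert rhyme_count("abab bcbc cdcd ee") == 5   # Spenserian
```

The same check works for any scheme written in this letter notation.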
Social distancing: how does it work?
Published Mar 24, 2020 • Updated Mar 25, 2020 • By Pauline Heyries
Since the appearance of SARS-Cov2, the virus that causes COVID-19, social distancing measures have been put in place to avoid transmission and stop the spread of the coronavirus. How does an epidemic spread? What is the purpose of containment measures? We tell you everything and encourage you to stay home!
What is an epidemic?
An epidemic is the outbreak and spread of a contagious infectious disease that strikes a large number of people at the same time and in the same place.
Today, the word "epidemic" extends to phenomena that are not necessarily infectious. This is the case, for example, when we speak of the "obesity epidemic".
According to the World Health Organization (WHO), a pandemic is an epidemic that affects a large number of people in a very large geographical area.
Since March 11, 2020, COVID-19 is considered a pandemic because it affects 155 countries (out of 198 recognized by the UN). Calling the COVID-19 epidemic a pandemic does not mean that the virus has become more deadly, it is simply a recognition of its global spread.
What are the mechanisms for the spread of an epidemic?
The spread of an infectious agent within a population is a dynamic phenomenon: the number of healthy and sick individuals evolves over time depending on the amount of contact between the two groups, allowing the agent to pass from an infected individual to a healthy non-immune individual.
The general transmission mechanism of an epidemic is made up of three elements:
- The pathogenic/infectious agent
- The vector
- The environment.
The vector is the living organism that transmits the pathogen/infective agent from one individual to another in its environment.
It is possible to monitor and compare epidemiological trends between cities, regions, countries or continents over time thanks to the epidemic threshold. The epidemic threshold is defined as the critical number or density of susceptible hosts required for an epidemic to occur. When the epidemic threshold is exceeded, preventive and precautionary measures may be adopted or requested by the health authorities.
If you’d like to learn more about the spread mechanism of the COVID-19 epidemic we invite you to read our article: “Coronavirus: What do you need to know?”
What are the strategies used to contain an epidemic?
In the face of an epidemic, different strategies are used to decrease contamination among the population:
- At the individual level, we can reduce the transmission of the virus by implementing barrier gestures adapted to the ways in which the virus is spread. In the case of COVID-19, the main barrier gestures are: coughing into your elbow, keeping a distance of three feet from others, using single-use tissues, etc.
- At the collective level, social distancing reduces the likelihood of contact between infected and uninfected people, thereby reducing disease transmission, morbidity (the impact of a disease on health) and mortality (the number of deaths). Social distancing is particularly effective when the infection is transmitted through respiratory droplets (as in the case of coronavirus) or direct or indirect physical contact.
Conversely, social distancing is less effective in cases where the infection is transmitted primarily through contaminated food or water or by vectors such as mosquitoes.
In the case of the COVID-19 epidemic, why does social distancing play such an important role?
The coronavirus is transmitted through respiratory droplets, so any close contact (unwashed hands, contact within three feet) with a sick person carries a risk of contamination. Furthermore, coronaviruses seem to survive for several hours (8 to 12 hours) in the outside environment on inert surfaces (door handles, tables, elevator buttons, etc.).
For the epidemic to regress, it is necessary to reduce exposure by adopting social distancing measures. This method has already proved successful during the "Spanish" flu of 1918, which claimed more victims than the First World War.
Epidemiologists fear that an "explosion" of contaminations will generate more cases than the health system can handle; in this situation, patients could die due to the lack of care because no beds are available. While COVID-19 is generally benign, especially in children and young adults, it can also be serious: 1 in 5 patients needs to be hospitalized. It is therefore essential to respect social distancing measures if we want to stop the spread of the virus.
In addition, a team of INSERM (the French National Institute of Health and Medical Research) epidemiologists published a study on March 14th that indicated that eight weeks of school closures and 25% home-working would be enough to delay the epidemic's peak by two months and reduce the number of cases by 40% at the height of the epidemic.
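To illustrate why reducing contacts delays and flattens the epidemic peak, here is a toy SIR (susceptible-infected-recovered) simulation. It is not the INSERM model: the transmission and recovery rates (beta, gamma) and the 40% cut in the contact rate are illustrative assumptions only.

```python
def simulate_sir(beta, gamma=0.1, i0=1e-4, days=300):
    """Daily-step SIR model; s, i, r are fractions of the population."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i   # contacts between infected and susceptible
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_i:
            peak_i, peak_day = i, day
    return peak_i, peak_day

# R0 = beta / gamma: roughly 3 without distancing, lower with it.
base_peak, base_day = simulate_sir(beta=0.3)
distanced_peak, distanced_day = simulate_sir(beta=0.18)  # 40% fewer contacts

print(distanced_peak < base_peak, distanced_day > base_day)  # -> True True
```

Even in this crude sketch, lowering the contact rate both shrinks the peak (fewer simultaneous cases for hospitals to absorb) and pushes it later, which is exactly the effect the containment measures aim for.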
Why is it important to comply with containment measures even if you have no symptoms?
COVID-19 can manifest itself in very different ways from one individual to another. The illness can resemble a seasonal cold or flu (cough, fever, runny nose) as well as a severe respiratory infection like pneumonia or SARS (Severe Acute Respiratory Syndrome). But it can also be asymptomatic (presence of the virus in the body without symptoms or clinical signs of infection). These asymptomatic individuals are called healthy carriers. Even if they do not have symptoms, healthy carriers remain contagious and can spread the disease to others.
With or without symptoms, a person contaminated with COVID-19 can transmit the pathogen to an average of 2 or 3 people. In addition, the time between infection and the onset of symptoms (incubation period) is 3 to 5 days in most cases, but can be as long as 14 days. This means that an infected person can transmit the disease up to 2 weeks before the onset of symptoms. Under these conditions, it is understandable why it is important to comply with containment measures even if there are no symptoms.
What are the social distancing measures imposed by the government on Tuesday, March 24?
On Monday, March 16, the President announced the adoption of strong social distancing measures nation-wide:
- Encouraging people to stay home, engaging in work and schooling from home where possible
- Staying at home if you or your household members feel sick or if you have tested positive for the coronavirus
- Staying at home if you are an older person or a person with a serious underlying health condition
- Avoiding social gatherings and discretionary travel, shopping trips and social visits
- Avoiding eating out at restaurants, bars, and food courts in favor of drive-thru, pickup, or delivery options
- Stopping visits to nursing homes or retirement or long-term care facilities unless to provide critical assistance
These measures will last for two weeks from March 16, at which point the government and CDC will reexamine the situation and evaluate if they may be lifted or extended.
The following states have implemented stay-at-home orders as of Tuesday, March 24:
- Anchorage - effective March 22 until March 31.
- Effective March 19 until further notice.
- Boulder - Effective March 24 until April 10.
- Denver - Effective March 24 until April 10.
- Pitkin County - Effective March 23 until April 17.
- Effective March 23 until April 22.
- Effective March 24 until further notice.
- Miami Beach - effective March 24 until March 26, unless extended by the City Commission.
- Atlanta - effective March 24 until April 7.
- Effective March 25 until April 30.
- Blaine County - effective March 20 until April 13.
- Effective March 21 until April 7.
- Effective March 24 until April 6.
- Johnson County - effective March 24 until April 23.
- Leavenworth County - effective March 24 until April 23.
- Douglas County - effective March 24 until April 23.
- Wyandotte County - effective March 24 until April 23.
- Effective March 23 until April 13.
- Effective March 24 to April 7.
- Effective March 24 until April 13.
- St. Louis County - effective March 23 until April 22.
- Kansas City - effective March 24 until April 23.
- St. Louis - effective March 23 until April 22.
- Effective March 21 until further notice.
- Effective March 24 until April 10.
- Effective March 22 until further notice.
- Effective March 23 until April 6.
- Effective March 23 until further notice.
- Allegheny County, Bucks County, Chester County, Delaware County, Monroe County, Montgomery County, Philadelphia County - effective March 23 until April 6
- Nashville and Davidson County - effective March 23 until April 6.
- Memphis - Effective March 24 until April 7.
- Dallas County - effective March 23 until April 3.
- San Antonio - effective March 24 until April 9.
- Tarrant County - effective March 24 until April 7.
- Collin County - effective March 24 until March 31.
- Austin - effective March 24 until April 13.
- Effective March 23 until April 6.
- Effective March 24 until further notice.
- Effective March 25 until April 24.
For more information, check your state or local government's websites.
What do I risk if I do not respect social distancing measures?
Every citizen is advised to comply with these new measures. Though the federal government has not issued specific guidelines regarding enforcement of social distancing measures, depending on your location, state or local law enforcement may have the power to issue citations or disperse gatherings where people do not comply. Check with your local government to find out more. It is also important to remember that, by not respecting social distancing strategies, you are putting not only yourself at risk of infection but your loved ones too.
While social distancing measures are effective, containment can have a significant impact on the state of stress and psychological well-being of the population. What do you do to feel less isolated? What advice do you have to help you cope better during this period of confinement? Let's share our tips in comments! |
Welcome to Brian Whittingham's STIMULUS WORKSHEET 1

The following section is set up mainly for viewing on the web. You can download a printable and easier to use version of this worksheet. Click here to download the MS Word document.
TOPIC / POEM Questions for discussion
Who is being spoken to in the poem?

Who might be the speaker in the poem?

Which planets aren't mentioned in the poem?

Once you've named all the planets, do a drawing, or improvise with drama, or draw a solar map on the board, of where they are in relation to the sun.

Some planets are made from rock, and some from gas; which is which?

Can you find out any other peculiar or interesting facts about the solar system? (Some are referred to in the poem.) Make a class list of them and discuss.
Write down your cosmic address. Example:
The United Kingdom
The Solar System
The Milky Way Galaxy
If you were an inhabitant of another planet, what might your address be and what would you be called? (A Mercurian? A Plutonian? Or what?)

Read Edwin Morgan's 'The First Men on Mercury' (one, two or a group of pupils), then discuss what language you might speak if you came from another planet. Also possibly consider what monetary system you might have; maybe draw an alien banknote, draw an alien map of an alien town, etc.
Imagine you are writing a message to be left for aliens when you arrive on their planet. What would the message be?

Write it in Alien language. Example:

Original message: We come in peace
In Alien language (reverse the order of letters, then add 'ly'): Ewly emocly nily ecaeply.

Practice writing this code using your own text or made up messages.
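The worksheet's cipher (reverse each word's letters, then add 'ly') can also be automated. A minimal Python sketch, with a function name of our own choosing:

```python
def to_alien(message):
    """Reverse each word's letters and append 'ly', as in the worksheet example."""
    words = []
    for word in message.split():
        words.append(word[::-1].lower() + "ly")
    words[0] = words[0].capitalize()  # keep sentence-style capitalization
    return " ".join(words) + "."

print(to_alien("We come in peace"))  # -> Ewly emocly nily ecaeply.
```

Pupils could compare their hand-written encodings against the program's output, or extend the rule (for example, a different suffix per planet).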
Write a poem about being one of the planets, or a solar wind, or a comet, or a meteoroid, or a rocket, space-station or whatever. Who would your friends be? What kind of plants would grow on you? What would your weather be like?

Try doing this from the point of view of fact, then fiction. Which works best for you?

We've written about a cosmic postman; what else could you consider? A cosmic pupil, a cosmic nurse, a cosmic lollipop lady, a cosmic bus conductor or whatever?

Round off assignments with drama improvisation, dance, and/or a drawing session.
In mathematics, an empty product, or nullary product, is the result of multiplying no factors. It is by convention equal to the multiplicative identity, 1 (assuming there is an identity for the multiplication operation in question), just as the empty sum—the result of adding no numbers—is by convention zero, or the additive identity.
The term "empty product" is most often used in the above sense when discussing arithmetic operations. However, the term is sometimes employed when discussing set-theoretic intersections, categorical products, and products in computer programming; these are discussed below.
Nullary arithmetic product
Let $a_1, a_2, a_3, \ldots$ be a sequence of numbers, and let

$$P_m = \prod_{i=1}^{m} a_i$$

be the product of the first $m$ elements of the sequence. Then

$$P_m = P_{m-1} \cdot a_m$$

for all $m = 1, 2, \ldots$ provided that we use the conventions $P_1 = a_1$ and $P_0 = 1$. In other words, a "product" with only one factor evaluates to that factor, while a "product" with no factors at all evaluates to 1. Allowing a "product" with only one or zero factors reduces the number of cases to be considered in many mathematical formulas. Such "products" are natural starting points in induction proofs, as well as in algorithms. For these reasons, the "empty product is one" convention is common practice in mathematics and computer programming.
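The convention is visible directly in programming languages. In Python (3.8+), for instance, `math.prod` of an empty iterable returns 1, and a fold over multiplication naturally starts from the multiplicative identity:

```python
import math
import operator
from functools import reduce

def product(xs):
    # Folding from the multiplicative identity makes the empty case come out as 1.
    return reduce(operator.mul, xs, 1)

assert product([]) == 1           # empty product
assert product([7]) == 7          # a single factor evaluates to that factor
assert product([2, 3, 4]) == 24   # ordinary case
assert math.prod([]) == 1         # the standard library follows the same convention
```

Starting the fold at 1 is exactly the convention $P_0 = 1$: it removes the special case for empty input from every caller.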
In Our Universe (Belknap), cosmologist Jo Dunkley clearly explains many of the big things we know about the universe, and how scientists came to discover them, from black holes, to distant galaxies, expanding space, and more. Complemented by simple, effective illustrations of complex astrophysical concepts and techniques, Our Universe is an engaging introduction to the nature of our cosmic home.The author: Jo Dunkley is professor of physics and astrophysical sciences at Princeton. She has won awards from the Royal Astronomical Society, the Institute of Physics, and the Royal Society for her work on the origins and evolution of the Universe.
Opening Lines: On a clear night the sky above us is strikingly beautiful, filled with stars and lit by the bright and changing Moon. The darker our vantage point, the more stars come into view, numbering from the tens or hundreds into the many thousands. We can pick out the familiar patterns of the constellations and watch them slowly move through the sky as the Earth spins around. The brightest lights we can see in the night sky are planets, changing their positions night by night against the backdrop of the stars. Most of the lights look white, but with our naked eyes we can notice the reddish tint of Mars, and the red glow of stars like Betelgeuse in the Orion constellation. On the clearest nights we can see the swathe of light of the Milky Way and, from the southern hemisphere, two shimmery smudges of the Magellanic Clouds.
Beyond its aesthetic appeal, the night sky has long been a source of wonder and mystery for humans around the world, inspiring questions about what and where the planets and stars are, and how we on Earth fit into the larger picture revealed by the sky above us. Finding out the answers to those questions is the science of astronomy, one of the very oldest sciences, which has been at the heart of philosophical inquiry since ancient Greece. Meaning 'law of the stars', astronomy is the study of everything that lies outside our Earth's atmosphere, and the quest to understand why those things behave the way they do.
Humans have been practicing astronomy in some form for millennia, tracking patterns and changes in the night sky and attempting to make some sense of them. For most of human history astronomy has been limited to those objects visible to the naked eye: the Moon, the brightest planets of our Solar System, the nearby stars, and some transient objects like comets. In just the last 400 years humans have been able to use telescopes to look deeper into space, opening up our horizons to studying moons around other planets, stars far dimmer than the naked eye can see, and clouds of gas where stars are born. In the last century our horizon has moved outside our Milky Way galaxy, allowing the discovery and study of a multitude of galaxies, and the detection of entirely new planets around other stars. In doing so, modern astronomy continues to seek solutions to the age-old questions of how we came to be here on Earth, how we fit into our larger home, what will be the fate of Earth far in the future, and whether there are other planets that could be home to other forms of life.
Reviews: “This luminous guide to the cosmos encapsulates myriad discoveries. Astrophysicist Jo Dunkley swoops from Earth to the observable limits, then explores stellar life cycles, dark matter, cosmic evolution and the soup-to-nuts history of the Universe. No less a thrill are her accounts of tenth-century Persian astronomer Abd al-Rahman al-Sufi, twentieth- and twenty-first-century researchers Subrahmanyan Chandrasekhar, Jocelyn Bell Burnell and Vera Rubin, and many more.”—Nature |
Ptosis is also known as blepharoptosis, which has the same meaning.
(However, note that when used as a suffix, "-ptosis" denotes a lowered position of tissue, a body part, or organ.)
Ptosis refers to the drooping of the upper eyelid, that is the upper eyelid resting at a lower position than is normal. Either just one eye (unilateral) or both eye (bilateral) may be affected.
Possible causes of ptosis include:
- Disorder of the third cranial nerve, the oculomotor nerve.
In this case ptosis is likely to be accompanied by paralysis of eye movements.
- Horner's syndrome.
In this case ptosis is likely to be accompanied by a small pupil and absence of sweating on the affected side of the face.
- Myasthenia Gravis.
In this case ptosis will increase with fatigue (tiredness) and be part of more widespread fatigue.
- Congenital ptosis.
In this case the ptosis is present from birth.
- Disease of the eye muscles.
In this case the ptosis is accompanied by weak or lacking ability to move the eye.
Treatment options include addressing any treatable cause(s).
If appropriate, surgery may be recommended and may involve adjustments to relevant tissues and facial muscles.
Evidence About Earth’s Past (Book)
ABSOLUTE AGES OF ROCKS
The age of a rock in years is its absolute age. Absolute ages are much different from relative ages. The way of determining them is different, too. Absolute ages are determined by radiometric methods, such as carbon-14 dating. These methods depend on radioactive decay.
SCI-MS.ESS1.04 Construct a scientific explanation based on evidence from rock strata for how the geologic time scale is used to organize Earth’s 4.6-billion-year-old history.
- Describe radioactive decay.
- Explain radiometric dating. |
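As a sketch of how a radiometric age calculation works: a radioactive isotope decays so that the remaining fraction follows N/N0 = (1/2)^(t / half-life), and solving for t gives the absolute age. The function name below is our own, and the carbon-14 half-life of 5,730 years is the commonly quoted approximate value.

```python
import math

C14_HALF_LIFE_YEARS = 5730  # approximate half-life of carbon-14

def age_from_fraction(fraction_remaining, half_life=C14_HALF_LIFE_YEARS):
    """Solve N/N0 = (1/2) ** (t / half_life) for the elapsed time t."""
    return half_life * math.log(1 / fraction_remaining) / math.log(2)

# A sample retaining 25% of its original carbon-14 is two half-lives old:
print(round(age_from_fraction(0.25)))  # -> 11460
```

Other radiometric methods use the same logic with different isotopes and much longer half-lives, which is what lets them date rocks billions of years old.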
Scientists grow miniature beating hearts from stem cells
Scientists at University of California at Berkeley have induced pluripotent stem cells to form miniature hearts by using special scaffolds. Induced pluripotent stem cells are adult cells that have been genetically reprogrammed to be like embryonic cells that are capable of producing nearly any other cell.
Stem cells don’t just spontaneously form into us. The structure of the womb helps determine how the cells organize themselves. Designing the right biophysical scaffold is important, so they end up in the correct shape. The scientists have found a specific cell-patterning method to direct stem cells to form not only microchambers, but beating ones.
This could be a very important step toward growing new organs from a patient's own cells. Even as microhearts, they're already useful: they can be used to test whether drugs would harm a developing embryo's organs, without a human trial.
When children play, they are learning how to read and write. They learn abstract representation—that an object can represent something or someone else. Making meaning out of a jumble of letters and words takes the ability to reason abstractly. Pretend play provides an excellent cognitive foundation.
In play, children develop communication skills. They verbalize their intentions to others and negotiate rules. They explain their actions to parents and friends. As children narrate stories and describe scenes, they learn skills essential to clear and effective writing.
Children learn to regulate themselves during play, self-discipline important to learning how to read. Vygotsky, a noted child development theorist, argues that during play a child's behavior progresses from impulsive to deliberative and thoughtful.* Play prepares children to respect the basic rules inherent to reading, such as following stories from beginning to end.
Children incorporate literacy into their play. They make notes and lists on paper with their crayons. They pretend to read. They learn that they can leave marks of themselves on pieces of paper (and walls!).
During play, children learn critical problem-solving skills. These contribute to their ability to comprehend texts and read for meaning.
Be sure that your playspace provides children with the chance to freely play at their level and you will be successful in helping them become great readers in the future!
*Source: Leong, Deborah J. and E. Bodrova, R. Hensen, M. Henninger. (1999). Scaffolding early literacy through play. New Orleans, NAEYC, 1999 Annual Conference.
American Speech-Hearing-Language Association. (2007). How does your child hear and talk? https://www.asha.org/public/speech/development/chart.htm
Howard, A. (2006). Kids who blow bubbles find language is child's play. Economic & Social Research Council. https://www.eurekalert.org/pub_releases/2006-06/esr-kwb062006.php
McLane, J.B. & McNamee, G.D. (1991). The Beginnings of Literacy. Zero to Three Journal. https://www.zerotothree.org/resources/1056-beginnings-of-literacy |
Carolus Linnaeus developed a way to classify, or sort, plants and animals into groups. His method is still used today. This illustration shows how the classification works. Coyotes and gray wolves are related because they belong to the same phylum, class, order, family, and genus. However, they are different species. Their scientific names indicate that they belong to the same genus—Canis—but different species. The coyote's scientific name is Canis latrans. The gray wolf's scientific name is Canis lupus. |
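The comparison in the illustration can be modeled directly as a rank-by-rank walk down the Linnaean hierarchy. In this hypothetical Python sketch (the list names and helper are our own), the two taxa agree at every rank down to genus and differ only at species:

```python
RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

coyote    = ["Animalia", "Chordata", "Mammalia", "Carnivora", "Canidae", "Canis", "latrans"]
gray_wolf = ["Animalia", "Chordata", "Mammalia", "Carnivora", "Canidae", "Canis", "lupus"]

def shared_ranks(a, b):
    """Return the ranks, from kingdom down, at which two taxa agree."""
    shared = []
    for rank, x, y in zip(RANKS, a, b):
        if x != y:
            break
        shared.append(rank)
    return shared

print(shared_ranks(coyote, gray_wolf))  # shared down to genus, not species
```

The genus plus the species epithet gives the two-part scientific name (Canis latrans, Canis lupus), mirroring how the binomial naming in the text encodes the shared genus.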
Once a fire breaks out, the goal is suppression, and the first step is containment. But what does containment mean, and why is it so hard to achieve?
"Containment means that there’s some type of barrier between the area that has been burned, which we call ‘the black’ and an area that has not been burned which we refer to as 'the green,'" says Cal Fire public information officer Jaime Williams.
There are two types of barriers—natural and artificial. A stream or lake can act as a natural barrier. An artificial barrier is often a dirt path dug around the fire. Firefighters will use a bulldozer to create what is called a "dozer line," or manually carve out a path using picks and shovels, which is called a "hand line."
"They basically scrape the top layer of the grass off to leave bare mineral soil," says Williams. "That way the fire stops because there’s nothing to burn."
Or firefighters will employ a "hose lay," where they'll carry a synthetic hose around the fire, periodically spraying the area inside "the black." |
Jaguars (Panthera onca) are the largest felid species in the New World and the only member of the genus Panthera, the roaring cats, that occurs in the Americas. They are the third largest cat species, being outsized only by lions (P. leo) and tigers (P. tigris). The body weight of jaguars is 90–120 kg for males and 60–90 kg for females, with a large variation in body size. Historically, the range of jaguars was the southern United States through Central and South America as far south as southern Argentina. Their current range is limited to a broad belt from central Mexico through Central America to Northern Argentina.17 It is estimated that 10,000 jaguars are left in the wild, with an unknown number in captivity throughout Central and South America.
The biggest conservation threats for jaguars are due to habitat fragmentation and hunting of “problem cats” (due to a real or perceived high level of livestock predation). Although the specific health threats to free-ranging jaguars are largely unknown at this time, they are probably similar to those cited for the health concerns of wildlife in general and include anthropogenic influences, often associated with increased contact that wildlife have with livestock, domestic carnivores, and humans, as well as habitat fragmentation and contamination of their habitats.7
Many infectious and non-infectious diseases have been documented in captive jaguars. Non-infectious problems include a high incidence of neoplasia which may be associated with husbandry in captivity and/or longevity. Many infectious agents have been documented to cause morbidity and/or mortality including protozoan,5 bacterial,1 and viral pathogens (i.e., canine distemper, feline infectious peritonitis)2,12. Additionally, there is serologic evidence of infection with canine distemper and feline immunodeficiency virus.2-4 It is also assumed that jaguars are susceptible to the common respiratory disease agents of domestic and non-domestic cats.
Unlike in Africa where a number of studies have provided information on the health status and diseases of free-ranging large cats and other carnivores, few studies have been conducted on the health status of jaguars in the wild, with the majority of data on parasite infection and infestation.14,15 Although the solitary nature of jaguars may minimize epidemic levels of contagious diseases (i.e., Sarcoptes and canine distemper), it is assumed that the same diseases as seen in African carnivores may cause health-related problems in free-ranging jaguars.
In 1999, the Field Veterinary Program (FVP) of the Wildlife Conservation Society was approached by the Jaguar Advisory Group of the newly developed Jaguar Conservation Program. The Jaguar Conservation Program focus is on:
1. The establishment of long-term ecologic studies of jaguars in various habitats and across a range of human impacts
2. Population status and distribution surveys in critical areas and regions where jaguar status is unknown
3. Jaguar-livestock predation research projects and rancher outreach to minimize conflicts with jaguars
4. Monitoring programs to assess and respond to changes in jaguar populations, their prey and habitats
5. Health and genetics components of jaguar populations to inform research and conservation actions
6. Range-wide education materials about jaguars and threats to their survival18
The FVP was asked to develop animal handling guidelines and to incorporate a health and disease monitoring program as a part of this species-based conservation program.
Jaguar Health Program
In October 1999, as an initial overview we presented to the Jaguar Advisory Group a working outline and plan of how our group of FVP veterinarians can and should contribute to a program directed at species-based conservation. To prepare for this presentation, an initial literature search was performed to determine what was currently known about the health status of free-ranging jaguars and to compare this with information on captive jaguars and other wild and captive large felids. During this initial search, it became clear that very little information on the health of free-ranging jaguars was available in the English, Spanish, and Portuguese literature. However, by emphasizing the role that disease has played as one obstacle to the long-term conservation of other free-ranging carnivore populations, such as canine distemper in lions,16 and rabies in African wild dogs,13 the importance of a veterinary component to this species-based conservation initiative was appreciated by the members of the Jaguar Advisory Group.
A key component of this initial presentation was emphasizing how the Jaguar Health Program would be executed. First, it was stressed that the main advantages to incorporating veterinary specialists into this species-based conservation program were:
1. To provide standardized methods for safe jaguar handling and to assess the overall health status of jaguars in the wild
2. To determine disease threats to jaguars including both direct threats (i.e., infectious diseases - intraspecific and conspecific via domestic animals, livestock, other free-ranging felids, prey items) and indirect threats (i.e., habitat fragmentation and degradation that may increase disease risks)
3. To provide recommendations, based on findings from the health assessment, for the long-term management and conservation of jaguars
Further, these objectives (advantages) were presented more specifically as to how the Field Veterinary Program staff would provide “products” for the Jaguar Conservation Program including:
1. Veterinary assistance at field sites
2. A manual with standardized immobilization techniques and biomaterial handling methods
3. A centralized sample storage and dispensing site and contacts with veterinary laboratories that are experienced in non-domestic felid diagnostics
4. A bibliography on health and disease of captive and free-ranging jaguars
5. Distribution of all written materials in English, Spanish, and Portuguese
6. The incorporation of health-related issues into policy development in conservation initiatives
One challenge to this program has been the nebulous line that separates captive and free-ranging jaguars in Latin America. In this region there are a number of projects that translocate “problem” cats and others that house confiscated jaguars, under less than hygienic conditions, with the intention of reintroduction. Few of these programs consider health prior to animal movements. Therefore, the risk that diseases in these captive jaguars may become the diseases present in the wild population must be addressed. For this reason, in September 2001 we presented the topic of animal movements and disease at the biannual wildlife biologist’s meeting in Cartagena, Colombia, reaching hundreds of Latin American conservationists.8 The risk associated with animal movement projects for jaguars, as well as other wildlife in Latin America, was a new concept for many individuals that attended the presentation. Education of the biologists performing field research is one major component of the jaguar health program.
To date, our program has distributed a manual, made recommendations for incorporation of health studies into a number of jaguar field projects, performed disease surveys of conspecific species (domestic cats/dogs, small carnivores)9,11 and prey items (brocket deer10 and armadillos), and provided veterinary support for jaguars in captivity in various Latin American countries. The manual is available on the web (http://www.savethejaguar.com/fieldvet health manual.pdf)6 (VIN editor: Original link not accessible 2–09–2021) or as a hard copy (from the FVP) in English, Spanish, and Portuguese. This manual provides information for the safe immobilization of jaguars in field conditions, as well as troubleshooting for anesthetic emergencies. Additionally, it provides information on the proper methods for the collection, storage, and transportation of biomaterials that are necessary for population health evaluations. The manual is intended for field biologists working with the jaguar conservation program, and that have experience with large cat field immobilizations. We emphasize in the manual that a wildlife veterinarian should always be included in any field project that involves jaguar handling. However, often this is not the case in many Latin American countries and many of these projects are executed without direct input from veterinarians. By distributing the manual and discussions with researchers, our role has been to minimize this less-than-ideal situation and to educate those performing the field work, whether they be biologists or veterinarians.
In conjunction with the Jaguar Conservation Program, which provides small grants to a number of researchers, we in the Jaguar Health Program have contacted small grant recipients and stressed the importance, and ease, of opportunistic collection of biomaterials such as feces (i.e., for parasitic and nutritional analyses) and hair (i.e., for genetic analyses), as well as necropsy procedures. If appropriate personnel (i.e., veterinarians) are available when handling jaguars, we also discuss the collection of more invasive (i.e., blood, biopsies) biomaterials for health assessments.
The Jaguar Health Program is one example of how veterinary medicine can and should be integrated into conservation initiatives. Multi-disciplinary teams that include biologists and veterinarians should work for the common goal of conservation. In the Jaguar Health Program example, we are providing veterinary support to minimize possible negative effects associated with conservation itself (i.e., safe immobilizations and an appreciation for the introduction of anthropozoonotic disease via research), a standardized approach for determining the disease threats to the long-term conservation of jaguars throughout their range, and recommendations for policies that may directly be counter-productive as they affect the long-term health of the free-ranging population (i.e., jaguar movements, livestock/domestic animal/jaguar interface).
Infectious and noninfectious diseases are being recognized by conservation biologists as an increasing challenge to the conservation of wildlife.7 In this context there is a growing awareness in the conservation community and willingness to collaborate with veterinarians for technical planning and implementation of conservation initiatives. Now is the time to integrate veterinary programs into large species-based conservation. It is our hope that the Jaguar Health Program can serve as a template for the integration of similar health programs into species-based conservation for the improved health of both free-ranging and captive populations.
1. Abdulla PK, James PC, Sulochana S, Jayaprakasan V, Pillai RM. Anthrax in a jaguar (Panthera onca). J Zoo An Med. 1982;13:151.
2. Appel M, Yates RA, Foley GL, Bernstein JJ, Santinelli S, Spelman LH, et al. Canine distemper epizootic in lions, tigers, and leopards in North America. J Vet Diagn Invest. 1994;6:277–288.
3. Barr MC, Calle PP, Roelke ME, Scott FW. Feline immunodeficiency virus infection in nondomestic felids. J Zoo Wildl Med. 1989;20:265–272.
4. Brown EW, Yuhki N, Packer C, O’Brien SJ. Prevalence of exposure to feline immunodeficiency virus in exotic felid species. J Zoo Wildl Med. 1993;24:357–364.
5. Cirillo F, Ayala M, Barbato G. Giardiasis and pancreatic dysfunction in a jaguar (Panthera onca): case report, evaluation, and comparative studies with other felines. In: Proceedings of the American Association of Zoological Veterinarians. South Padre Island, Texas, October 21–26. 1990:69–73.
6. Deem SL, Karesh WB. The jaguar health program manual. Jaguar Conservation Program, Wildlife Conservation Society. Bronx, New York. [Online]. Available: www.savethejaguar.com/fieldvet health manual.pdf 2001:1–45. (VIN editor: Original link not accessible 2–09–2021).
7. Deem SL, Karesh WB, Weisman W. Putting theory into practice: wildlife health in conservation. Con Biol. 2001;15:1224–1233.
8. Deem SL, Uhart MM, Karesh WB. La salud de la vida silvestre en reintroducciones - lo bueno, lo malo y lo evitable. In: Polanco-Ochoa R, Lopez-Arevalo H, Sanchez-Palomino P, eds. Manejo de fauna silvestre en la amazonia y latinamérica. 2002. (in press).
9. Deem SL, Noss AJ, Cuéllar RL, Villarroel R, Linn MJ, Forrester DJ. Sarcoptic mange in free-ranging pampas foxes in the Gran Chaco, Bolivia. J Wildl Dis. 2002. (in press).
10. Deem SL, Noss AJ, Villarroel R, Uhart MM, Karesh WB. Serologic survey for selected infectious disease agents in free-ranging grey brocket deer (Mazama gouazoubira) and domestic cattle (Bos taurus) in the Gran Chaco, Bolivia. J Wildl Dis. 2002. (submitted).
11. Fiorello C, Deem SL, Noss AJ. Disease ecology of wild and domestic carnivores in the Bolivian Chaco: preliminary results. In: Proceedings of the American Association of Zoological Veterinarians. Milwaukee, WI. 2002. (in press).
12. Fransen DR. Feline infectious peritonitis in an infant jaguar. In: Proceedings of the American Association of Zoological Veterinarians. Houston, TX, 1972 and Columbus, OH, 1973:261–264.
13. Gascoyne SC, Laurenson MK, Lelo S, Borner M. Rabies in African wild dogs (Lycaon pictus) in the Serengeti region, Tanzania. J Wildl Dis. 1993;29:396–402.
14. Hoogesteijn R, Mondolfi E. The Jaguar. Caracas: Armitano Publishers; 1998:186.
15. Patton S, Rabinowitz A, Randolph S, Strawbridge S. A coprological survey of parasites of wild neotropical felidae. J Parasitol. 1986;72:517–520.
16. Roelke-Parker ME, Munson L, Packer C, Kock R, Cleaveland S, Carpenter M, et al. A canine distemper virus epidemic in Serengeti lions (Panthera leo). Nature. 1996;379:441–445.
17. Sanderson EW, Redford KH, Chetkiewicz C-LB, Medellin RA, Rabinowitz AR, Robinson JG, et al. Planning to save a species: the jaguar as a model. Con Biol. 2002;16:58–72.
18. Save the Jaguar Website [Online]: available: www.savethejaguar.com. (VIN editor: Original link not accessible 2–09–2021). |
Sinking existing shipwrecks and oil rigs and deploying rubble are all ways to create artificial reefs, designed with approved materials for algae growth. In turn, the algae attracts sea life such as barnacles, corals, and oysters, creating new scuba diving hot spots and angler destinations. Marine life is drawn to these underwater habitats because they provide shelter and food.
Coral reefs tend to occur in tropical climates and, in U.S. waters, do not exist north of the southern tip of Florida. Therefore, to increase sea life and recreational opportunities in U.S. coastal waters, man-made reefs are created, which can remain productive for one to five hundred years.
Artificial reef projects began in the 1950s and gained considerable attention in the 1980s. The second largest artificial reef was created recently in the Florida Keys National Marine Sanctuary. The intentionally sunk 521-foot-long General Hoyt S Vandenberg, a World War II vessel, now rests under 137 feet of water in the dubbed "Florida Keys Shipwreck Trek", an area stretching from Key Largo to Key West. Sinking preparations took months because of inspections. Workers removed contaminants such as millions of feet of wire, potential cancer-causing substances, materials containing mercury, and gallons of paint chips. The ship took about three minutes to fully deploy to the ocean floor. The entire project cost the state of Florida about $8.6 million, an expense paid for with annual tourism-related revenue, mostly from divers.
Florida Keys Community College also uses the artificial reef as an underwater classroom for research. The Florida waters are home to more than 1,500 artificial reefs. These structures provide food from the algae growth and divert attention from real reefs in hopes of lessening external damage from the public that may take hundreds of years to undo.
There are other issues, such as overfishing, that these man-made structures are believed to address. However, some experts say artificial reefs do not replenish but rather redistribute fish populations, because the fish most likely to benefit from these structures are the ones that actually reproduce at reef locations. The daily fishing pressure on real reefs has lessened because of these new sites, but that does not necessarily imply an increase in fish populations. Many commercially desired fish do not spawn at reefs.
The EPA controls artificial reef regulations. The EPA works with federal government divisions to ensure the delivery, placement, ownership and liability, and materials all meet standards. Permits for these types of structures are required. Many people sink strings of old tires and cars in hopes of creating a fishing haven. The 38-mile Alabama coastline has become the site of so many artificial reefs that it has actually altered the marine community. As popularity and the number of artificial reefs continue to increase, the debate continues as to whether science can accurately recreate natural ecosystems. |
Florence Nightingale is revered as the founder of modern nursing. Her substantial contributions to health statistics are less well known. She first gained fame by leading a team of 38 nurses to staff an overseas hospital of the British army during the Crimean War.1 Newspaper reports of unsanitary conditions at the military hospital had aroused the public, and the Secretary of War responded by appointing a team of nurses to address the situation. The Secretary was a friend of Nightingale's and knew her leadership skills. Nightingale and her team arrived in Turkey in November 1854. They found hospital conditions were far worse than reported. The wards were vastly overcrowded, patients were covered with rags soiled with dried blood and excrement, the water supply was contaminated, and the food inedible. Sewage discharged onto floors of wards and dead animals rotted in the courtyards. According to Nightingale, the hospital case-fatality rate during the first months after her arrival was 32%.2
Although Nightingale did not accept the concept of bacterial infection, she deplored crowding and unsanitary conditions. She put her nurses to work sanitizing the wards and bathing and clothing patients. Nightingale addressed the more basic problems of providing decent food and water, ventilating the wards, and curbing rampant corruption that was decimating medical supplies. She had to overcome an inept and hostile military bureaucracy, which she did in part by paying for remediation from private sources, including her own funds. She also kept careful statistics. Within 6 months, the hospital case fatality had dropped to 2%.
When Nightingale returned to London 3 years later, she was a national hero. However, within a few more years she had become an invalid herself (suffering at age 40 from what may have been chronic fatigue syndrome). Although she lived as a recluse for the next 50 years, she continued to exert substantial influence on nursing and public health through letters, books, conference presentations, and personal persuasion.3
She was skilled in mathematics and far ahead of her time in understanding the importance of health data. She argued (unsuccessfully) that Parliament should extend the 1860 census to collect data on sickness and disability, and she advocated for the creation of a Chair in Applied Statistics at Oxford University. The Royal Statistical Society acknowledged her contributions to health data by electing Florence Nightingale to membership—the first woman to be so honored—and the American Statistical Association made her an Honorary Member.
1. Cook E. The Life of Florence Nightingale. New York: The Macmillan Company; 1942.
2. Vandenbroucke JP, Vandenbroucke-Grauls CM. A note on the history of the calculation of hospital statistics. Am J Epidemiol.
3. Nightingale F. Notes on Hospitals: Being Two Papers Read Before the National Association for the Promotion of Social Science at Liverpool, in October 1858: With Evidence Given to the Royal Commissioners on the State of the Army in 1857. London: John W Parker and Son, West Strand; 1859.
Prehypertension refers to slightly increased blood pressure. A blood pressure reading has two numbers: the first measures the pressure in the arteries when the heart beats, and the second measures the pressure in the arteries between beats. Prehypertension occurs when levels range from 120–139 mmHg over 80–89 mmHg. It is a warning sign that an individual can get high blood pressure in the future if he or she doesn't start making healthier lifestyle choices. Both prehypertension and high blood pressure increase the risk of heart attack, stroke and heart failure. Weight loss, exercise and other healthy lifestyle changes can often help control prehypertension and decrease the risks associated with it.
Prehypertension doesn't cause symptoms. The only way to detect prehypertension is to monitor and control blood pressure levels during a doctor’s visit or at home with a monitoring device.
Any factor that increases pressure against the artery walls can lead to prehypertension. Certain underlying conditions are believed to cause blood pressure levels to rise, which in turn leads to prehypertension. These conditions include:
- Obstructive sleep apnea
- Kidney disease
- Adrenal disease
- Thyroid disease
Certain medications (birth control pills, cold remedies, over the counter pain relievers and others) can also cause blood pressure to temporarily rise.
Certain factors that can increase the risk for prehypertension include:
· Age (more common in younger adults than in older adults)
· Being overweight
· Being male (more common in men than in women)
· Being of African American race
· Being physically inactive
· Being a smoker
· Having a family history of high blood pressure
· Having high levels of sodium and low levels of potassium in the body
· Drinking too much alcohol
· Having certain chronic conditions (kidney disease, diabetes and sleep apnea)
The term prehypertension is used by doctors to mark a stage at which an individual needs to start making healthier lifestyle choices, such as eating healthier foods or starting to exercise regularly. Prehypertension itself doesn't often have complications; however, it is likely to worsen and develop into high blood pressure (hypertension) if certain preventative measures aren't taken.
High blood pressure can damage the organs and increase the risk of several conditions including heart attack, heart failure, stroke, aneurysms and kidney failure.
Treatment for prehypertension usually involves making healthier lifestyle changes. If an individual has prehypertension accompanied by diabetes, kidney disease or cardiovascular disease, a doctor may recommend blood pressure medication in addition to lifestyle changes. These changes may include:
· Focusing on a healthy diet rich in fruits and vegetables and low in salt
· Maintaining a healthy weight
· Losing weight if necessary
· Exercising regularly
· Limiting alcohol intake
· Quitting smoking
The same healthy lifestyle changes that are recommended for treating prehypertension also help prevent prehypertension and its progression to high blood pressure (hypertension). Preventative measures include:
- Eating healthier foods
- Maintaining a healthy weight
- Using less salt
- Exercising regularly
- Drinking less alcohol
- Quitting smoking
The sooner healthier habits are adopted, the better. |
When you are analysing a source, it is helpful to compare the information it provides with that found in other sources. This helps you to more successfully evaluate your sources, especially in regards to their accuracy.
Corroboration is the ability to compare information provided by two separate sources and find similarities between them.
When a second source provides the same or similar information to the first, the second source is considered to corroborate (e.g. support, or agree with) with the first.
Finding corroboration between sources strengthens your conclusions, especially when you are making a historical argument.
When choosing sources to corroborate, pick those that are deemed particularly reliable, which adds further certainty to your claims.
In order to identify information that is agreed upon by two different sources, follow these steps:
To help you complete the above steps successfully, you can use a Venn Diagram or a table like the one below, to organise your thoughts:
| Information Found in Source 1 | Information Found in Source 2 | Information Found in Both Sources |
If, in the process of finding corroboration between sources, you find that the two sources provide information that is different to each other, you have potentially discovered contradiction between them. This is another source analysis skill and you can find out more about contradiction here.
Demonstrating source corroboration in your writing:
Smith says that “the Spartans’ victory was dependent upon their superior military training” (1981, 31), which is supported by Jones who says that “the Persians could not match the Spartan’s disciplined tactics” (1994, 56-62).
Influential anti-imperialist writers Chomsky and Blum say the same thing: violence should not be used by anti-US campaigners (Chomsky, 1998, 34; Blum, 2006, 112).
Chomsky and Blum, influential anti-imperialist writers, have both cautioned members of more extremist anti-US movements against violence (Chomsky, 1998, 34; Blum, 2006, 112). |
Learning to program FPGAs will stand you in good stead for a lot of different jobs in both development and software engineering. FPGAs can be used to read sensors, control machines and provide data and analysis. They are an important part of controlling a lot of major machines and plants.
How To Learn FPGA Programming?
Learning FPGA programming is not easy. There is a wealth of information about basic programming concepts out there, but not so much about FPGAs. The best starting point would probably be to develop some familiarity with programming in C or a similar language, so that you understand concepts such as loops, IF statements, case statements, variables and constants, as well as simple operations. Once you understand this, you can start learning FPGA-specific concepts. A lot of low-cost Xilinx FPGA boards are programmed using Verilog, which is quite similar to C in terms of syntax.
FPGAs are, essentially, a huge array of gates, with built-in interfaces, endpoints, memory controllers and other additions to them. It would perhaps be more fair to say that an FPGA is a large-scale, highly capable CPLD, rather than belittling it by saying that it is just a bunch of gates.
Verilog Programming Language
Verilog is a programming language that can be used to describe the operations that you want to do. Before HDLs such as Verilog became popular, programmers would use a schematic to lay out what they wanted to do. Schematics are still useful, but they do not scale well. When you start trying to do the schematics for more complex applications, it gets harder to keep track of what is going on. Schematics can be useful for people who have some background in electronics, but it is a very good idea to learn how to work with an HDL. This will stand you in good stead when you reach the point where you are working with more complex applications.
You don’t need an IDE to work with Verilog – although it will help. You can use a good text editor – something such as Notepad++ which has some good indentation management and syntax highlighting features. You will also need to download the development tools from Xilinx themselves.
You can learn Verilog quite easily, since the syntax is so similar to plain old C, however the difference between ‘knowing the syntax’ and ‘being a good programmer’ is massive. Good programmers have an understanding of not just the syntax, but also how to handle the issue of concurrency. In most standard programming tasks you would be thinking of one thing happening at a time. In FPGAs, many things are happening at the same time.
Verilog deals with what is happening in a digital circuit. Verilog uses modules that are the equivalent to components in the circuit – such as a gate or a complex entity such as an ALU. Modules are, in some ways, similar to C++ classes, and they can be instantiated and used in a similar way. In Verilog, you would describe the inputs, outputs and assignments for each part of the circuit. |
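The module concept described above can be sketched in a few lines of Verilog. This is a minimal illustration, not production code, and the module and signal names here are invented for the example rather than taken from any particular board or toolchain: a 2-input AND gate described as a module, then instantiated twice in a top-level design.

```verilog
// and_gate: a 2-input AND gate described as a Verilog module.
module and_gate (
    input  wire a,
    input  wire b,
    output wire y
);
    // Continuous assignment: y is re-evaluated whenever a or b changes.
    assign y = a & b;
endmodule

// top: instantiates two copies of and_gate, much like creating two
// objects of a C++ class. Both instances operate concurrently --
// there is no "first" or "second" statement at runtime.
module top (
    input  wire x1,
    input  wire x2,
    input  wire x3,
    output wire out
);
    wire t;
    and_gate u1 (.a(x1), .b(x2), .y(t));
    and_gate u2 (.a(t),  .b(x3), .y(out));
endmodule
```

In a real flow you would simulate this with a testbench and then synthesize it with the vendor tools; the point here is simply that each module describes a piece of the circuit, and instantiation wires those pieces together.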
view a plan
This yummy fall math idea uses candy corn to illustrate inequalities
Title – Candy Corns Are the Greatest!
By – Melanie Garrison
Primary Subject – Math
Grade Level – 1 – 2
Materials: candy corn patterns, candy corn inequality cards, pencils, number cards
- 1. TLW determine which numbers are lesser or greater than other numbers.
- 2. TLW write inequalities in numerical and verbal form.
- 3. TLW order numbers from highest to lowest on paper.
Procedure (20-minute lesson):
- 1. Write random numbers on card stock squares. Give each student a different number. (Numbers can range from 10 to 30.)
- 2. Ask students to compare their number with a partner. Have them make an inequality with their number and their partner’s number. Which number is lesser, or which number is greater? (12 < 21 or 24 > 15).
- 3. Ask students to read their inequalities aloud to you.
- 4. Trace a candy corn on poster board or paper. Turn the candy corn on its side. Write one number on the pointy top and one number on the bottom.
- 5. Give a candy corn to each student and ask them to write a > or < in the middle of the candy corn.
- 6. Ask each student to read his or her inequality aloud to you. Write all of the inequalities on the board or overhead.
- 7. Write a few of the inequalities backwards. (If you have 11 < 20, write 20 > 11.)
- 8. Then ask everyone to write down the reversals of all the inequalities on the board, including the ones you did as an example.
- 9. You may give out candy corn at the end of the lesson if you want to do so.
Closure: Review lesson. Review > and < signs and how you use them.
Evaluation: Use the activity from #8 above for an assessment.
E-Mail Melanie Garrison ! |
This Learning Activity is divided into three sections. Adults should guide children through it in the following order:
- Minds On: Introduces the learning concepts to be explored in the Learning Activity.
- Action: Offers a focused activity to explore the content and discover key concepts.
- Consolidation: Provides students with an opportunity to deepen understanding and reflect on learning. |
The Drake Passage is a body of water. It lies between the Atlantic and Pacific oceans. To the north is Cape Horn and the South American continent; to the south are the South Shetland Islands, now part of the British Antarctic Territory. It is part of the Southern Ocean. It is named after the English privateer Sir Francis Drake, who accidentally discovered it. He never sailed the passage, because sailing the Strait of Magellan was less dangerous.
The Drake Passage is also the shortest route from Antarctica to the rest of the world. The only islands in the passage are the Diego Ramirez Islands, about 60 km south of Cape Horn.
The passage is also known for very rough seas. Waves of 10 m are not uncommon here.
In old books, the passage is called Drake Strait.
Other websites
- National Oceanography Centre, Southampton page of the important and complex bathymetry of the Passage
- A personal story describing crossing the Passage
- A NASA image of an eddy in the Passage
- Larger-scale images of the passage from the US Navy (Rain, ice edge and wind images)
- BBC News story on a scientific study dating the age of the Drake Passage |
Story Writing with Given Outline for Class 5 English
The students will learn to develop story writing with a given outline. They will also study some examples of stories written from outlines, accompanied by images for better understanding.
In this learning concept, the students will learn:
- To write stories with the given outlines.
- To avoid certain common mistakes while writing a story from an outline.
Every concept is taught to class 5 English students with the help of examples, illustrations, and concept maps. Once you go through a concept, assess your learning by solving the printable outline story writing worksheet given at the end of the page.
Download the worksheets and check your answers with the worksheet solutions for the concept Story Writing with Outline provided in PDF format.
A story outline is a pre-writing tool used to organize your story. It's a way to visually see the arc of your story and the major points you need to hit, so you can create a comprehensive plan for writing your book.
How to Develop a Story with an Outline?
Look at the outline of a story given below.
First day in school – Ravi – fourth grade in school – got ready – nervous – reached 10 mins before – class was full – stand up and pray – teacher – first chapter of English – ten minutes break – played badminton – continued class – lunch break at 12:30 – dosa – class till 2:30 – school got over – good day.
The First Day
Today is the first day of Ravi’s fourth grade in his school. He woke up early in the morning and got ready. He was slightly nervous as it was the first day of the new grade. He reached school 10 minutes before time. He could see the class was already full of students. After some time, the teacher came and asked everyone to stand up and pray. Then, she started to teach the first chapter of the English textbook. After the class got over, the students got a ten-minute break. He went to the playground and played badminton with his classmates for some time. After that, they continued to have classes until their lunch break at 12:30 pm. Ravi ate dosa prepared by his mother for lunch. After the lunch break, the students had two more classes till 2:30. Finally, at 2:30 pm everybody left for home. It was a good day indeed.
Give a title to the story. Make sure the title relates to the story.
The First Day
First, take a look at the characters given in the outline. They can be people, animals, or objects.
Ravi, students, classmates, badminton.
Next, see the information given in the outline and make them into complete sentences to form the story.
Outline: First day in school – Ravi – fourth grade in school
Complete sentence: Today is the first day of Ravi’s fourth grade in his school.
- Follow the sequence as given in the outline, and write the story in the same order. The beginning, middle, and end of the story should match the outline.
Do not add any details that are not relevant to the given outline. Such additional information will take the story down a different path.
Ram – studying in grade 2 – went to the field – cricket with friends - ball went high in the sky – broke a window pane – uncle got angry – scolded – gave them the ball. |
Rare, gem-studded meteorites that resemble stained-glass windows when backlit may have come from magnetic asteroids that splintered apart in ancient collisions, scientists say.
The solar system once may have been full of swarms of these tiny magnetic asteroids, investigators add.
The space rocks known as pallasites, first discovered in 1794, are very rare, with only about 50 known. These meteorites are mixtures of iron-nickel metal and translucent, gem-quality crystals of the green mineral olivine.
"How you get a mixture of metal and these gem-like crystals has been a longstanding mystery," lead study author John Tarduno, a geophysicist at the University of Rochester in New York, told Space.com. "Because of the density differences of these materials, you'd normally think they'd separate into two different groups." [7 Strangest Asteroids Ever]
Chemical analyses have suggested the pallasites came from at least three different asteroids.
The researchers speculated that any magnetized material within these meteorites might shed light on their formation, since asteroids would possess magnetic fields only under certain special circumstances.
Magnetic meteorite mystery
The researchers looked at metal specks encapsulated within olivine crystals in two pallasites. These crystals are far better at recording past magnetic conditions than the surrounding metal is.
The investigators used a laser to heat the metal grains past their individual Curie temperatures — the point at which a metal loses its magnetization. The grains were then cooled in the presence of a magnetic field in order to become re-magnetized. By monitoring the grains using a highly sensitive measuring instrument called a SQUID ("superconducting quantum interference device"), the research team was able to calculate the strength of the magnetic field that these metal particles once possessed.
The scientists found these metal specks were once strongly magnetized. This suggests the meteorites came from asteroids that were themselves once strongly magnetic, perhaps 4.2 billion to 4.4 billion years ago.
Earth's magnetic field is created by its dynamo, the churning in its molten metal core. Since asteroids are relatively small, they would have cooled quickly and no longer possess molten cores or magnetic dynamos. However, recent analyses suggest that Vesta, the second-largest asteroid in the solar system, once possessed a magnetic dynamo.
Ancient asteroid crashes
Past research had suggested that pallasites originate in the boundary layer between an asteroid's metallic core and rocky mantle, arising from the mixing of material one might find there. However, this would not explain the magnetization — if the pallasites formed this way, they would not have cooled sufficiently to become permanently magnetized before any dynamo in the asteroid decayed.
Instead, the research team's computer models suggested these magnetic pallasites formed when asteroids collided with much larger asteroids, protoplanet-sized bodies about 250 miles (400 kilometers) wide. The impact would have injected a liquid mix of iron and nickel from the cores of the smaller asteroids into the larger ones, explaining the jumble of materials seen within the meteorites. The pallasites would have formed while the dynamos of these protoplanets were still active.
"If pallasites really are made of metal from one object and minerals from another, then there might be chemical 'fingerprints' we can look for to prove this hypothesis," study author Francis Nimmo, a planetary scientist at the University of California, Santa Cruz, told Space.com. "Another critical measurement to make is to get the ages of the minerals. Our models predict particular age ranges for these minerals, which can be tested against age measurements."
Tarduno noted the meteorites they analyzed represent only one of the parent asteroids of pallasites. "We'd like to sample some of the others," he said. "The techniques we've used here can be applied to meteorites of other small bodies as well."
Past research suggests thousands of protoplanets at least 60 miles (100 km) wide once dwelt within the solar system. The new findings suggest many of these might have been magnetic.
"The more small bodies we study, the more dynamos we find," Nimmo said. "The problem is that we don't understand what is driving those dynamos. Did they operate like the Earth's dynamo, or are they driven another way — for example, by their iron cores sloshing around after a giant impact?"
The scientists will detail their findings in the Friday issue of the journal Science.
© 2013 Space.com. All rights reserved. More from Space.com. |
What is light pollution? Simply put, light pollution is the unwanted illumination of the night sky created by human activity. Light pollution is sometimes said to be an undesirable byproduct of our industrialized civilization. Light pollution is a broad term that refers to multiple problems, all of which are caused by inefficient, annoying, or arguably unnecessary use of artificial light. Specific types of light pollution include light trespass, over-illumination, and sky glow.
Where is light pollution found? The now-classic Earth at Night composite image suggests that light pollution is a problem in many parts of the world, with the worst concentrations of light pollution found in urbanized areas. In the highly industrialized and populated areas such as many parts of Europe, Asia, and North America, light pollution is a real problem. For example, in the Eastern United States, there are many areas where large expanses of land are illuminated at night. As cities and suburban areas grow, the number of lights at night also increases.
Why do we care about light pollution? Light pollution is a strong indicator of wasted energy. Lights, contrast, and glare all impact the number of stars that are visible in a given location. Only the brightest stars are visible when there is a lot of nighttime lighting. Many people in the urban locations have never seen the Milky Way.
Astronomers (both professional and amateur) have been concerned about the deteriorating quality of the night sky for some time. The excess of light obscures the night sky, making observations difficult. It is not surprising to learn that astronomers need very dark skies to conduct their observations and research.
In addition to the concerns of astronomers, we have learned that light pollution causes problems to human and environmental health. Medical research on the effects of excessive indoor light on the human body suggests a variety of adverse health effects including increased headaches, fatigue, and stress.
There is also a strong case that light pollution is harmful to the economy as well as our ecology. When you look at the Earth at Night image above, think about all that light escaping into space. All of this light is wasted, so all the energy that was produced and consumed to create the light was also wasted. Ultimately, everyone pays for this wasted energy.
With the pervasive level of light pollution, the natural patterns of light and dark have been altered, impacting animal behavior. Lights at night can impact both the biology and ecology of species in the wild. Some examples include the disorientation of sea turtle hatchlings by beachfront lighting; nesting choices and breeding success of birds; behavioral and physiological changes in salamanders; disturbances of nocturnal animals; and altered natural light regimes in terrestrial and aquatic ecosystems.
Since the 1980s, there has been a global movement to learn more about light pollution, its impacts, and ways to mitigate or reduce its effects. Light pollution impacts most of the world's citizens in one way or another. It may be that you no longer are able to go outside and enjoy an unobstructed view of the night sky.
American Renaissance: Also known as the New England Renaissance, the American Renaissance refers to a period of American literature from the 1830s to the end of the Civil War. The Renaissance Theater: The medieval drama had been an amateur endeavor, presented either by the clergy or by members of the various trade guilds; the performers were not professional actors but ordinary citizens who acted only in their spare time. The Renaissance was a time of rebirth of the studies of the Greeks and Romans, as well as the start of new ideas. Some ideas that were created in the Renaissance include individualism, secularism, and humanism. Individualism was the concept of the individual and the belief in being able to reach the best of one's abilities. Renaissance literature refers to European literature which was influenced by the intellectual and cultural tendencies associated with the Renaissance. The literature of the Renaissance was written within the general movement of the Renaissance, which arose in 14th-century Italy and continued until the 16th century while being diffused into the rest of Europe.
American Renaissance (literature) essays: One such paradigm was introduced in Henry Nash Smith's Virgin Land, which saw in American Renaissance literature a tension between civilization and nature (Selected Literary Essays from James Russell Lowell, Boston: Houghton Mifflin, 1914, 281; Correspondence of Emerson and Carlyle, 185). The Renaissance literature essay topics in this chapter provide you with succinct and interesting prompts to challenge your students. Renaissance: Impact on English Literature. Renaissance is a French word which means rebirth, reawakening, or revival. In literature, the term Renaissance is used to denote the revival of ancient classical literature and culture and the reawakening of the human mind, after the long sleep of the medieval ages, to the glory, wonders, and beauty of man's earthly life and nature.
Renaissance is the term used to describe an entire period of rebirth: a "rebirth" of ancient tradition that took as its foundation the art of classical antiquity but transformed that tradition through the absorption of recent developments in the art of northern Europe and the application of contemporary scientific knowledge. Renaissance art: It is agreed that the Renaissance was a period of great art and architectural feats and ingenuity, during which artists looked back to the classical art of Greece and Rome from which to draw inspiration. Writing a compare-and-contrast essay can be a challenge, especially if you decided to delay working on it until the very end. Further complicating things is having to write on a vast subject such as medieval literature vs. Renaissance literature, as both have a rich history.
Love and Marriage in Renaissance Literature: In medieval Europe, the troubadours (poets of the southern part of France), like Guilhem IX or Cercamon, first began to write poems. The Renaissance was a revival, or a rebirth, of cultural awareness and learning in art, law, language, literature, philosophy, science, and mathematics. This period took place between the fourteenth and sixteenth centuries. Harlem Renaissance essay: The Harlem Renaissance is a revival movement of African-American culture in the interwar period. Its birthplace and home is the neighborhood of Harlem, in New York. This excitement extends over many areas of creativity, such as photography, music, and painting, but it is mainly the literature that is considered the most remarkable feature of the movement. The Harlem Renaissance and Its Effect on African American Literature. Thesis: The literary movement during the Harlem Renaissance was a raging fire that brought about new life for the African American writer. Its flame still burns today through the writings of contemporary African American writers.
The Renaissance: The Renaissance, meaning "rebirth" in French, was a change in the way people lived and thought. In the Middle Ages in Europe, especially Italy, people were very religious and almost everyone was devotedly Catholic. Introduction: The drama of Renaissance England was truly remarkable, and not just because William Shakespeare wrote during that era. Among his colleagues as dramatists were Christopher Marlowe, Thomas Kyd, Ben Jonson, Thomas Middleton, and John Webster, all of whom wrote plays of lasting greatness. Renaissance essays: The rebirth of literature and art, and the new development of the family structure, formed a transition from the disastrous Middle Ages to the age of rebirth, the Renaissance; these areas of advancement became the major differences between the Middle Ages and the Renaissance. Renaissance essay paper topics: The main features of a Renaissance essay should speak about classical antiquity, belief in the individual dignity of the human being, and radical changes in general outlooks on philosophy, religion, and science.
The Renaissance Era of Literature: The Renaissance era embraces the period between the 14th and 16th centuries. The term Renaissance itself means "rebirth," which in some respects refers to the rebirth from the obscurity of the Middle Ages, and originates from a French word. Renaissance humanism: The interest in Greek and Roman literature was spreading across Europe in the 12th century and led to the development of the humanist movement in the 14th century. Humanism of the Renaissance period was the predominant movement that revolutionized philosophical, intellectual, and literary customs. It first originated in Italy during the fourteenth century and eventually spread to other major areas in Europe, such as Greece. The Harlem Renaissance showed the unique culture of African Americans and redefined African American expression. It began in the early 1920s, when African American literature, art, music, and dance began to flourish in Harlem, a neighborhood in New York City.
Part I: Rudiments, Units 1-7
Study Guide 1: Pitch Names, Intervals, Scales, Key Signatures, Triads; Rhythm and Meter
Musical pitches are named using the first seven letters of the alphabet. On the white notes of the piano keyboard, the note C is always located immediately to the left of a “two-group” of black keys. The note F is located just to the left of a “three-group” of black keys.
It is often helpful to indicate the specific octave placement when naming pitches. A system widely used today assigns a number to the notes from C up to B within each octave. Middle C equals C4. The lowest C on the piano is C1.
A more traditional method labels each octave with names (such as contra and great) in combination with letters and numbers. The great staff and the full piano keyboard are shown below, with numerical and traditional names across the full range of the keyboard.
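For readers who like to check the numerical octave system programmatically, here is a minimal Python sketch. The MIDI convention (middle C = note number 60) and the function name are assumptions of this example, not part of the study guide.

```python
# Map a MIDI note number to its scientific pitch name (middle C = 60 = "C4").
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_name(midi_number):
    """Return the scientific pitch name, e.g. 'C4' for MIDI note 60."""
    octave = midi_number // 12 - 1   # MIDI note 0 corresponds to C-1
    return NOTE_NAMES[midi_number % 12] + str(octave)

# pitch_name(60) -> 'C4' (middle C); pitch_name(24) -> 'C1' (lowest C on the piano)
```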
See the video: Introduction to Intervals
An interval is the distance between two pitches. The distance from one note on the keyboard to the next closest note up or down is a half step. Two half steps combine to form a whole step.
Intervals are classified as major (M), minor (m), perfect (P), diminished (d), and augmented (A).
Seconds, thirds, sixths, and sevenths may be major, minor, diminished, or augmented.
The unison, fourth, fifth, and octave may be only perfect, augmented, or diminished.
A minor second equals one half step. A major second equals one whole step.
A minor third spans a whole step and a half step. The major third spans two whole steps.
A perfect fourth spans two whole steps and a half step. The augmented fourth spans three whole steps.
Smaller intervals can combine to form larger intervals, as follows:
A diminished fifth spans a perfect fourth and a minor second. A perfect fifth spans a major third and a minor third.
A minor sixth spans a perfect fifth and a minor second. A major sixth spans a perfect fifth and a major second.
A minor seventh spans a perfect fifth and a minor third. A major seventh spans a perfect fifth and a major third.
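These composition rules can be verified by counting half steps. Here is a short Python sketch; the abbreviations (m2, M3, P5, and so on) are standard, but the table itself is this example's own construction.

```python
# Interval sizes in half steps (semitones), using standard abbreviations.
SEMITONES = {"m2": 1, "M2": 2, "m3": 3, "M3": 4, "P4": 5, "A4": 6, "d5": 6,
             "P5": 7, "m6": 8, "M6": 9, "m7": 10, "M7": 11, "P8": 12}

# A perfect fifth spans a major third plus a minor third:
assert SEMITONES["P5"] == SEMITONES["M3"] + SEMITONES["m3"]
# A major seventh spans a perfect fifth plus a major third:
assert SEMITONES["M7"] == SEMITONES["P5"] + SEMITONES["M3"]
```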
Inversion of Intervals
An interval is inverted by transferring its lower note into the higher octave or by transferring its higher note into the lower octave. Major intervals invert to minor intervals; minor intervals invert to major intervals.
Seconds and sevenths are classified as dissonances.
Thirds and sixths are classified as imperfect consonances.
Perfect intervals invert to perfect intervals. The P4, P5, P1, and P8 are perfect consonances.
Augmented intervals invert to diminished intervals; diminished intervals invert to augmented intervals. All augmented and diminished intervals are dissonances.
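The inversion rules are mechanical enough to state in code. A sketch (the function and table names are the example's own):

```python
# Invert an interval: the generic number inverts to 9 minus the number,
# and the quality flips (M <-> m, A <-> d, P stays P).
QUALITY_INVERSION = {"M": "m", "m": "M", "P": "P", "A": "d", "d": "A"}

def invert(quality, number):
    return QUALITY_INVERSION[quality], 9 - number

# A major third inverts to a minor sixth:
assert invert("M", 3) == ("m", 6)
```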
The Major Scale
The major scale consists of a fixed pattern of whole steps and half steps:
You can build a major scale starting on any note if you follow the pattern of whole steps (major seconds) and half steps (minor seconds).
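As an illustration, the whole/half-step pattern can be applied programmatically. This sketch works on pitch classes and spells everything with sharps, so it will not reproduce flat-key spellings such as B-flat major.

```python
# Build a major scale by walking the W W H W W W H pattern in semitones.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole step = 2 semitones, half step = 1

def major_scale(tonic):
    index = NOTE_NAMES.index(tonic)
    scale = [tonic]
    for step in MAJOR_STEPS:
        index = (index + step) % 12
        scale.append(NOTE_NAMES[index])
    return scale

# major_scale("G") -> ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
```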
The three forms of the minor scale allow for the variable sixth and seventh scale degrees. Those scale degrees are not altered in the natural minor scale:
The harmonic minor scale employs the raised seventh scale degree to form the leading tone. An augmented second results between the sixth and seventh scale degree:
The melodic minor scale raises both the sixth and seventh scale degrees ascending. The descending form is equivalent to the natural minor.
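The three minor forms differ only in their step patterns, which a short sketch can make explicit (ascending forms only; pitch classes are spelled with sharps):

```python
# The three minor-scale forms as ascending semitone patterns.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MINOR_PATTERNS = {
    "natural":  [2, 1, 2, 2, 1, 2, 2],
    "harmonic": [2, 1, 2, 2, 1, 3, 1],  # the 3 is the augmented second
    "melodic":  [2, 1, 2, 2, 2, 2, 1],  # raised 6th and 7th, ascending
}

def minor_scale(tonic, form="natural"):
    index = NOTE_NAMES.index(tonic)
    scale = [tonic]
    for step in MINOR_PATTERNS[form]:
        index = (index + step) % 12
        scale.append(NOTE_NAMES[index])
    return scale

# minor_scale("A", "harmonic") -> ['A', 'B', 'C', 'D', 'E', 'F', 'G#', 'A']
```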
For each major key, there is a relative minor key that shares the same key signature. The tonic note of the relative minor key is a minor third below the major key tonic note.
Relative keys share the same key signature:
Parallel keys share the same tonic:
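The relative-key relationship reduces to counting down a minor third (three half steps) from the major tonic, as this sketch shows (sharps-only spelling, so a flat-key tonic such as E-flat would come out as D#):

```python
# The relative minor's tonic lies a minor third (3 semitones) below the major tonic.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def relative_minor(major_tonic):
    return NOTE_NAMES[(NOTE_NAMES.index(major_tonic) - 3) % 12]

# relative_minor("C") -> 'A': C major and A minor share a key signature.
```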
A major triad has a major third between the root and third, a minor third between the third and the fifth, and a perfect fifth between the root and the fifth.
A minor triad has a minor third between the root and third, a major third between the third and the fifth, and a perfect fifth between the root and the fifth.
An augmented triad has a major third between the root and third, a major third between the third and the fifth, and an augmented fifth between the root and the fifth.
A diminished triad has a minor third between the root and third, a minor third between the third and the fifth, and a diminished fifth between the root and the fifth.
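The four triad qualities are fully determined by the sizes of the two stacked thirds, which this sketch encodes in semitones (pitch numbers rather than note names; the function name is the example's own):

```python
# Classify a triad by the semitone sizes of its two stacked thirds.
def triad_quality(root, third, fifth):
    lower, upper = third - root, fifth - third
    return {(4, 3): "major", (3, 4): "minor",
            (4, 4): "augmented", (3, 3): "diminished"}[(lower, upper)]

# C-E-G as semitones (0, 4, 7) is a major triad:
assert triad_quality(0, 4, 7) == "major"
```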
Putting it All Together: Intervals, Triads, and Scales
It is important to understand how notes in a scale may combine to form intervals and triads. Here below is a major scale ascending through intervals of a second:
A major scale ascending in thirds:
Here are the intervals, expanding from the perfect unison to the perfect octave, ascending from the tonic note in the major scale:
The intervals descending from the tonic note in the major scale:
The triads in the major scale are shown below. Note that the triad built on the 7th scale degree, the leading tone, is a diminished triad.
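The triad qualities on each scale degree follow directly from the scale pattern itself. Here is a sketch that derives them, working in semitones over one octave of a major scale:

```python
# Quality of the triad built on each degree of the major scale.
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # the seven scale degrees in semitones

def degree_triads():
    qualities = []
    for degree in range(7):
        root = MAJOR[degree]
        # Take the 3rd and 5th above, wrapping into the next octave as needed.
        third = MAJOR[(degree + 2) % 7] + (12 if degree + 2 >= 7 else 0)
        fifth = MAJOR[(degree + 4) % 7] + (12 if degree + 4 >= 7 else 0)
        lower, upper = third - root, fifth - third
        qualities.append({(4, 3): "M", (3, 4): "m",
                          (3, 3): "d", (4, 4): "A"}[(lower, upper)])
    return qualities

# degree_triads() -> ['M', 'm', 'm', 'M', 'M', 'm', 'd']
# (only the leading-tone triad is diminished)
```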
Rhythm and Meter
Meter is the organization of musical time into recurring patterns of accent. Each complete unit constitutes a measure. The first beat of every measure is called the downbeat. The beat unit divides to form smaller note values. Simple division of the beat is binary, dividing the beat value into two equal parts. As shown below, 2/4 meter provides an example. Every beat can divide to form a pair of eighth notes. The beat unit is the quarter note. The background unit is the largest possible division of the beat. In this case, it is the eighth note.
In simple meters the upper number of the time signature indicates the number of beats, and the lower number indicates the note value of the beat. The meter signature of 2/4, with two quarter-note beats per measure, is a simple duple meter. Meters having two beats per measure are duple; three beats per measure, triple; four beats per measure, quadruple.
“Cut time” or 2/2 meter is another example of simple duple meter. There are two half-note beats per measure; the background unit is the quarter note.
Compound meters have triple division, with three background units per beat. Consequently, the beat unit in compound meter is always a dotted note. In compound meters, the upper number indicates the number of background units, and the lower number, the value of the background unit. To find the number of beats per measure, divide the upper number in the meter signature by three. In this example of compound duple meter, the dotted quarter receives the beat; the background unit is the eighth note:
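The rules for reading a meter signature can be summarized in a small sketch. It covers only the common signatures discussed here; treating an upper number of 6, 9, or 12 as compound is a simplification of the general rule.

```python
# Classify a meter signature and count its beats per measure.
def meter_info(upper, lower):
    # lower gives the note value of the background unit (compound)
    # or of the beat itself (simple); it is not needed for the count.
    if upper in (6, 9, 12):   # compound: upper number counts background units
        kind, beats = "compound", upper // 3
    else:                     # simple: upper number counts beats directly
        kind, beats = "simple", upper
    name = {2: "duple", 3: "triple", 4: "quadruple"}[beats]
    return kind, name, beats

# meter_info(6, 8) -> ('compound', 'duple', 2)
```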
This minuet provides an example of simple triple meter, with three beats per measure and duple division of the beat.
In this example of compound triple meter, the dotted quarter receives the beat; the background unit is the eighth note:
“Common time” or 4/4 meter is a simple quadruple meter. There are four quarter-note beats per measure, with duple division:
In this example of compound quadruple meter, the dotted eighth receives the beat. Triple division of the beat is characteristic of the gigue: |
Diarrhea (spelled diarrhoea in anglophone countries outside the United States) is a condition in which the sufferer has frequent and watery or loose bowel movements.
This condition can be a symptom of injury, disease or foodborne illness and is usually accompanied by abdominal pain, and often nausea and vomiting. There are other conditions which involve some but not all of the symptoms of diarrhea, and so the formal medical definition of diarrhea involves defecation of more than 200 grams per day (though formal weighing of stools to determine a diagnosis is never actually carried out).
It occurs when insufficient fluid is absorbed by the colon. As part of the digestion process, or due to fluid intake, food is mixed with large amounts of water. Thus, digested food is essentially liquid prior to reaching the colon. The colon absorbs water, leaving the remaining material as a semisolid stool. If the colon is damaged or inflamed, however, absorption is inhibited, and watery stools result.
Diarrhea is most commonly caused by myriad viral infections but is also often the result of bacterial toxins and sometimes even infection. In sanitary living conditions and with ample food and water available, an otherwise healthy patient typically recovers from the common viral infections in a few days and at most a week. However, for ill or malnourished individuals diarrhea can lead to severe dehydration and can become life-threatening without treatment.
It can also be a symptom of more serious diseases, such as dysentery, cholera, or botulism and can also be indicative of a chronic syndrome such as Crohn's disease. It is also an effect of severe radiation sickness.
It can also be caused by excessive alcohol consumption, especially in someone who doesn't eat enough food.
Symptomatic treatment for diarrhea involves the patient consuming adequate amounts of water to replace that lost, preferably mixed with electrolytes to provide essential salts and some amount of nutrients. For many people, further treatment and formal medical advice is unnecessary. The following types of diarrhea generally indicate medical supervision is desirable:
* Diarrhea in infants.
* Moderate or severe diarrhea in young children.
* Diarrhea associated with blood.
* Diarrhea that continues for more than 2 weeks.
* Diarrhea that is associated with more general illness such as non-cramping abdominal pain, fever, weight loss, etc.
* Diarrhea in travelers (more likely to have exotic infections such as parasites)
* Diarrhea in food handlers (potential to infect others)
* Diarrhea in institutions (Hospitals, child care, mental health institutes, geriatric and convalescent homes etc).
Since most people will ignore very minor diarrhea, a patient who actually presents to a doctor is likely to have diarrhea that is more severe than usual.
This may be defined as diarrhea that lasts less than 2 weeks, and is also called gastroenteritis.
This can nearly always be presumed to be infective, although only in a minority of cases is this formally proven.
It is often reasonable to reassure a patient, ensure adequate fluid intake, and wait and see. In more severe cases, or where it is important to find the cause of the illness, stool cultures are instituted.
The most common organisms found are Campylobacter (an organism of animal or chicken origin), Salmonella (also often of animal origin), Cryptosporidium (animal origin), and Giardia lamblia (which lives in drinking water). Shigella (dysentery) is less common, and usually human in origin. Cholera is rare in Western countries. It is more common in travelers and is usually related to contaminated water (its ultimate source is probably sea water). Escherichia coli is probably a very common cause of diarrhea, especially in travelers, but it can be difficult to detect using current technology. The types of E. coli vary from area to area and country to country.
Viruses, particularly rotavirus, are common in children. (Viral diarrhea is probably over-diagnosed by non-doctors). The Norwalk virus is rare.
Toxins and food poisoning can cause diarrhea. These include staphylococcal toxin (often from milk products due to an infected wound in workers) and Bacillus cereus (e.g. rice in Chinese takeaways). Often "food poisoning" is really salmonella infection.
Parasites and worms sometimes cause diarrhea but often present with weight loss, irritability, rashes, or anal itching. The commonest is pinworm (mostly of nuisance value rather than a severe medical illness). Other worms, such as hookworm, Ascaris, and tapeworm, are more medically significant and may cause weight loss, anemia, general unwellness, and allergic problems. Amoebic dysentery due to Entamoeba histolytica is an important cause of bloody diarrhea in travelers and also sometimes in Western countries, and requires appropriate and complete medical treatment.
It is not uncommon for diarrhea to persist. Diarrhea due to some organisms may persist for years without significant long term illness. More commonly a diarrhea will slowly ameliorate but the patient becomes a carrier (harbors the infection without illness). This is often an indication for treatment, especially in food workers or institution workers.
Parasites (worms and amoeba) should always be treated. Salmonella is the most common persistent bacterial organism in humans.
Malabsorption
These tend to be more severe medical illnesses. Malabsorption is the inability to absorb food, occurring mostly in the small bowel but also arising from pancreatic disease.
Causes include:
* Celiac disease (intolerance to gluten, a wheat protein)
* Lactose intolerance (intolerance to milk sugar, common in non-Europeans)
* Fructose malabsorption
* Pernicious anemia (impaired bowel function due to the inability to absorb vitamin B12)
* Loss of pancreatic secretions (may be due to cystic fibrosis or pancreatitis)
* Short bowel syndrome (surgically removed bowel)
* Radiation fibrosis (usually following cancer treatment)
* Other drugs, such as chemotherapy
* Diarrhea-predominant irritable bowel syndrome
Inflammatory bowel disease
These are of unknown origin but are likely to be abnormal immune responses to infection. There is some overlap, but the two types are ulcerative colitis and Crohn's disease:
* Ulcerative colitis is marked by chronic bloody diarrhea; inflammation mostly affects the distal colon, near the rectum.
* Crohn's disease typically affects fairly well demarcated segments of bowel in the colon and often affects the end of the small bowel.
Other important causes
* Ischaemic bowel disease. This usually affects older people and can be due to blocked arteries.
* Bowel cancer: Some (but NOT all) bowel cancers may have associated diarrhea. (Cancer of the large colon is most common)
* Hormone-secreting tumors: some hormones (e.g. serotonin) can cause diarrhea if excreted to excess (usually from a tumor).
Treatment of diarrhea
1. Do nothing except ensure adequate fluid intake. This is the most appropriate treatment in most cases of minor diarrhea.
2. Try eating more but smaller portions. Eat regularly. Don't eat or drink too quickly.
3. Intravenous fluids or a "drip": Sometimes, especially in children, dehydration can be life-threatening and intravenous fluid may be required.
4. Oral rehydration therapy: Taking a sugar/salt solution, which can be absorbed by the body.
5. Anti-diarrhea drugs: use cautiously as they are said to prolong the illness and may increase the risk of a carrier state. They are useful in some cases, however, when it is important that you don't have diarrhea (e.g. when traveling on a bus). Loperamide is the most commonly used antidiarrheal.
6. Antibiotics: antibiotics may be required if a bacterial cause is suspected and the patient is medically ill. They are sometimes also indicated for workers with carrier states in order to clear up an infection so that the person can resume work. Parasite-related diarrhea (e.g. giardiasis) requires appropriate antibiotics. Antibiotics are not routinely used, as the cause is rarely bacterial and antibiotics may further upset intestinal flora and worsen rather than improve the diarrhea. Clostridium difficile-associated diarrhea and pseudomembranous colitis are often caused by antibiotic use.
7. Dietary manipulation: especially avoid wheat products with celiac disease.
8. Hygiene and isolation: Hygiene is important in limiting spread of the disease.
9. It is claimed that some fruit, such as bananas, mangoes, papaya and pineapple may have positive effects on this condition. Bananas have the merits of being easily obtainable, and they are unlikely to have any other significant unwanted side effects. Bananas are thought to be "binding," as is mucilage, which you can obtain in capsule form. Mucilage can also be used as cereal for babies, as it is easily digested. The high acid content of pineapple may make the tasty tropical treat a bad choice for people suffering from chronic diarrhea.
The information above is not intended for and should not be used as a substitute for diagnosis and/or treatment by a licensed, qualified health-care professional. This article is licensed under the GNU Free Documentation License. It incorporates material originating from the Wikipedia article
Copyright © 2012 Anxiety Zone - Anxiety Disorders Forum. All Rights Reserved. |
That seems to be the case in Japan, where researchers at RIKEN Nishina Center for Accelerator-Based Science say they’ve created element 113, a missing piece on the periodic table.
Element 113, an elusive theoretical element with 113 protons, has been just out of reach for scientists since the first man-made element was created in 1940. Scientists at RIKEN say they collided zinc nuclei (30 protons) with a sheet of bismuth (83 protons). Element 113 didn't last very long; as such a large atom it is unstable, and the scientists say it quickly began shedding protons through a chain of decays. Nevertheless, they say their data will show that element 113 did exist, if only for a moment.
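The proton bookkeeping behind the experiment can be sketched in a few lines. This is purely illustrative (not RIKEN's analysis code), and it assumes the nucleus sheds protons two at a time via alpha decay, each alpha particle carrying away two protons and two neutrons:

```python
ZINC_Z = 30      # protons in a zinc nucleus
BISMUTH_Z = 83   # protons in a bismuth nucleus

# Complete fusion of the two nuclei yields an element whose atomic
# number is simply the sum of the protons.
fused_z = ZINC_Z + BISMUTH_Z
assert fused_z == 113

def alpha_chain(z, steps):
    """Atomic numbers visited after `steps` alpha decays.

    Each alpha decay removes 2 protons (and 2 neutrons) from the nucleus.
    """
    return [z - 2 * n for n in range(steps + 1)]

print(alpha_chain(fused_z, 4))  # [113, 111, 109, 107, 105]
```

Each step down the chain lands on another superheavy element, which is how such a short-lived atom can be identified: from the signature of its decay products.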
If the data proves true, Japan will get the naming rights to the element. So far, only the United States, Russia and Germany have sired and christened new little (or big) atoms.
But if these atoms are so unstable and scientists are seemingly making them only to give them names like Americium, what’s the point?
It’s simple. By playing around with atoms, just like a kid playing with an erector set, scientists will learn new things about how they work, just as a kid will learn about spatial reasoning, mechanics, and so on … a good case for squeezing in that play time.
Astronomical Terminology and Perspective
Students will be able to answer conceptual questions using correct astrophysical terminology about the following core astronomy concepts:
* the motion of the Earth and the objects seen in the visible sky
* how gravity organizes the Universe at all scales, from the solar system to superclusters of galaxies.
* the significant and unique characteristics of each planet and other components of the solar system.
* the essential physical concepts that govern the life cycle of stars from creation to death, how the Sun compares to other stars, and the changes the Sun and stars will go through during their life cycles.
* the structure and classification of galaxies and clusters of galaxies.
* the central ideas of, and evidence for, current big bang cosmologies.
Students will be able to identify common naked-eye constellations, bright stars and deep sky objects in the sky.
Students will gain practical experience by performing their own astronomical observations, interpreting their observations and communicating their results.
Appreciation of our Creator's Universe
Students will appreciate the grandeur of our Savior's universe. |
Sunset on Mars
On May 19, 2005, NASA's Mars Exploration Rover Spirit captured this stunning view as the Sun sank below the rim of Gusev crater on Mars. This Panoramic Camera mosaic was taken around 6:07 in the evening of the rover's 489th Martian day, or sol.
Sunset and twilight images are occasionally acquired by the science team to determine how high into the atmosphere the Martian dust extends, and to look for dust or ice clouds. Other images have shown that the twilight glow remains visible, but increasingly fainter, for up to two hours before sunrise or after sunset. The long Martian twilight (compared to Earth's) is caused by sunlight scattered around to the night side of the planet by abundant high altitude dust. Similar long twilights or extra-colorful sunrises and sunsets sometimes occur on Earth when tiny dust grains that are erupted from powerful volcanoes scatter light high in the atmosphere. |