Cosmic Colors will take you on a wondrous journey across the entire electromagnetic spectrum. Discover the many reasons for color, like why the sky is blue and why Mars is red. Take a tour within a plant leaf and journey inside the human eye. Investigate x-rays by voyaging to a monstrous black hole and then back to your doctor’s office. You will even see the actual color of a dinosaur, based on recent evidence. Get ready for an amazing adventure under a rainbow of cosmic light!
We start in the daytime sky, fast-forward through sunset to the nighttime sky, and give a brief orientation to objects in the current night sky. We point out that everything we are seeing is because of the light that we are receiving from these distant objects. And the same is true for the things we see in our everyday life. This provides a good introduction to the 32-minute pre-recorded fulldome show, in which we explore different types of light along the electromagnetic spectrum.
The units of study in this guide include hands-on science activities about the electromagnetic spectrum.
Resource Type: Classroom Activity, Lesson Plan, Educator Guide, Grade Level: 5-8
- Introductory Pages
- A Brief History of the United States Astronomy Spacecraft and Crewed Space Flights
- Unit 1: The Atmospheric Filter
- Unit 2: The Electromagnetic Spectrum
- Unit 3: Collecting Electromagnetic Radiation
- Unit 4: Down to Earth
- Unit 5: Space-Based Astronomy on the Internet
- Additional Resources
Electromagnetic Math is a huge collection of activities designed to supplement teaching about electromagnetism. Students explore the simple mathematics behind light and other forms of electromagnetic energy including the properties of waves, wavelength, frequency, the Doppler shift, and the various ways that astronomers image the universe across the electromagnetic spectrum to learn more about the properties of matter and its movement.
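The mathematics involved is compact enough to sketch. For example, the wave relation c = λf and the low-speed Doppler approximation, which the guide's activities build on, can be expressed in a few lines of Python. This is a minimal illustration; the hydrogen-alpha wavelength and recession speed below are example values, not figures from the guide:

```python
# Minimal sketch of the wave relationships explored in Electromagnetic Math.
# The numeric example values are illustrative assumptions.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def frequency(wavelength_m: float) -> float:
    """Frequency (Hz) of light with the given wavelength (m), from c = lambda * f."""
    return C / wavelength_m

def doppler_shifted_wavelength(rest_wavelength_m: float, radial_velocity_ms: float) -> float:
    """Observed wavelength for a source receding (positive v) or approaching
    (negative v), using the non-relativistic approximation
    lambda_obs = lambda_rest * (1 + v/c)."""
    return rest_wavelength_m * (1.0 + radial_velocity_ms / C)

# Red light at 656.3 nm (the hydrogen-alpha line)...
h_alpha = 656.3e-9
print(f"frequency: {frequency(h_alpha):.3e} Hz")
# ...from a galaxy receding at 3,000 km/s appears redshifted:
print(f"observed wavelength: {doppler_shifted_wavelength(h_alpha, 3.0e6):.4e} m")
```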
Resource Type: Classroom Activity, Lesson Plan, Educator Guide, Grade Level: 5-8, 9-12
This afterschool curriculum includes six lessons plus supplementary materials (e.g., videos, PowerPoint presentations, and images) that explore how light from the electromagnetic spectrum is used as a tool for learning about the Sun. The curriculum is designed to be flexible to meet the needs of afterschool programs and includes recommendations for partial implementation based on time constraints. It was specifically designed to engage girls in science.
Resource Type: Classroom Activity, Lesson Plan, Educator Guide, Grade Level: 5-8
- Lesson #1: Solar Cookie
- Lesson #2: Exploring the EM Spectrum
- Lesson #3: Rainbows of Light: the Visible Light Spectrum
- Lesson #4: Invisible Light: Ultraviolet
- Lesson #5: Detecting Invisible Light
- Lesson #6: Our 3D Sun
Become a crime scene investigator! Learners model how Dawn Mission scientists, engineers, and technologists use instrumentation to detect distant worlds.
Resource Type: Classroom Activity, Lesson Plan, Educator Guide, Grade Level: 8-adult
“Project Spectra!” is a science and engineering program for 6th – 12th grade students, focusing on how light is used to explore the Solar System. “Project Spectra!” emphasizes hands-on activities, like building a spectrograph, as well as the use of real data to solve scientific questions. (NASA approved and funded)
Resource Type: Classroom Activity, Lesson Plan, Educator Guide, Grade Level: 6-12
- Patterns and Fingerprints
- Graphing the Rainbow
- Using Spectral Data to Explore Saturn & Titan
- Goldilocks and the Three Planets
- Building a Fancy Spectrograph
- Using a Fancy Spectrograph
- A Spectral Mystery
- Designing an Open Spectrograph
- Designing a Spectroscopy Mission
- Marvelous Martian Mineralogy
- Star Light, Star Bright? Finding Remote Atmospheres
- Solving a Mixed Up Problem
- Enceladus, I Barely Knew You
- Planet Designer: What’s Trending Hot?
- Planet Designer: Kelvin Climb
- Planet Designer: Martian Makeover
- Planet Designer: Retro Planet Red
All of the major groups of animal parasites are found in fish, and apparently healthy wild fish often carry heavy parasite burdens. Parasites with direct life cycles can be important pathogens of cultured fish; parasites with indirect life cycles frequently use fish as intermediate hosts. Knowledge of specific fish hosts greatly facilitates identification of parasites with marked host and tissue specificity, while others are recognized because of their common occurrence and lack of host specificity. Examination of fresh smears that contain living parasites is often diagnostic.
The most common parasites of fish are protozoa (see Table: Protozoan Parasites of Fish). These include species found on external surfaces and species found in specific organs. Most protozoa have direct life cycles, but the myxosporidia require an invertebrate intermediate host.
Protozoa Infecting Gills and Skin
Ciliated protozoa are among the most common external parasites of fish. Most ciliates have a simple life cycle and divide by binary fission. Ciliates can be motile, attached, or found within the epithelium. The most well-known organism in the latter group is Ichthyophthirius multifiliis, which has a more complex life cycle than the other ciliates.
The infection caused by I multifiliis is referred to as “ich” or “white spot disease.” The parasite is an obligate pathogen that cannot survive without the presence of living fish. All fish are susceptible, and a similar-appearing parasite, Cryptocaryon irritans, occurs in marine species. The parasite is readily transmitted horizontally via direct exposure to infected fish or via fomites (nets, etc). Fish that survive an outbreak may be refractory in future outbreaks, but may also serve as a source of infection to previously unexposed individuals. The parasite invades epithelial tissue of gills, skin, or fins, leaving a small wound and visible white spot or nodule where each parasite encysts. The organism causes substantial damage because of its unique life cycle (see below), which allows a rapid intensification of infection. Infected fish are extremely lethargic and covered with visible white dots. Mortality can be rapid and catastrophic. Infections that are confined to gill tissue may not be recognized by nonprofessionals (because white spots are not grossly visible), but are easily diagnosed using gill biopsy techniques. The organism is identified using a light microscope at magnification of 40× or 100×. It is large (0.5–1 mm), round, covered with cilia, and has a characteristic horseshoe-shaped macronucleus. Its characteristic movement varies from constant rotation to ameboid motion.
Ich infections require immediate and thorough medical treatment. Formalin and copper are often the drugs of choice. Over-the-counter medications for pet fish often contain formalin and malachite green and are effective, but due to regulatory concerns regarding the use of malachite green, should not be dispensed by the veterinarian. Multiple chemical treatments (with intervals determined by water temperature) are required for successful treatment of I multifiliis. At warm temperatures typical of home aquaria (eg, >26°C), infected fish should be treated every 2–3 days. Constant chemical exposure for at least 3 wk is generally recommended to control Cryptocaryon in marine systems; lowering salinity to 16–18 ppt is often helpful.
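Because the retreatment interval depends on water temperature, a simple planner can help keep a course of treatment on schedule. A minimal sketch follows; only the warm-water figure (every 2–3 days at >26°C) comes from the text above, and the cooler-water intervals are illustrative assumptions. Actual protocols should be set with a veterinarian:

```python
# Sketch of a temperature-based treatment planner for I. multifiliis.
# Only the warm-water interval (>26 C: every 2-3 days) is from the text;
# the cooler-water intervals below are illustrative assumptions.

def treatment_interval_days(water_temp_c: float) -> int:
    """Suggested days between chemical treatments at a given water temperature."""
    if water_temp_c > 26:      # warm home-aquarium temperatures (from the text)
        return 2
    elif water_temp_c > 20:    # assumed intermediate interval
        return 4
    else:                      # assumed cool-water interval; the life cycle slows
        return 7

def schedule(start_day: int, water_temp_c: float, n_treatments: int) -> list[int]:
    """Days on which to treat, spaced by the temperature-dependent interval."""
    step = treatment_interval_days(water_temp_c)
    return [start_day + i * step for i in range(n_treatments)]

print(schedule(0, 28.0, 5))  # warm aquarium: [0, 2, 4, 6, 8]
```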
I multifiliis has a direct life cycle, but has massive reproductive potential from each adult parasite. Adults leave the fish host and encyst in the environment, releasing hundreds of immature parasites (tomites) that must find a host within a specific time frame (days for warm water fish, weeks for cold water fish), determined by temperature. For this reason, leaving a system fallow is one means of preventing reinfection. While encysted, parasitic life stages are refractory to chemical treatment, but cysts can be removed by thorough cleaning and removal of debris from gravel substrates.
There are two important groups of ciliates that are motile and move on the surface of skin and gills of fish. They include Chilodonella spp (which has a marine counterpart, Brooklynella spp) and the trichodinids, which are found on both freshwater and marine fish. Fish with chilodonelliasis typically lose condition, and copious mucous secretions may be noticed in areas where infestation is most severe. If gills are heavily infested, the fish may show signs of respiratory distress, including rapid breathing and coughing. The gills may be visibly swollen and mucoid. Infected fish may be irritated as evidenced by flashing (scratching) and decreased appetite. Chilodonella can be easily identified from fresh biopsies of infected tissues. They are 0.5–0.7 mm, are somewhat heart-shaped with parallel bands of cilia, and move in a characteristic slow spiral. See Protozoan Parasites of Fish for treatment.
Several genera of peritrichous ciliates have been grouped together and are collectively referred to as the trichodinids. These include Trichodina, Trichodinella, Tripartiella, and Vauchomia spp. Clinical signs associated with trichodinid infestation are similar to those of chilodonelliasis, although secretion of mucus is not usually as noticeable. Trichodinids are easily identified from biopsies of infected gill or skin tissue. They are readily visible using a light microscope at 40–100×. Trichodinids move along the surface of infested tissue and appear as little saucers or, from a lateral view, as little bubbles. The body of the organism may be cylindrical, hemispherical, or discoid. Trichodinids are characterized by an attaching disk with a corona of denticles on the adoral sucker surface. For treatment of trichodinids, see Protozoan Parasites of Fish. Infestations of Trichodina often indicate poor sanitation and/or overcrowding, so chemical treatment alone may not be adequate for complete control.
Tetrahymena corlissi, another ciliate, may be motile and surface-dwelling but is also occasionally found within tissue, including skeletal muscle and ocular fluids. Similar protozoa, Uronema spp, are found on marine fish. Tetrahymena spp are pear-shaped and 10–20 μm long, with longitudinal rows of cilia and inconspicuous cytostomes. External infestations of Tetrahymena spp are not uncommon on moribund fish removed from the bottom of a tank or aquarium and are often associated with an environment rich in organic material. As long as Tetrahymena spp are restricted to the external surface of the fish, they are easily eliminated with chemical treatment and sanitation. When they become established internally, they are not treatable and can cause significant mortality. Fish with intraocular infections of Tetrahymena spp develop extreme exophthalmos. The parasite is readily identified by examining ocular fluids with a light microscope.
Ambiphyra and Apiosoma are sessile ciliates that can be found on the skin, gills, and fins of fish. These seem to be more common in pond fish than tank-reared fish and have a predilection for organically rich environments. They are not generally found on marine fish. When examined from a lateral view, Ambiphyra is the shape of a tin can with a ciliated band around the middle and at the cytostome, which is distal to the attachment site. Apiosoma spp are vase-shaped. Neither Ambiphyra spp nor Apiosoma spp are particularly pathogenic if present in low numbers (no more than 1–2/low-power field); however, when present in high numbers, these parasites can cause significant epithelial damage, predisposing fish to opportunistic pathogens in the environment and compromising respiration and osmoregulation. Infested fish demonstrate flashing, decreased appetite, loss of condition, and hyperplasia of infested epithelial surfaces. Severe infestation of the gills is particularly damaging. The organisms can be controlled with a single treatment of formalin, copper sulfate, potassium permanganate, or a salt dip. Excessive crowding and poor sanitation are frequently associated with heavy infestations and should be corrected.
Heteropolaria spp are stalked, colonial ciliates that most frequently attach to bony surfaces of fish, particularly the tips of fin rays and opercula. They are most common in freshwater gamefish, particularly centrarchids (eg, largemouth bass, bluegill, and sunfish), and are frequently associated with development of “red sore disease.” In the earliest stages of infection, bony protuberances appear slightly raised and erythematous; as the colony grows, they appear cottony. Examination of material with a light microscope is required to differentiate Heteropolaria spp from fungal hyphae, and mixed infections are common. Further progression of the disease is typified by development of shallow ulcers on the lateral surface of the fish. A wet mount of fresh tissue from the margin of the lesion is required to differentiate between Heteropolaria spp, fungal hyphae, and columnaris bacteria. Coinfection with the bacterium Aeromonas hydrophila is typical of red sore disease. If deaths occur, a single treatment of potassium permanganate or copper sulfate should be administered. If systemic bacterial infection is a component of the epizootic, antibiotics should be provided in medicated feed, if affected fish will accept a pelleted diet.
Ichthyobodo spp (formerly Costia spp) are among the most common and smallest (∼15 × 5 μm) flagellated protozoan parasites of the skin and gills. They are flattened, pear-shaped organisms with 2 flagella of unequal lengths. These parasites can be found on freshwater or marine fish from a broad geographic range. Ichthyobodo move in a jerky, spiral pattern, and free-swimming organisms are fairly easy to identify in direct smear preparations. Once attached, the organism can be difficult to see, but movement typical of a flickering flame may be seen at 400× and is characteristic. Affected skin often has a steel-gray discoloration due to copious mucus production (“blue slime disease”), and gills may appear swollen. Behavioral signs of infestation include lethargy, anorexia, piping, and flashing. Ichthyobodo is readily controlled with salt, formalin, copper sulfate, or potassium permanganate baths. Because the parasite has a direct life cycle, a single treatment should be adequate. If reinfestation occurs, sanitation and quarantine practices should be evaluated.
One of the most serious health problems of captive marine fish is the parasitic dinoflagellate Amyloodinium spp. Its freshwater counterpart, Oodinium spp, is less common but can also result in high mortality. These parasites produce a disease that has been called “velvet,” “rust,” “gold-dust,” and “coral disease” because of the brownish gold color they impart to infected fish. The pathogenic stages of the organism are pigmented, photosynthetic, nonflagellated, nonmotile algae that attach to and invade the skin and gills during their parasitic existence. When mature, these parasites give rise to cysts that contain numerous flagellated, small, free-swimming stages that can initiate new infections. Control of Amyloodinium is challenging, and the prognosis is guarded. Copper sulfate is the only therapeutic option for food animals in the USA, and repeated treatments are necessary to break the life cycle. The disease is particularly problematic in clown fish. The treatment of choice in ornamental fish is chloroquine, delivered at 10 mg/L as an indefinite bath.
Internal Protozoan Parasites
Hexamita and Spironucleus spp are common, small (∼9 μm), bilaterally symmetric, flagellated (4 pairs) protozoa most frequently found in the intestinal tract of finfish. Among ornamental fish, the cichlids are highly susceptible. Pathogenicity of these organisms is variable and correlated with the number present. If there is a loss of condition, or more than 15 organisms are seen per low-power field on wet mounts of intestinal tissue or contents, then treatment is strongly recommended. The only treatment available for hexamitiasis is metronidazole (use only in ornamental species), which should be given orally but can be administered as a bath if fish are anorectic. Chronic infections have been seen in fish maintained in unsanitary or crowded conditions.
Cryptobia and Trypanosoma spp are slender, elongated (6–20 μm), actively motile, biflagellated protozoa that are easily detected in fresh blood and tissue smears of both marine and freshwater finfish. Hematogenous forms are generally described as Trypanosoma and have a well-developed undulating membrane. Trypanosomes may be transmitted by leeches and have been associated with anemia in blue-eyed plecostomus imported from South America. Cryptobia iubilans has been associated with granulomatous disease in African cichlids and discus. Clinical disease is manifest by severe weight loss and cachexia. Clinically affected fish should be culled. Presumptive diagnosis can be made from microscopic examination of fresh tissue. Typically, granulomas will be found in the stomach, which may be visibly thickened. Acid-fast material will not be found in granulomas caused by Cryptobia. Motile flagellates may be visible using magnification of 400× or greater.
Coccidiosis, while common in freshwater or marine finfish, is rarely diagnosed in live fish. Many species of finfish are affected. The life cycles of many fish coccidia are unknown, and some involve >1 host to complete their development. In addition to intestinal infection, the internal organs may be affected; sporulated Eimeria-like oocysts and sexual and asexual stages can be found in direct smears and histologic sections of the internal organs. Sulfamethazine, at 22–24 g/100 kg of fish wt/day in the feed for 50 days at 50°F (10°C), is used to treat food fish (21-day withdrawal time) in some countries. An FDA-approved form of this drug is not currently available in the USA. For aquarium fish, 10 ppm sulfamethazine in the aquarium water once a week for 2–3 wk has been reported to be preventive, but safety and efficacy data are sparse.
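Where the sulfamethazine regimen is legal to use, the feed dose reduces to simple arithmetic. A minimal sketch of the calculation, assuming a hypothetical 500-kg biomass:

```python
# Worked example of the sulfamethazine feed dose quoted above:
# 22-24 g per 100 kg of fish per day, in feed, for 50 days at 10 C.

def total_drug_g(biomass_kg: float, dose_g_per_100kg_day: float = 22.0,
                 days: int = 50) -> float:
    """Total grams of sulfamethazine needed for a full course of treatment."""
    daily_dose = dose_g_per_100kg_day * biomass_kg / 100.0
    return daily_dose * days

# For an assumed 500 kg of fish, dosed at the low end of the range:
print(total_drug_g(500.0))  # 22 g/100 kg -> 110 g/day -> 5500 g over 50 days
```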
Myxosporidians are common fish parasites. Myxosporidia have indirect life cycles and use other aquatic organisms (eg, annelids) as intermediate hosts. Hence, myxosporidian infections are more common in, and more pathogenic for, wild fish or fish reared intensively in outdoor fish ponds. The organisms tend to be host- and tissue-specific. Accordingly, expression of the disease is related to the specific pathogen and host. Myxosporidian-infected fish in captive display aquaria are not able to transmit the infection unless the necessary intermediate hosts are present.
There are two important myxosporidian infections of ornamental fish. “Renal dropsy of goldfish” is caused by the myxosporidian Sphaerospora auratus. The disease is characterized by renal degeneration and ascites and is usually diagnosed by identification of spores in histologic sections of the kidney. Affected fish present with extreme abdominal distention but may have few other clinical signs. Radiographs may reveal a mass in the area of the posterior kidney; definitive diagnosis is made at necropsy and confirmed histologically. No practical treatment is available. Henneguya, a myxosporidian occasionally found in ornamental fish, causes white nodular lesions that are usually found in gill tissue and may be grossly visible. Henneguya is easily identified by the forked-tail appendage of the spore, seen microscopically. If ponds are dried and limed heavily, infection can be eliminated, apparently by reduction of the intermediate hosts. Aquarium infection can be self-limiting in the absence of intermediate hosts. Although an occasional cyst may be considered an incidental finding, severe damage has been associated with diffuse distribution of interlamellar cysts.
Myxosporidian diseases significant in aquaculture include whirling disease and proliferative kidney disease of salmonids and proliferative gill disease (“hamburger gill disease”) of channel catfish. Whirling disease is caused by Myxobolus cerebralis. Fish are infected as fingerlings when the parasite infects cartilage in the vertebral column and skull, resulting in visible skeletal deformities. Affected fingerlings typically show rapid tail-chasing behavior (whirling) when startled. The disease is also sometimes called “blacktail” because the peduncle and tail may darken significantly. Recovered fish remain carriers. Adults do not show behavioral signs, but skeletal deformities associated with infection do not resolve. The disease can be prevented by purchasing uninfected breeding stock and maintaining them in an environment free of the intermediate hosts (tubifex worms). A presumptive diagnosis of whirling disease is made by detection of spores from skulls of infected fish. Diagnosis may be confirmed histologically or serologically. Whirling disease is of regulatory concern in some states.
Proliferative kidney disease (PKD) is one of the most economically important diseases affecting salmonid industries of North America and Europe. Rainbow trout are particularly susceptible. PKD is caused by Tetracapsuloides bryosalmonae, a myxosporidian with 4 distinct polar capsules. It occurs most commonly in the summer when water temperatures are >12°C, and the parasite primarily infects yearling and younger fish. Clinical signs include lethargy, darkening, and fluid accumulation indicated by exophthalmos, ascites, and lateral body swelling. Infected fish are frequently anemic, resulting in gill pallor. Grossly, the posterior kidney appears gray, mottled, and significantly enlarged. Presumptive diagnosis can be based on observation of suspect organisms, 10–20 μm in diameter, in Giemsa-stained wet mounts of kidney tissue. Histologic examination of infected tissue, stained with H&E, and immunohistochemistry are required for confirmation. There is no treatment, but fish that recover from the infection are resistant to subsequent outbreaks. Infected stocks in nonendemic areas should be depopulated, the premises sanitized, and disease-free stock obtained for replacement. Avoidance is the best preventive measure.
Proliferative gill disease (“hamburger gill disease”) is a myxosporidian infection of channel catfish caused by Aurantiactinomyxon ictaluri. The organism has a complex life cycle, with the oligochaete worm Dero digitata serving as the intermediate host. Channel catfish may be an aberrant host for A ictaluri, and the disease usually occurs in new ponds or previously infected ponds that have been drained and refilled. Although proliferative gill disease can cause catastrophic mortality approaching 100%, losses may be as low as 1%. Disease occurs at water temperatures of 16–26°C, and mortality is exacerbated by poor water quality, particularly low dissolved oxygen or high levels of un-ionized ammonia. Gills of affected fish are severely swollen and bloody. A presumptive diagnosis can be made from a wet mount of infected tissue, in which filaments appear swollen, clubbed, and broken. Cartilaginous necrosis is strongly supportive of a diagnosis of proliferative gill disease; however, histology is required for confirmation.
Microsporidians are tiny, intracellular, spore-forming organisms with single polar filaments that are common parasites of finfish. They are host- and tissue-specific and can also infect helminth parasites of fish. The spores are extremely resistant, and microsporidian diseases are considered non-treatable. Microsporidians have a direct life cycle; therefore, horizontal transmission in an aquarium is likely. Depopulation and disinfection are recommended for elimination of microsporidian infections.
Pleistophora ovariae infects ovarian tissue of golden shiners (bait fish), resulting in sterility. The organism has no intermediate host and is transmitted horizontally (through ingestion of infective spores) or vertically (through infected ova). Fertility declines as fish age, eventually resulting in sterility. Grossly, infected ovarian tissue appears marbled. The diagnosis is confirmed by examination of a wet mount of suspect tissue, revealing the presence of microsporidian spores.
Neon tetra disease is caused by Pleistophora hyphessobryconis, which infects the skeletal musculature of a number of species of aquarium fish, including tetras, angelfish, rasboras, and barbs. Infected fish may exhibit abnormal locomotion caused by muscle damage, and muscle tissue may appear marbled or necrotic at necropsy. The parasitic spores are readily visualized in wet mounts of infected tissue.
Helminths are common in both wild and cultured fish (see Helminth Parasites of Fish). Fish frequently serve as intermediate or transport hosts for larval parasites of many animals, including humans. Helminths with direct life cycles are most important in dense populations, and heavy parasite burdens are sometimes found. In general, heavy parasite burdens seem to be more common in fish originating from wild sources.
Monogeneans have direct life cycles and are common, highly pathogenic, and obligatory parasites of the skin and gills. Freshwater parasites tend to be ∼0.1–0.8 mm long and are best seen microscopically; however, several important species parasitizing marine fish are significantly larger and may be visible grossly. The worms can be identified by their characteristic hold-fast organ, the haptor, which is armed with large and small hooks. Aquarium and cultured fish are subject to a rapid buildup of parasites by continuous infection and worm transfer to other fish in the tank or pond. Although many species are host-specific, the more common types seen in aquaria are less selective. The 2 most common genera in freshwater aquaria are Gyrodactylus and Dactylogyrus. Gyrodactylus, a common parasite of goldfish, gives birth to live young and is usually found on skin; Dactylogyrus lays eggs and is principally a parasite of the gills. G salaris is a reportable disease of salmonids but has not been reported in the USA (see Helminth Parasites of Fish). Any gyrodactylid found on a salmonid species should be identified well enough to determine whether or not it is G salaris. Neobenedenia and Benedenia are important monogeneans in marine fish. They also attach to skin and gill tissue, although Neobenedenia may also be found on the cornea. Both of these species lay sticky eggs that are easily transmitted via fomite. Monogenean-infected fish may show behavioral signs of irritation, including flashing and rubbing the sides of their bodies against objects in the aquarium. Fish become pale as colors fade. They breathe rapidly and distend their gill covers, exposing swollen, pale gills. Localized skin lesions appear with scattered hemorrhages and ulcerations. Ulceration of the cornea may become evident if the eyes are involved. Mortality may be high or chronic.
Praziquantel (2 mg/L, prolonged bath) is the treatment of choice for monogenean infection in freshwater and marine ornamental fish. Formalin is the only treatment option for food fish. Multiple treatments at weekly intervals are recommended for Dactylogyrus because eggs may be resistant to chemical treatment. Organophosphates (0.25 mg/L, prolonged bath) have been used successfully in ornamental fish in the past, but treatment with praziquantel is considered more effective. Organophosphates should be avoided in systems containing elasmobranchs. Monogeneans on marine fish can be removed using freshwater dips for 1–5 min, depending on the tolerance of the species; however, eggs will not be damaged or removed. To prevent the disease, introduction of infected fish should be avoided.
Digeneans have complicated life cycles, with several larval stages that infect one or more hosts. With rare exceptions, the first intermediate host is a mollusc, without which the life cycle generally cannot be completed. A diagnosis usually can be established by gross or microscopic examinations that reveal the cercarial, metacercarial, or adult worms in any of the tissues or body cavities of the fish. Fish tend to form pigmented tissue encapsulations that encyst the parasites. Depending on the color of the cysts in the skin, the condition is called black, white, or yellow grub disease. Heavily parasitized fish often are weak, thin, inactive, and feed poorly. Treatment is not recommended.
Pond-reared, juvenile, tropical fish may develop severe gill disease from metacercarial cysts in gill tissue. Although acute death is occasionally seen, infected fish more commonly die during harvest or shipping when they may be exposed to suboptimal dissolved oxygen concentrations. Treatment of infected fish has not been successful; however, prevention of the disease by elimination of the intermediate host, a freshwater snail, has been effective.
Bolbophorus confusus is a digenean trematode that causes mortality in channel catfish fingerlings in production ponds in Mississippi, Louisiana, and Alabama. The definitive host of B confusus is the white pelican, and the first intermediate host is the ram's horn snail (Helisoma spp). Cercariae released from snails encyst in fish tissue, forming metacercariae in any tissue, but the majority are found in skin and skeletal muscle of the peduncle of juvenile channel catfish. Severe disease (mortality up to 95%) occurs when metacercariae encyst in visceral organs, particularly the posterior kidney and liver. Involvement of these organs can result in a presentation similar to enteric septicemia or channel catfish virus disease, characterized by fluid accumulation in the abdomen and exophthalmia. Skin and muscle lesions typically result in raised bumps that are white to reddish in color. The presence of digenea in skeletal muscle can result in condemnation of affected carcasses by processing plants.
Both larval and adult cestodes are common in fish. Larval forms encyst in visceral organs and muscle, while adults usually are found in the intestinal tract. Aquatic Crustacea are the most common intermediate host for fish; accordingly, wild and cultured pond fish may be heavily infected. Humans acquire Diphyllobothrium latum, the broad fish tapeworm, by eating larval tapeworms in the flesh of food fish. Aquarium fish may be purchased with heavy cestode infections but have limited exposure once in the aquarium (unless fed infected intermediate hosts). There is no safe, effective treatment for larval tapeworm infections. The Asian tapeworm, Bothriocephalus acheilognathus, is occasionally seen in carp and aquarium fish. It is usually found in the anterior intestine and may be associated with enteritis and degeneration of the intestinal wall. Praziquantel is the drug of choice for treatment of cestodes in ornamental fish, but it is not approved for any aquatic use and cannot be used in food animals.
Acanthocephalids (thorny-headed worms) are common in wild fish as both larval tissue stages and adult intestinal parasites. They are more common in salmonid and marine fish. Arthropods are the first intermediate host. Adult acanthocephala are easily recognized by their protrusible proboscis, armed with many recurved hooks.
Nematodes are common in wild fish that are exposed to the intermediate hosts. Fish may be definitive hosts for adult nematodes, or they may act as transport or intermediate hosts for larval nematode forms (anisakids, eustrongylids, and others) that infect higher vertebrate predators, including humans. Encysted or free nematodes can be found in almost any tissue or body cavity of fish. Aquarium and cultured pond fish may be heavily infected if crustacean intermediate hosts are present. Cyclops and Daphnia spp are common intermediate hosts for Philometra sp, a nematode that is pathogenic for guppies and other aquarium fish. These blood-red worms can be seen in the swollen abdominal cavity and protruding from the anus of affected fish (red worm disease). Capillaria spp are commonly found in aquarium fish, particularly freshwater angelfish. Heavy infections in juvenile angelfish have been associated with poor growth rates and an inability to withstand shipping and handling. Treatment with fenbendazole (25 mg/kg for 3 days) is recommended, but efficacy has not been firmly established. Levamisole (10 mg/L) administered as a bath treatment for 3 days has also been recommended. Ivermectin is highly toxic to aquarium fish, particularly cichlids, and its use is not recommended.
Leeches are parasitic bloodsuckers of fish and also serve as vectors for blood parasites of fish (eg, Trypanosoma, Cryptobia, and haemogregarines). They can produce a debilitating anemia due to chronic blood loss and disease. Leech infestations are most common in wild fish, but aquarium and pond infestations can occur by introduction of infested fish, plants, etc. Organophosphates (0.25 mg/L, prolonged bath) are effective but not approved for use in food fish. Further, environmental regulations may restrict use in outdoor ponds. Multiple treatments may be required to control leeches because eggs are resilient and juveniles may continue to hatch. Preventive measures include avoiding leeches (ie, effective quarantine). Infestations in recreational fishing ponds are often self-limiting.
Some copepods, such as the anchor worm, are obligatory parasites of finfish during specific stages of their complicated life cycle. They lose their copepod form, including their appendages, and become rod- or sac-like structures specifically adapted for piercing, holding, feeding, and reproducing. Grossly, they appear as barb-like attachments to the skin or gills, where they feed on blood and tissue fluids. They can cause hemorrhage, anemia, and tissue destruction, as well as provide a portal of entry for other pathogens. Many different species of these parasites can be found on freshwater and marine fish. The anchor worms, Lernaea spp, are commonly found in a wide variety of aquarium- and pond-reared fish, including goldfish and other cyprinids. Ergasilus spp infest the gills. Organophosphates are effective in controlling copepod parasites, but legal restrictions constrain clinical use (see Parasiticides). Some success has been achieved in freshwater fish by giving infected fish a 3% (30 ppt = 30 g/L) salt dip (<10 min, remove fish when it rolls) followed by 5 ppt (5 g/L) salt added to the affected tank for 3 wk. The increased salinity kills immature forms as they hatch.
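The salt regimen above translates directly into grams of salt per volume of water. A minimal sketch of the arithmetic, assuming hypothetical dip-bucket and tank volumes:

```python
# Worked arithmetic for the salt treatment described above:
# a 3% dip (30 g/L) for <10 min, then 5 g/L in the affected tank for 3 weeks.

def salt_needed_g(volume_l: float, concentration_g_per_l: float) -> float:
    """Grams of salt needed to reach a target concentration in a water volume."""
    return volume_l * concentration_g_per_l

dip_bucket_l = 10.0    # assumed dip-container volume
tank_l = 200.0         # assumed affected-tank volume

print(salt_needed_g(dip_bucket_l, 30.0))  # 300 g for the 3% (30 g/L) dip
print(salt_needed_g(tank_l, 5.0))         # 1000 g to hold the tank at 5 ppt
```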
Lice (Branchiura) are related to the parasitic copepods and have flattened bodies adapted for rapid movement over the skin surface. By means of hooks and suckers, they periodically attach for feeding by inserting the piercing mouth part (stylet) into the skin. Sea lice (Lepeophtheirus salmonis) are a significant disease problem of pen-reared salmonids. Consultation with a salmonid health specialist is suggested if these parasites are encountered, as treatment options are limited and environmental concerns are significant. Argulus spp are lice commonly found on aquarium, pond-reared, and wild freshwater fish. Organophosphates (0.25 ppm, prolonged bath) are used for treating infested aquarium fish but are not approved for use in food fish.
Last full review/revision July 2011 by Ruth Francis-Floyd, DVM, MS, DACZM
Answer: Magma is molten rock inside the Earth; lava is molten rock on the surface of the Earth.
Rock Characteristics: Used to tell the history and origin of the rock. Did it form inside or on the surface of Earth? How long did it take to form?
Rock Texture: Relates to the size of the mineral crystals. Crystal size indicates the amount of time the rock took to cool down and solidify: the larger the crystals, the longer the time of formation. Types of textures: coarse grained (large crystals) and fine grained (small crystals, usually not visible without a microscope).
Other texture types: Glassy (no crystals, even under a microscope) indicates very rapid cooling of lava, in seconds. Vesicular (rocks with holes) indicates the escape of gases during rapid cooling.
Color (2nd characteristic): Color indicates the composition of the rock. Light-colored rocks (whites, pinks, and grays) have high concentrations of Si, O, K, and Na; these rocks have a felsic composition and formed within continents.
Dark-colored rocks, dominated by dark greens and black, have high concentrations of Fe, Mg, and Ca; these rocks have a mafic composition and formed within oceans.
Nothing is black or white: a lot of rocks are combinations of felsic and mafic materials. Their color will be medium dark (usually dark gray); these have an intermediate composition.
Density (3rd characteristic): Mafic rocks are generally more dense than felsic rocks.
Rock Names: Match the name to your rock sample based on your observations.
- Basalt: fine grained, mafic
- Gabbro: coarse grained, mafic
- Granite: coarse grained, felsic
- Obsidian: glassy, mafic
- Pumice: vesicular (very small holes), felsic
- Tuff: fine grained, felsic
- Scoria: vesicular (larger holes), mafic
Types of volcanic rocks: Obsidian is mafic glass with no crystals (instant solidification); tuff and pumice are felsic with tiny crystals and holes (quick solidification); basalt is mafic with tiny crystals (solidified from lava); granite is felsic with large crystals (solidified from magma without exposure to air).
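The matching exercise amounts to a lookup on two observations, texture and composition. Here is a minimal sketch of the slides' own identification key as code; the texture labels are paraphrased from the list above, and the `identify` helper is hypothetical:

```python
# The rock-naming exercise above, written as a (texture, composition) lookup.
# The key mirrors the slides' own classifications.
ROCK_KEY = {
    ("fine grained", "mafic"): "basalt",
    ("coarse grained", "mafic"): "gabbro",
    ("coarse grained", "felsic"): "granite",
    ("glassy", "mafic"): "obsidian",
    ("vesicular, small holes", "felsic"): "pumice",
    ("fine grained", "felsic"): "tuff",
    ("vesicular, large holes", "mafic"): "scoria",
}

def identify(texture: str, composition: str) -> str:
    """Name a rock sample from its observed texture and composition."""
    return ROCK_KEY.get((texture, composition), "no match in this key")

print(identify("coarse grained", "felsic"))  # granite
print(identify("glassy", "mafic"))           # obsidian
```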
What do the rocks tell us about their history? Basalt, obsidian, pumice, tuff, and scoria are examples of extrusive igneous rocks. These rocks were created from material extruded in a volcanic eruption. In general, extrusive rocks have no crystals or very small crystals, reflecting a short cooling time.
Gabbro and granite are examples of intrusive igneous rocks, formed when magma is given a chance to cool slowly inside the Earth over millions of years, exposed at the surface only later through erosion.
Why do intrusive rocks grow such big crystals? Because the rocks in the interior of the Earth are poor conductors of heat, it takes a long time for the hot magma to cool. Thus, intrusive igneous rocks have a coarse-grained texture due to their slow cooling.
The Sun Yet Warms His Native Ground
LOST LAND OF THE DODO: An Ecological History of Mauritius, Réunion, and Rodrigues. Anthony Cheke and Julian Hume. 464 pp. Yale University Press, 2008. $55.
Oceanic island chains built by volcanic activity are initially lifeless—dark spots on a lighter sea. But no matter how remote their location, eventually they are colonized. A bird flies in, bringing seeds in its gut or caught in its feathers. Other seeds arrive borne by wind or water. A lizard floats in on driftwood. A seal hauls itself onto the shore. Forests emerge and the islands slowly change color from gray-black to green.
On hundreds of islands, landscapes of rock have come to life in this fashion. The particular trajectories of the biota on any island depend on which species arrived first. In each set of islands, different lineages prosper. The radiation of species is happenstance, following Darwin’s rules, with an infinite variety of possible ends.
When humans arrive on islands, new trajectories ensue (most of them unfortunate). Easter Island was once a subtropical island forested with a variety of tree species, including many palms. When humans first discovered the island remains a subject of debate, with estimates ranging from A.D. 300 to 1200. Yet by the time Europeans arrived in 1722, many of the island’s native species, including nearly all the trees, were apparently extinct. Visitors now find a land of grassy hillsides punctuated by the island’s famous stone heads.
The Galápagos Islands were not discovered until the 16th century. Early visitors included whalers and hunters, who diminished the populations of seals and tortoises but left much of the island life largely unaffected. When Darwin arrived in 1835, the islands were still home to nearly the full variety of endemic species that would have been seen thousands of years earlier. It was in these species that Darwin would, some years later, discern the mechanics of natural selection. In the beaks of finches and mockingbirds, he found the evolutionary consequences of competition for scarce resources. Today, along with salt-spitting marine iguanas and near-tame sea lions, one can still find 29 species of birds (22 of them endemic to the Galápagos), including flightless anhingas and wild hawks unafraid to alight on a visitor’s head. The future of this wonderland is tenuous; invasive species have wreaked havoc on some of the islands, and humans (particularly those engaged in commercial fishing) have done additional damage. And yet the Galápagos remain the best living evidence of the sort of flowering of life that occurs on volcanic islands.
These contrasting stories raise the question of why we have been left so few islands where a great deal of diversity has been preserved, as it has on the Galápagos, and so many islands where diversity has largely been lost, as on Easter Island. However, Easter Island is perhaps not the best place to find the answer to that question. Much of what has been surmised about its decline is disputed. The theory that Easter Islanders brought about the doom of their civilization by shortsightedly cutting down all the trees is undermined, some believe, by recent evidence suggesting that rats brought there by the first settlers (rather than the settlers themselves) caused the deforestation, by eating seeds that would otherwise have become trees.
More is known about what happened on the Mascarene Islands—Mauritius, Réunion and Rodrigues, which lie a thousand kilometers east of Madagascar in the Indian Ocean. Here much of the ecological drama was recorded as it unfolded. And now life on the Mascarenes has been described in great detail by Anthony Cheke and Julian Hume in Lost Land of the Dodo. They tell the story of the islands in three acts—the evolution of the biota, the impact of humans from the 17th century onward, and the future prospects for restoration of the islands’ devastated ecosystem.
Like the Galápagos and Easter Island, Mauritius, Réunion and Rodrigues are volcanic in origin. Seen from above, Réunion looks like a green rock—a moss-covered stone. Farther east is Mauritius, similar in size, and beyond it lies Rodrigues. Expanses of ocean hundreds of kilometers wide separate the three from any other land.
Cheke and Hume describe in great detail the origin, composition, biota, ecology and history of each island. But the most compelling depictions are visual—a series of 39 colorful paintings by Hume. These illustrations show a landscape so inviting that I was inspired to check on the cost of a flight to Mauritius. In one painting giant raven parrots (Lophopsittacus mauritianus) walk among tree roots. Another shows a hillside on the coast of Rodrigues carpeted with giant tortoises, re-creating a scene described by Francois Leguat, who lived on Rodrigues for two years beginning in 1690 and claimed he was able to walk for a hundred paces across the backs of the tortoises. Still others depict Mascarene parrots (Mascarinus mascarinus), a Réunion echo parakeet (Psittacula eques), Dubois’s parrot (described only once, in 1674), Réunion ibises (Threskiornis solitarius), hoopoe starlings (Fregilupus varius), flying foxes (Pteropus niger), a flightless wood-rail (Dryolimnas augusti) riding on a tortoise, and even a dodo (Raphus cucullatus) bending its head to preen.
These paintings alone are worth the price of the book. They are surely among the best re-creations of an island world. But they are just that, re-creations. Almost all of the species in the paintings are extinct. So the images are also simply sad.
After describing the flora and fauna that once flourished on the islands, the authors get down to the business of explaining what became of them. The records Cheke and Hume find are rich and, often, specific. The fall of the most fragile native species was observed and noted. Many losses can even be attributed to a particular cause. The authors reconstruct for the reader—species by species—just what happened. For almost no other islands has it been possible to explain the extinctions in such detail.
Consider what the world’s islands would be like if history had gone differently. The Galápagos have experienced some environmental degradation, but people began conservation efforts there relatively early, and as a consequence many endemic species remain. Had we done the same on other islands, we could have spent generations documenting the life on them. Each island has different species, and so does each archipelago. The differences increase over time, each island drifting in its own direction. To have known well the fauna of islands would have given us the best possible window into the history and possibilities of life.
But our behavior has rarely been benign. Humans arrived in North America and the Pleistocene megafauna went extinct. We arrived in Australia and the same thing happened. We made it to oceanic islands and the waves of extinctions continued. Instead of growing more different with time, the isolated regions of the world have become more similar as a result of extinctions and introduced species.
In the Mascarenes, as elsewhere, the first humans to arrive were at best ignorant, at worst aggressive in their antagonism toward nature. It was realized early that something needed to be done to conserve forests. On Mauritius, the French government began to regulate hunting and forest clearance as early as the late 1700s. Initially conservation projects ignored rare species, but by the early 1800s (when the reality of extinctions was all too apparent), that changed. In theory all that was missing was the realization that nonnative species (including pigs, monkeys, deer and rats) needed to be extirpated rather than encouraged. But the efforts to get rid of them were too little, and even by the 1700s already too late. Some species were saved by the early conservation efforts or survived on smaller islands that were more remote, but just as many were lost.
As for what caused the native species and habitats of the islands to decline, it was rats. But it was also forest clearance and hunting, pigs and monkeys, agriculture and erosion. Just about everything you can imagine played a role. What the Mascarenes have to teach us is that when we recognize that the environment is being degraded, we must act quickly and effectively. Otherwise, the deterioration will continue: Species will disappear, and the forest will dwindle. Our descendants will look back at our well-documented catastrophe and wonder why we dithered.
Lost Land of the Dodo is not a fast read. It is thick with details. Nearly 200 pages are taken up by the endnotes and references. But it is the details that make it a necessary read. Here is the story of what can happen to an island when we ignore the consequences of our actions.
Rob Dunn is an assistant professor in the department of biology at North Carolina State University. He is the author of Every Living Thing: Man’s Obsessive Quest to Catalog Life, from Nanobacteria to New Monkeys (HarperCollins, 2008).
The word ”thymus” comes from the Greek word ”thymos” which means ”soul,” ”heart,” ”life,” and ”desire.”
It is situated between the lungs, on top of the pericardium of the heart, in front of the aorta, behind the breastbone (sternum), and below the thyroid.
This gland is divided into 2 lobes that lie on either side of the midline of the body, and into lobules (smaller subdivisions). It has 3 main layers:
- the capsule is the thin covering over the outside of this gland;
- the cortex is the layer which surrounds the medulla;
- the medulla is the inside part of this gland.
The medulla and cortex are made up of a mixture of lymphocytes (a type of white blood cell) and squamous epithelial cells.
The arterial supply to this gland is via small branches of the internal thoracic arteries and the anterior intercostal arteries.
It serves a vital role in the development and training of T-lymphocytes (T cells), an important type of white blood cell. T cells protect against foreign organisms (such as bacteria, fungi, and viruses) that have managed to infect body cells. The production of T-lymphocytes by this gland starts in embryonic life, around the 8th week of gestation.
Despite its important role in immune health, the gland is not active during our entire lifetime. It is relatively large in infancy (at birth it measures about 5 cm in length, 4 cm in breadth, and 6 mm in thickness), reaches its maximal weight in adolescence between 12 and 19 years, and gradually involutes with age, with progressive fatty replacement of its cellular components.
Thymus cancer is very rare, accounting for about 0.2 to 1.5 percent of all malignancies. More than 90% of tumors which develop in this gland are thymomas: tumors that start in the cells lining the outside of the gland and tend to grow slowly. It is very rare for them to spread outside of this gland. Common symptoms include:
- chest pain;
- difficulty swallowing.
Note – about half of these tumors are detected on a plain chest x-ray performed for other reasons and most sufferers have no symptoms.
Myasthenia gravis is a long-term condition which causes weakness in the voluntary muscles. It is relatively rare, affecting around 18 out of every 100,000 Americans.
Severe combined immunodeficiency (SCID) develops when an individual carries a mutation in a gene which regulates the development of T cells.
Congenital thymic hypoplasia, also referred to as DiGeorge syndrome, is a rare condition in which a missing portion of chromosome 22 causes a child to be born with an underdeveloped thymus or none at all. Common symptoms may include:
- behavior problems;
- learning delays;
- nasal-sounding speech;
- delays in rolling over;
- poor muscle tone;
- breathing problems;
- gastrointestinal problems;
- delayed growth;
- a gap in the roof of the mouth;
- wide-set eyes;
- an underdeveloped chin;
- frequent infections;
- a heart murmur.
Thymolipoma is a rare, benign tumor which contains thymic and adipose tissues. It is commonly asymptomatic; however, some people experience chest pain, shortness of breath, or a cough.
The thyroid is a butterfly-shaped gland located just below the Adam’s apple at the front of the neck. Its two lobes lie on either side of the windpipe (trachea), the cartilaginous tube that connects the larynx to the lungs, and are joined together by a bridge of tissue (the isthmus) that crosses over the front of the windpipe.
During development, this gland initially forms in the floor of the primitive pharynx. It descends down the neck to lie in its adult anatomical position.
Its size varies with each individual’s size as well as iodine intake; it typically weighs 15 to 20 grams.
The gland’s nerve supply comes from the superior, middle, and inferior cervical sympathetic ganglia. It gets its blood supply from the inferior and superior thyroid arteries.
It secretes thyroxine (T4), a relatively inactive prohormone, along with a smaller amount of the active hormone triiodothyronine (T3). These 2 hormones are called the thyroid hormones. In addition, it produces thyrocalcitonin (calcitonin), a hormone that helps regulate the levels of calcium in the blood.
The thyroid’s hormones regulate important body functions, including:
- cholesterol levels;
- body temperature;
- heart rate;
- menstrual cycles;
- muscle strength;
- body weight;
- peripheral and central nervous systems.
Note – it cannot produce hormones on its own; it requires the assistance of the pituitary gland (hypophysis), which produces TSH, the thyroid-stimulating hormone.
Thyroiditis is an inflammation of this gland, commonly caused by an autoimmune condition or a viral infection. It can produce no symptoms or be painful.
Hypothyroidism is the inadequate production of thyroid hormone. In the US, this condition affects about 4.6% of people 12 years old and older. Common symptoms may include:
- loss of hair;
- elevated cholesterol levels;
- muscle and joint pain;
- facial swelling;
- chronic constipation;
- unexplained weight gain.
Hyperthyroidism is a condition in which there is an excessive amount of thyroid hormones. Graves’ disease (also referred to as toxic diffuse goiter) is the most frequent cause of hyperthyroidism, accounting for approximately 70% of cases.
Thyroid nodules develop in this gland and can start to secrete thyroid hormones, disrupting the body’s chemical balance. This condition can be managed by watchful waiting or surgery.
Thyroid cancer is a disease which occurs when abnormal cells start to grow in this gland. Each year, there are more than 56,000 new cases of this type of cancer in the United States. As this cancer grows, it may cause:
- swollen lymph nodes in the neck;
- pain in the throat and neck;
- difficulty swallowing;
- changes to the voice;
- a lump which can be felt through the skin on the neck.
Thymus vs Thyroid – Differences
The thymus is a gland in the chest that is at its largest in adolescence; it then gradually shrinks away throughout adulthood. As an important part of the lymphatic system, this gland produces white blood cells called T cells, which help the human body fight infection.
The thyroid is a butterfly-shaped gland which sits low on the front of the neck. This gland helps coordinate the creation and use of energy.
In this lesson, students think about what might happen to plants and animals if their environment changed and they were faced with conditions to which they were not well adapted. First, students read The Great Kapok Tree: A Tale of the Amazon Rain Forest by Lynne Cherry. Then they watch a video about camouflage and learn that praying mantises are well suited for life in the rain forest. Next, students play a predator/prey game to simulate what might happen to the praying mantis if the rain forest were cut down. Finally, they use a Web activity to explore what would happen to living things if the concentration of oxygen in the air changed.
- Understand the interrelationship between organisms (plants and animals) and their environment
- Understand that when environmental conditions change, some plants and animals survive and reproduce, while others die or move to new locations
- Observe ways in which changes in environmental conditions affect the organisms living in that environment
Grade Level: 3-5
- Two 45-minute blocks
- "The Great Kapok Tree: A Tale of the Amazon Rain Forest" by Lynne Cherry
- Handout: Atmospheric Oxygen Web Activity Worksheet PDF Document
- green print fabric (3' x 3')
- brown fabric (3' x 3')
- 100 one-inch squares of green construction paper
Before the Lesson
- Cut out the squares of green construction paper.
- Make a copy of the handout for each student.
Organisms can survive only in environments that meet their needs. The earth has many different environments, or biomes, and each has unique environmental conditions. These conditions, which include temperature, rainfall, soil quality, salinity, pH, and predators, present challenges to the living things born into that environment. Organisms have evolved features (structures and behaviors) that make them well adapted to tackle the challenges of the environment they live in. Changes in an organism's environment may result in death, migration, or survival of a few well-adapted individuals in the population.
1. Read aloud or have students read "The Great Kapok Tree: A Tale of the Amazon Rain Forest" by Lynne Cherry. Discuss the following:
- What impact will chopping down the rain forest have on the animals, the soil, and on humans?
2. Show students the Evolution of Camouflage video and discuss the following:
- The praying mantis is well disguised for life in a tropical tree. What might happen to praying mantises if all those trees were cut down?
3. Place a large piece of green print fabric on a table. Scatter the green paper squares randomly across the fabric. Have students gather around the table and stand with their backs to the fabric. Tell them not to turn around yet.
Explain to the class that the green print fabric represents the trees in the rain forest, the green squares represent praying mantises, and students are birds that feed on these insects. When you give the signal, students will turn around quickly and grab as many bugs as they can. They will get only five seconds to feed. When you say stop, students must turn back around. Give the signal when you and students are ready.
4. After students feed, have them count how many praying mantises are left on the fabric.
5. Replace the green fabric with the brown fabric and scatter the green paper squares across the fabric again. Explain to students that the brown fabric represents the rain forest after all the trees have been cut down. Have students repeat the feeding experiment.
6. After students feed, have them count how many praying mantises are left on the table this time. Discuss the results of the experiment. (For teachers who want a computational version of this game, a sketch appears at the end of the lesson.)
- What impact did changing the fabric color have on the number of green squares left on the table?
- How does this model illustrate the impact that cutting down trees has on the praying mantis population?
- What factors might be even more critical to mantis survival than lack of camouflage? (Example: food supply, shelter)
- If the trees were cut down, would all the praying mantises die? What features might help some of the remaining mantises survive in a treeless environment? (Example: a brownish coloring)
7. Tell students that changes in the air itself can also affect living things. Ask:
- What might happen to plants and animals if the amount of oxygen in the air changed?
8. Distribute a copy of the Atmospheric Oxygen Web Activity Worksheet (PDF) to each student. Have them answer the first two questions.
9. Have students conduct the Atmospheric Oxygen activity and answer the remaining questions on the handout. Then discuss their answers to the questions.
10. For homework, have students explain to their families why it's so important to preserve the tropical rain forest. Ask them to brainstorm things they as a family can do to help. (To encourage students to follow through, you might require them to bring in their brainstormed list signed by all family members participating in the activity.)
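For teachers comfortable with a little programming, the predator/prey game in steps 3-6 can also be run as a quick computer simulation, for example on a classroom projector. The sketch below is a supplementary illustration rather than part of the original lesson: the population size matches the 100 paper squares, but the detection probabilities and number of grabs are invented for demonstration.

```python
import random

def feeding_round(population, detect_chance, grabs):
    """One five-second feeding frenzy: each grab removes a mantis
    only if a 'bird' actually spots it against the background."""
    survivors = list(population)
    for _ in range(grabs):
        if survivors and random.random() < detect_chance:
            survivors.pop(random.randrange(len(survivors)))
    return survivors

# 100 green mantises, matching the 100 paper squares in the lesson.
mantises = ["green"] * 100

# Assumed detection probabilities: green bugs are hard to spot on the
# green print fabric, easy to spot on the brown "deforested" fabric.
on_green_fabric = feeding_round(mantises, detect_chance=0.2, grabs=60)
on_brown_fabric = feeding_round(mantises, detect_chance=0.9, grabs=60)

print("Survivors on green fabric:", len(on_green_fabric))
print("Survivors on brown fabric:", len(on_brown_fabric))
```

Because each grab is random, repeated runs give slightly different counts, which is a good springboard for discussing why camouflage improves the odds of survival without guaranteeing it. |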
Ever wondered who invented Kevlar? It’s a pretty niche thing to ponder at any given time, but luckily I have the answer for you!
Stephanie Kwolek invented Kevlar.
- Stephanie was born in 1924.
- She was a chemist.
- She discovered the fibers that led to the creation of Kevlar when she was in her 40s.
- She was inducted into the National Inventors Hall of Fame in 1994.
- She was inducted into the National Women’s Hall of Fame in 2003.
- Stephanie passed away in 2014, at the age of 90.
I’m not going to go in depth into the science, but here is an easy-to-digest summary:
“Unexpectedly, she discovered that under certain conditions, large numbers of polyamide molecules line up in parallel to form cloudy liquid crystalline solutions. Most researchers would have rejected the solution because it was fluid and cloudy rather than viscous and clear. But Kwolek took a chance and spun the solution into fibers more strong and stiff than had ever been created. This breakthrough opened up the possibilities for a host of new products resistant to tears, bullets, extreme temperatures, and other conditions.” (source)
In one of her interviews, Stephanie, straight out of college, was tenacious enough to ask for the outcome of her job interview on the spot. She credits this boldness with prompting the boss of the company to bring his secretary in and dictate a letter of offer to her then and there (source). You don’t get if you don’t ask!!
The thing I love the most about this story is that Stephanie tested a solution that most other scientists would have discarded because of its texture and appearance. Today the world of the academic and the researcher (scientists included) discourages that kind of risk-taking. You can’t take yourself too far outside of the norm, because then you will be contradicting or challenging those who are responsible for your funding, or your position at a university.
Unfortunately this leads to our research walking around in circles at times, and prevents the great “leaps” in progress that science, philosophy and intellectual thought were once famous for.
So this idea of just “giving something a go” to test how it would react, and the subsequent incredible outcome makes me so happy!
I am sure you have heard of Kevlar. But just in case you weren’t aware of some of these uses, here is a quick list:
- Body armour
- Coating on optical-fiber cables that run between countries under the sea
- Ropes (the fiber is strong enough to help hold up a bridge)
- Clothing for athletes
- Kitchenware (fry pans etc)
- And on and on (source).
The hard work Stephanie dedicated her life to has improved our lives in ways we now take for granted.
But Kevlar of course has also led to the protection of our service men and women in both military and police roles. I am sure many pacifist arguments would be against this and would not consider it a step forward. I get that, and while it is a complicated story bigger than this blog can afford, I for one have quite a positive view of police in my country (being very lucky to have had nothing but positive experiences with them, as afforded by my race, social status and circumstance), and am grateful that they are given more protection so they can continue to keep me safe.
Thank you to the following sites: |
Icebergs collect mini ecosystems, lock up carbon
Icebergs, released by global warming from the icy embrace of Antarctica, have surprised scientists by playing host to many forms of life.
According to new research published in the journal Science, the bergs also act as floating carbon sinks, net accumulators of carbon dioxide.
Now drifting through the Weddell Sea, the bergs are "hotspots" for ocean life thanks to trapped "terrestrial material" they have carried with them from the continent. The researchers estimate that the bergs are increasing the biological activity in as much as 40 per cent of the Weddell Sea.
As the icebergs melt, they release their earthy cargo far out at sea, creating a habitable zone with a radius of up to two miles around each berg. In this region, phytoplankton, krill, and fish all do well below the waterline. Attracted by all this food, populations of seabirds are thriving on the icebergs, apparently using them as temporary cruise liners.
"One important consequence of the increased biological productivity is that free-floating icebergs can serve as a route for carbon dioxide drawdown and sequestration of particulate carbon as it sinks into the deep sea," said oceanographer Ken Smith of the Monterey Bay Aquarium Research Institute (MBARI), first author and principal investigator for the research.
"While the melting of Antarctic ice shelves is contributing to rising sea levels and other climate change dynamics in complex ways, this additional role of removing carbon from the atmosphere may have implications for global climate models that need to be further studied," Smith added.
Smith's team carried out an astonishingly detailed and close-up study of the icebergs. They drew on satellite data from NASA to select their subjects, which they tracked in person from the research vessel Laurence M Gould. They also used a remotely operated vehicle (ROV) to explore the submerged sections of the floating ice mountains.
Bruce Robison, an oceanographer and ROV pilot, said: "We flew the ROV into underwater caves and to the undersides of the icebergs, identifying and counting animals with its colour video camera, collecting samples, and surveying its topography."
Researcher John Helly, of the San Diego Supercomputer Center (SDSC) at UC San Diego, concluded: "The whole is definitely greater than the sum of the parts." ® |
As frustrating as it may be at times, your preschooler’s display of strong emotions is developmentally right on target. Look on the bright side: Fierce, angry determination to wear that ready-to-be-laundered shirt can suddenly shift into joyful laughter at his new puppy’s antics. Your youngster is developing the cognitive awareness to recognize and label his and others’ feelings, so assist him in understanding emotions through a variety of painting activities.
Looking into a mirror and painting what she sees will put your preschooler in touch with what she looks like when she’s mad, sad or glad. Get out the paper, brushes and paints and give her an opportunity to put on paper what she sees reflected in a mirror. Ask questions such as, “What is your glad face?” or “How do you look when you are mad?” Let her glare or smile into the mirror and record what she sees with paint. Help her capture the details by pointing out, for example, what her eyebrows are doing when she changes her expression. She can label, with your assistance, each self-portrait she paints with its corresponding feeling.
Set out paper, brushes and a selection of basic colors – blue, green, yellow and red – in clear plastic cups. Talk with your preschooler about the emotions he associates with things that are colored blue, green, yellow and red. If, for example, he associates red with fire trucks, he might connect them with feelings of excitement or fear. If he associates yellow with the sun, he might connect it to feelings of pleasure or happiness. Have him take the colors, one at a time, and paint his expression of each color’s feeling. Encourage him to paint with strokes that add further expression to each color's feeling -- long swooshes of color for happiness or little dots of color for anger, for example.
Your preschooler is making symbolic associations every time she sees a picture of a cake and thinks of birthdays or thinks of family beach vacations when she sees a bucket and shovel. Use a picture book to investigate what symbols your youngster associates with certain emotions. Refer to a page with a selection of images on it, and ask her questions such as “Which one is happy?” or “Where do you see sadness?” Let her paint the images that symbolically connect her to specific feelings. If, for example, a house stands for happiness to her, let her create a scene that features her happy house at its center.
Developmentally, preschoolers are beginning to understand and empathize with the feelings of others. Assist your youngster in this growth by providing him an opportunity to paint the feelings of his favorite movie, web series or TV character. Let him watch a snippet of the show, prompting him to notice the character’s feelings. Have him label and paint his own rendition of that character's emotion, encouraging him to choose facial expressions and colors that signify the feeling he most associates with his favorite character. |
Sensory details – a description of something using one of the 5 senses
Class begins with a group reading of a Halloween Experience essay. Each student will be given a copy and asked to read along. Teacher will ask students to:
write "sparks" (quick margin notes) on any connections they make
highlight any sensory details or action verbs
Teacher will read essay aloud. After reading, students will be first allowed to volunteer any sparks they wrote. Next, students will be called on to share the action verbs and sensory details they found.
Students will write journal entries, choosing one of the two topics:
Think of a time you had a happy or fun experience that relates to Halloween.
Think of a time you had a disastrous experience that relates to Halloween.
Students will write their experiences down in their journals. Focus should be on personal experience, using sensory details and action verbs to “show, don't tell.” These should be narratives, in first person.
Students will be asked to THINK/PAIR/SHARE after their journal entries – first to share them with groups, and then volunteers to share with the class. Students should also take notes during this part of the activity on any details/ideas they might be able to incorporate into their own written accounts.
Teacher will pass out 3 example articles about banning Halloween in various communities. The first will be read aloud by teacher, with the class reading along and highlighting anything they agree with or object to.
For the 2nd article, teacher will once again read the piece aloud. This time, students will be asked to discuss in small groups anything mentioned in the article that they agree or disagree with. For the last article, teacher will read aloud. Students will be asked to write a short paragraph on anything mentioned in the article that they agree or disagree with.
Next, teacher will go over persuasive writing essay rubric with class. Students will be called on to read small sections aloud, with a quick summary by the teacher at the end. Students will then be given a model essay which they will read individually and grade with the given rubric. Teacher will call on students to share the scores they gave the essay, with reasons.
Homework: Students will be asked to pretend that their city is planning to ban Halloween this year. They will write a journal entry with 2 columns, 1 for reasons for banning Halloween and 1 for reasons for keeping Halloween. They must have at least 2 entries on each side.
Students will begin by sharing their homework in small groups.
After the activity, students will begin rough drafts of a letter in response to the following prompt:
The local county School Board is planning on banning Halloween in all public schools this year. Write a letter to the board expressing your support of or your opposition to their choice.
Students should use at least 2 examples from the column that they are choosing to write their letter on. They should also address a potential objection based on 1 example from their homework on the side they are choosing not to write on. Remind students to use personal examples and support them.
Remind students that their persuasive letters will be graded the same way they graded earlier essays. |
In the popular understanding of biology, genes are indisputably in charge, dictating most aspects of how living things look and act. But it turns out that genetics is controlled by a higher power still: epigenetics, which determines whether a gene is active or not, and thus whether it can have any effect on an organism. Epigenetic changes are what differentiates a heart cell from a brain cell by turning some genes off and leaving others on. Sometimes, epigenetic changes can be a cause of human diseases and cancer progression. Through epigenetics, a person’s diet can even affect her granddaughter’s lifespan. Hidetoshi Saze and the Plant Epigenetics Unit he heads study how epigenetic changes come about, a question with wide-ranging implications for both plants and animals.
Epigenetics refers to a variety of chemical changes to DNA and its attached proteins that do not change the underlying genetic code, but can temporarily keep genes from being used by the cell. If genes were books in a library, epigenetic controls would be librarians who can either lock the books in a musty back room or allow them to circulate. Epigenetic changes help organisms adapt to their environments by using genetic information only when they need it. However, those modifications occasionally go awry, as, for example, when a cell attaches methyl molecules to a vast range of genes that are needed for normal growth and development, effectively locking them up. “I’d like to know why such spontaneous epigenetic changes occur,” says Prof. Saze.
To find out, the Plant Epigenetics Unit searches for mutants of the model plant Arabidopsis that show tell-tale signs of indiscriminate methylation, such as deformed leaves or reduced fertility. They then sequence the mutants’ genomes and compare them to a reference genome in order to find the genes responsible for the effects. One finding so far has been a gene for an enzyme that removes methyl groups; when the gene is mutated, the enzyme doesn’t work, and the methyl groups stay on.
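The comparison step can be pictured with a toy example. Real pipelines align millions of short sequencing reads and must also handle insertions, deletions and methylation calls, but at its core the idea is to flag every position where a mutant differs from the reference. The sketch below uses invented miniature sequences purely for illustration:

```python
def find_differences(reference: str, mutant: str):
    """List positions where the mutant sequence differs from the
    reference -- raw candidates for the gene behind a phenotype."""
    return [(i, ref_base, mut_base)
            for i, (ref_base, mut_base) in enumerate(zip(reference, mutant))
            if ref_base != mut_base]

# Invented toy sequences; a real Arabidopsis genome runs to roughly
# a hundred million bases.
reference = "ATGGCGTACGATCCA"
mutant = "ATGGCGTACGTTCCA"

for pos, ref_base, mut_base in find_differences(reference, mutant):
    print(f"position {pos}: {ref_base} -> {mut_base}")
```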
“Amazingly, many organisms, including plants and mammals, use very similar enzymes and chemical modifications of proteins for epigenetic regulation of gene activity,” says Prof. Saze. Thus, his work will shed light on epigenetic controls not only in Arabidopsis, but in a host of other organisms as well. We’ll then be one step closer to understanding what controls the controllers. |
What Is Gilbert Syndrome?
Gilbert Syndrome is a common genetic liver disorder found in 3-12% of the population.
It produces elevated levels of unconjugated bilirubin in the bloodstream.
However, it normally has no serious consequences, although mild jaundice may appear under conditions of exertion or stress.
Cause Of Gilbert’s Syndrome:
Gilbert’s syndrome is caused by the presence of an abnormal gene, which one inherits from one’s parents.
The gene in question normally controls an enzyme that helps break down bilirubin in the liver.
With an ineffective gene, excess amounts of bilirubin build up in the bloodstream.
The presence of the following factors increases the likelihood of developing Gilbert’s syndrome:
• Both parents carry the abnormal gene that causes the disorder
• Being male
Symptoms Of Gilbert’s Syndrome:
The following signs and symptoms are exhibited by those who suffer from Gilbert’s Syndrome:
• feeling tired all the time (fatigue),
• difficulty maintaining concentration,
• unusual patterns of anxiety,
• loss of appetite,
• abdominal pain,
• itching (with no rash),
Diagnosis Of Gilbert’s Syndrome:
After the initial procedures of taking a medical history and performing a physical examination, a doctor may confirm a diagnosis of Gilbert’s Syndrome via the following tests (a sketch of the underlying reasoning follows the list):
• Complete blood count
• Liver function tests
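The diagnostic logic these tests support is essentially a rule of exclusion: bilirubin is elevated while everything else looks normal. The sketch below is a conceptual illustration of that reasoning only, not clinical guidance; the flag names are invented.

```python
def consistent_with_gilberts(bilirubin_elevated: bool,
                             liver_tests_otherwise_normal: bool,
                             blood_count_normal: bool) -> bool:
    """Toy rule of thumb: Gilbert's syndrome is suspected when
    unconjugated bilirubin is high but the complete blood count and
    the remaining liver function tests show nothing else wrong."""
    return (bilirubin_elevated
            and liver_tests_otherwise_normal
            and blood_count_normal)

# Elevated bilirubin with otherwise clean results fits the pattern;
# an abnormal liver panel points to some other cause instead.
print(consistent_with_gilberts(True, True, True))   # True
print(consistent_with_gilberts(True, False, True))  # False
```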
Treatment Of Gilbert’s Syndrome:
Gilbert’s Syndrome itself does not require any treatment.
Where jaundice is an issue, enzyme inducers such as Carbamazepine and Phenobarbital can reduce unconjugated bilirubin levels and may relieve other associated symptoms of Gilbert’s Syndrome.
Furthermore, jaundice can be minimized by slightly modifying the person’s diet.
By: Natural Health News |
A construction crew paints a white roof in downtown Washington, D.C.
Courtesy of Maria Jose-Vinas, American Geophysical Union
White Roofs May Successfully Cool Cities
News story originally written on January 28, 2010
Painting roofs white can cool cities. That’s what scientists discovered in a new study that used a computer model to examine how white roofs affect temperature.
"Our research demonstrates that white roofs, at least in theory, can be an effective method for reducing urban heat," says scientist Keith Oleson.
Cities are affected more by global warming than rural areas. Roads, dark roofs and other surfaces in cities absorb heat from the Sun. This creates an urban heat island effect that can raise temperatures 2-5 degrees Fahrenheit (about 1-3 degrees Celsius) or more, compared to rural areas.
White roofs would reflect some of that heat back into space and cool temperatures, much like wearing a white shirt on a sunny day can be cooler than wearing a dark shirt.
The study team used a newly developed computer model to simulate the amount of solar radiation that is absorbed or reflected in cities. The model results indicate that, if every roof were painted white, the urban heat island effect could be reduced by a third.
This would cool the world's cities an average of about 0.7°F. There would be more cooling during the day, especially in summer. Cities in different areas of the world would have different amounts of cooling. New York City, for example, would cool in summer afternoons by almost 2 degrees Fahrenheit.
In the real world, the cooling impact might be somewhat less because it’s hard to keep a roof looking white. Over time the white paint may darken with dust and decay. Some parts of roofs, such as vents, can’t be painted white.
White roofs would also cool temperatures inside buildings. This would have an impact on the amount of energy used to heat and air condition the space. Since most of this energy usually comes from fossil fuels, which release heat-trapping greenhouse gases into the atmosphere, white roofs could affect the amount of global warming too.
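The core of the idea is simple albedo arithmetic: a roof absorbs whatever fraction of sunlight it does not reflect. The sketch below is a back-of-the-envelope illustration with made-up but plausible values; it is not the research model described in this story, which also accounts for heat storage, longwave radiation and city layout.

```python
def absorbed_solar(insolation_w_m2: float, albedo: float) -> float:
    """Shortwave power absorbed per square meter of roof:
    the fraction not reflected (1 - albedo) is absorbed."""
    return insolation_w_m2 * (1.0 - albedo)

CLEAR_SKY_NOON = 1000.0  # W/m^2, a typical clear-sky midday value

# Illustrative albedos: dark roofing reflects roughly 10% of
# sunlight, while fresh white paint can reflect around 70%.
dark_roof = absorbed_solar(CLEAR_SKY_NOON, albedo=0.10)
white_roof = absorbed_solar(CLEAR_SKY_NOON, albedo=0.70)

print(f"Dark roof absorbs:  {dark_roof:.0f} W per square meter")
print(f"White roof absorbs: {white_roof:.0f} W per square meter")
print(f"Reduction: {100 * (dark_roof - white_roof) / dark_roof:.0f}%")
```

The per-roof reduction is far larger than the citywide 0.7°F figure because roofs are only part of the urban surface, and because the absorbed energy interacts with the rest of the climate system. |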
Earth's Meteorological Monsters
Part A: 2005 Hurricane Season
Click on the picture below to watch a NASA video of the 2005 Atlantic Hurricane Season.
Sea Surface Temperature
Questions and Discussion
After everyone has seen the video, you'll hold a class discussion about what you've learned from the visualization and what questions you had about what you saw. To get ready for the discussion, respond to these questions on your Activity Sheet.
Stop and Think
1: Generalizing from the information in this video, describe where most hurricanes form and how they move across the Atlantic Ocean basin. Does there seem to be anything particular about those places that helps hurricanes to form?
2: Come up with four questions about things from the video that you didn't understand.
Now, watch the video again. This time, focus on the path that each hurricane follows. Use the image on the right to help you identify latitude ranges.
Checking In
Answer the following questions to check your understanding of the information presented in the video. (A short summary of the pattern follows the answers.)
- What direction are most hurricanes traveling between 10° and 20° N latitude? Most hurricanes in this latitude range are traveling westward.
- At what latitude do many Atlantic hurricanes generally begin moving eastward? Around 30° N latitude
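The latitude pattern in the answers above (westward motion in the tropics, eastward motion after storms curve poleward) is regular enough to write down as a tiny rule of thumb. The sketch below is an illustrative simplification, since real storm tracks vary; the steering-wind labels are added context rather than something stated in the video.

```python
def typical_atlantic_track(latitude_deg_north: float) -> str:
    """Very rough rule of thumb for Atlantic hurricane motion,
    based on the pattern in the 2005-season visualization."""
    if 10 <= latitude_deg_north <= 20:
        return "westward (steered by the easterly trade winds)"
    if latitude_deg_north >= 30:
        return "eastward (steered by the mid-latitude westerlies)"
    return "turning poleward (recurving between the two wind belts)"

for lat in (12, 25, 35):
    print(f"{lat} deg N: {typical_atlantic_track(lat)}")
```

Students can compare the function's output with the paths they traced while re-watching the video. |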
The Balkans is an area of southeastern Europe situated at a major crossroads between mainland Europe and the Near East. The distinct identity and fragmentation of the Balkans owes much to its common and often violent history and to its very mountainous geography.
The "Kurgan hypothesis" of Proto-Indo-European (PIE) origins assumes gradual expansion of the "Kurgan culture", around 5000 BC, until it encompassed the entire pontic steppe. Kurgan IV was identified with the Yamna culture of around 3000 BC.
The other peoples of the Balkans organized themselves in large tribal unions, such as the Thracian Odrysian empire, created in the 5th century BC. Other tribal unions existed in Dacia at least as early as the beginning of the 2nd century BC under King Oroles. The Illyrian tribes were situated in the area corresponding to today's Adriatic coast. The name Illyrii was originally used to refer to a people occupying an area centered on Lake Skadar, situated between Albania and Montenegro (Illyrians proper). However, the term was subsequently used by the Greeks and Romans as a generic name to refer to different peoples within a well defined but much greater area.
The Illyrian king Bardyllis turned Illyria into a formidable local power in the 4th century BC. The main cities of the Illyrian kingdom were Scodra (present-day Shkodra, Albania) and Rhizon (present-day Risan, Montenegro). In 359 BC, King Perdiccas III of Macedon was killed by attacking Illyrians.
But in 358 BC, Philip II of Macedon, father of Alexander the Great, defeated the Illyrians and assumed control of their territory as far as Lake Ohrid. Alexander himself routed the forces of the Illyrian chieftain Cleitus in 335 BC, and Illyrian tribal leaders and soldiers accompanied Alexander on his conquest of Persia.
After Alexander's death in 323 BC, the Greek tribes started fighting among themselves, while up north, independent Illyrian kingdoms again arose. In 312 BC, King Glaukias seized Epidamnus. By the end of the 3rd century BC, an Illyrian kingdom based in Scodra controlled parts of northern Albania, Montenegro, and Herzegovina. Under Queen Teuta, Illyrians attacked Roman merchant vessels plying the Adriatic Sea and gave Rome an excuse to invade the Balkans. In the Illyrian Wars of 229 BC and 219 BC, Rome overran the Illyrian settlements in the Neretva river valley and suppressed the piracy that had made the Adriatic unsafe. In 180 BC, the Dalmatians declared themselves independent of the Illyrian king Gentius, who kept his capital at Scodra. The Romans defeated Gentius, the last king of Illyria, at Scodra in 168 BC and captured him, bringing him to Rome in 165 BC. Four client-republics were set up, which were in fact ruled by Rome. Later, the region was directly governed by Rome and organized as a province, with Scodra as its capital. Also in 168 BC, taking advantage of the constant Greek civil wars, the Romans defeated Perseus, the last king of Macedonia, and with the help of their allies in southern Greece they became lords of the region. The Greek territories were split into Macedonia, Achaia and Epirus.
Starting in the 2nd century BC the rising Roman Empire began annexing the Balkan area, transforming it into one of the Empire's most prosperous and stable regions. To this day, the Roman legacy is clearly visible in the numerous monuments and artifacts scattered throughout the Balkans, and most importantly in the Latin based languages used by almost 25 million people in the area. However, the Roman influence failed to dissolve Greek culture, which gradually acquired a predominant status in the Eastern half of the Empire, more so in the southern half of the Balkans.
Beginning in the 3rd century AD, Rome's frontiers in the Balkans were weakened because of political and economic disorders within the Empire. During this time, the Balkans, especially Illyricum, grew to greater importance. It became one of the Empire's four prefectures, and many warriors, administrators and emperors arose from the region. Many rulers built their residences in this part of the region. Though the situation had stabilized temporarily by the time of Constantine, waves of non-Roman peoples, most prominently the Thervings, Greuthungs and Huns, began to cross into the territory, first (in the case of the Thervingi) as refugees with imperial permission to take shelter from their foes the Huns, then later as invaders. Turning on their hosts after decades of servitude and simmering hostility, Thervingi under Fritigern and later Visigoths under Alaric I eventually conquered and laid waste the entire Balkan region before moving westward to invade Italy itself. By the end of the Empire the region had become a conduit for invaders to move westward, as well as the scene of treaties and complex political maneuvers by Romans, Goths and Huns, all seeking the best advantage for their peoples amid the shifting and disorderly final decades of Roman imperial power.
Christianity first came to the area when Saint Paul and some of his followers traveled in the Balkans, passing through Thracian and Greek populated areas. He spread Christianity to the Greeks at Beroia, Thessaloniki, Athens, Corinth and Dyrrachium. Saint Andrew also worked among the Dacians and Scythians, and had preached in Dobruja and Pontus Euxinus. In 46 AD, this territory was conquered by the Romans and annexed to Moesia. In 106 AD the emperor Trajan invaded Dacia. Subsequently Christian colonists, soldiers and slaves came to Dacia and spread Christianity. In the third century the number of Christians grew. In 313, Emperor Constantine issued the Edict of Milan, ending all Roman-sponsored persecution of Christianity, and the area became a haven for Christians. Just twelve years later, in 325, Constantine assembled the First Council of Nicaea. In 391, Theodosius I made Christianity the official religion of Rome.
The East-West Schism, known also as the Great Schism (though this latter term sometimes refers to the later Western Schism), was the event that divided Christianity into Western Catholicism and Greek Eastern Orthodoxy, following the dividing line of the Empire into Western Latin-speaking and Eastern Greek-speaking parts. Though normally dated to 1054, when Pope Leo IX and Patriarch of Constantinople Michael I Cerularius excommunicated each other, the East-West Schism was actually the result of an extended period of estrangement between the two Churches. The primary claimed causes of the Schism were disputes over papal authority—the Pope claimed he held authority over the four Eastern patriarchs, while the patriarchs claimed that the Pope was merely a first among equals—and over the insertion of the filioque clause into the Nicene Creed. The most serious (and real) cause, of course, was the competition for power between the old and the new capitals of the Roman Empire (Rome and Constantinople). There were other, less significant catalysts for the Schism, including variance over liturgical practices and conflicting claims of jurisdiction.
The Byzantine Empire is the historiographical term used to describe the Greek-speaking Eastern Roman Empire during the Middle Ages, centered at its capital in Constantinople. During most of its history the Eastern Roman Empire controlled many provinces in the Balkans and in Asia Minor. The Eastern Roman Emperor Justinian for a time retook and restored much of the territory once held by the unified Roman Empire, from Spain and Italy to Anatolia. Unlike the Western Roman Empire, which met a famous if rather ill-defined death in the year 476 AD, the Eastern Roman Empire came to a much less famous but far more definitive conclusion at the hands of Mehmet II and the Ottoman Empire in the year 1453. Its expert military and diplomatic power inadvertently ensured that Western Europe remained safe from many of the more devastating invasions from eastern peoples, at a time when the still new and fragile Western Christian kingdoms might have had difficulty containing them (this role was mirrored in the north by the Russian states of Kiev, Vladimir-Suzdal and Novgorod).
The magnitude of influence and contribution the Byzantine Empire made to Europe and Christendom has only begun to be recognised recently. The Emperor Justinian I's formation of a new code of law, the Corpus Juris Civilis, served as a basis of subsequent development of legal codes. Byzantium played an important role in the transmission of classical knowledge to the Islamic world and to Renaissance Italy. Its rich historiographical tradition preserved ancient knowledge upon which splendid art, architecture, literature and technological achievements were built. This is embodied in the Byzantine version of Christianity, which spread Orthodoxy and eventually led to the creation of the so-called "Byzantine commonwealth" (a term coined by 20th-century historians) throughout Eastern Europe. Early Byzantine missionary work spread Orthodox Christianity to various Slavic peoples, amongst whom it still is a predominant religion.
Throughout its history, its borders were ever fluctuating, often involved in multi-sided conflicts not only with the Arabs, Persians and Turks of the east, but also with its Christian neighbours: the Bulgarians, Serbs, Normans and the Crusaders, all of whom at one time or another conquered large amounts of its territory. By the end, the empire consisted of nothing but Constantinople and small holdings in mainland Greece, with all other territories in both the Balkans and Asia Minor gone. The conclusion was reached in 1453, when the city was successfully besieged by Mehmet II, bringing the Second Rome to an end.
Coinciding with the decline of the Roman Empire, many 'barbarian' tribes passed through the Balkans, most of which did not leave any lasting state. During these 'Dark Ages' eastern Europe, like western Europe, regressed culturally and economically, although enclaves of prosperity and culture persisted along the coastal towns of the Adriatic and the major Greek cities in the south. As the Byzantine Empire withdrew its borders more and more, in an attempt to consolidate its fledgling power, vast areas were de-urbanised, roads abandoned and native populations may have withdrawn to isolated areas such as mountains and forests.
The first such tribe to enter the Balkans were the Goths. From northern East Germany, they migrated up the Vistula and settled in Scythia (modern Ukraine and Romania) in the 3rd century AD. Population pressures and the threat of the Huns led to their push further into the Balkans, into the Roman Empire. They were eventually granted lands inside the Byzantine realm (south of the Danube), as foederati. However, after a period of famine, a large contingent, led predominantly by (what would become) the Visigoths, rebelled against the Byzantines and defeated Emperor Valens at the famous Battle of Adrianople in 378. They subsequently sacked Rome in 410. In an attempt to deal with them, the succeeding emperor granted them rule of the Aquitaine region, in modern-day France, where they founded the Visigothic kingdom. In the meantime, the Ostrogoths freed themselves from Hunnish domination at the Battle of Nedao in 454 AD. Theodoric the Great, the Ostrogothic king, was commissioned by Byzantine Emperor Zeno to conquer Italy from Odoacer of the foederati. He completed this by 493, establishing the Ostrogothic kingdom of Italy (which included Dalmatia). Thus Zeno achieved two goals with one action: he removed the Ostrogoths from his border, and extinguished the rule of the troublesome Italian foederati. The Ostrogoths established a kingdom in Italy which included the north-western Balkans, before it was defeated by the Byzantines.
From their new base in the Caucasus, the Huns then moved further west into Europe, entering Pannonia in 400-410 AD. They were a confederation of different ethnicities: a Mongol ruling core, as well as Turkic and Uralic elements, and later incorporated various Germanic (Goths, Gepids), Sarmatian (including Alan) and Slavic tribes. They are supposed to have triggered the great Germanic migrations into western Europe. From their base, they subdued many peoples and carved out a sphere of terror extending from Germany and the Baltic to the Black Sea. With the death of Attila in 453 AD, succession struggles led to the rapid collapse of Hun prestige. At the Battle of Nedao in 454, the Huns' subjects, led by the Gepid king Ardaric, defeated Attila's would-be successors. The Huns disappeared from Europe as an entity, but their legend has lived on.
Other Germanic peoples that settled briefly in the Balkans were the Gepids and Lombards. The Gepids entered Dacia in the 3rd century, living alongside the Goths. After winning their independence from the Huns, they settled in Dacia and a province near modern-day Belgrade, establishing a short-lived kingdom. When the Lombards entered Pannonia in the 550s AD, they defeated the Gepids and absorbed them. In 568 AD, they moved into northern Italy, establishing their own kingdom at the expense of the Ostrogoths.
The Slavs migrated in successive waves. Small numbers might have moved down as early as the 3rd century; however, the bulk of the migration did not occur until the late 500s AD. They occupied most of the Eastern Roman Empire, pushing deep into Greece. Most still remained subjects of the Roman Empire, but those that settled in the Pannonian plain were tributary to the Avars.
Most historians and archeologists support the theory that the Slavic homeland originated in areas spanning modern-day southern Poland and the Elbe valley in Germany. Since antiquity, the Balkans were already occupied by Illyrian tribes in the west and Thracian tribes in the east, many of which were Latinised (especially along the Dalmatian coast) and/or Hellenised (in the south). Their numbers were greatly decreased by the previous barbarian incursions. Many fled to mountainous areas or to the refuges of the cities on the Dalmatian coast. When the Slavs arrived, they were the first barbarian tribes to actually settle in the area permanently. They assimilated many of the native Balkan people. However, some retained their own cultures and language: scholars theorise that the Morlach/Vlach mountain tribes and the Albanians are descended from such people. The Latinised Illyrians of the Dalmatian coast also remained distinct from the Slavs of the hinterland for quite some time, but they too eventually assimilated with the main population.
The Avars were probably a Turkic group, possibly with a ruling core derived from the Rouran who had escaped China. They entered Pannonia in the late 500s AD, forcing the Lombards to flee to Italy. They continuously raided the Balkans, contributing to the general decline of the area which had commenced centuries earlier. After their unsuccessful siege of Constantinople in 626, they limited themselves to Pannonia. They ruled over the Pannonian Slavs who had already inhabited the region. By the early 800s, the Avar confederacy had collapsed due to internal conflicts and Frankish and Slavic attacks. The remnant Avars were subsequently absorbed by the Slavs and Magyars.
The Bulgars (also Bolgars or proto-Bulgarians) were a people of Central Asia, probably originally Pamirian. The major Bulgar wave commenced with the arrival of Asparuh's Bulgars. Asparuh was one of the successors of Kubrat, the Great Khan. The Bulgars had occupied the fertile plains of the Ukraine for several centuries until the Khazars swept away their confederation in the 660s and triggered their further migration. One part of them — under the leadership of Asparuh — headed southwest and settled in the 670s in present-day Bessarabia. In 680 AD they invaded Moesia and Dobrudja and formed a confederation with the local Slavic tribes who had migrated there a century earlier. After suffering a defeat at the hands of the Bulgars and Slavs, the Byzantine Empire recognised the sovereignty of Asparuh's Khanate in a treaty signed in 681 AD. The same year is usually regarded as the year of the establishment of Bulgaria (see History of Bulgaria). A smaller group of Bulgars under Khan Kouber settled almost simultaneously in the Pelagonian plain in western Macedonia after spending some time in Pannonia. Some Bulgars had actually entered Europe earlier with the Huns. After the disintegration of the Hunnish Empire these Bulgars dispersed mostly to eastern Europe.
The Magyars, led by Árpád, were the leading clan in a ten-tribe confederacy. They entered Europe in the late 800s AD, settling in Pannonia. There they encountered a predominantly Slavic populace and Avar remnants. The Magyars were a Uralic people, originating from west of the Ural Mountains. They learned the art of horseback warfare from Turkic peoples. They then migrated further west around 400 AD, settling in the Don-Dnieper area. Here they were subjects of the Khazar Khaganate. They were neighboured by the Bulgars and Alans. They sided with three rebel Khazar tribes against the ruling factions. Their loss in this civil war, and ongoing battles with the Pechenegs, was probably the catalyst for them to move further west into Europe.
Even after the newcomers (i.e. Slavs, Magyars and Bulgars) to the Balkans established kingdoms and principalities recognised by the European theatre, invasions continued into Europe. Between 1000 and 1300 AD, nomadic Turkic peoples from the east entered the fringes of the Balkans. These included the Cumans and Pechenegs. Often allied with Byzantium (hired as mercenaries against the Rus at one time, against the Bulgars at another), they just as easily would break alliance and attack Byzantium. The situation was similar with their dealings with the Rus to the north. These steppe peoples ceased to exist as a formidable body after the Mongol invasion in the 13th century. Some of the westernmost regions of the steppe land, such as the Moldavia region, escaped outright Mongol dominion. Here the people were largely assimilated by the Bulgarian, Hungarian and Romanian populace, adding to the ethnic milieu that is the Balkans.
The maximum extent of the Roman Empire in southeastern Europe occurred after 106 AD, when the conquest of the Dacians extended the empire from modern Greece to Romania. By all accounts, the Latin-speaking people of the Roman Empire represented both a variety of indigenous peoples and colonists who came into the region. Under barbarian pressure, the Roman legions retreated from Dacia (modern Romania) in 271-275. According to Romanian historians, Roman colonists and the Latinized Dacians retreated into the Carpathian Mountains of Transylvania after the Roman legions withdrew from the area. This view is supported to the extent that archeological evidence does indicate the presence of a Romanised population in Transylvania by at least the 8th century.
By the late 4th century the Roman Empire was plagued by internal problems and by the incursions of various barbarian tribes. By the 7th and 8th centuries, the Roman Empire existed only south of the Danube River in the form of the Byzantine Empire, with its capital at Constantinople. In this ethnically diverse closing era of the Roman Empire, Vlachs were recognized as those who spoke Latin, the official language of the Byzantine Empire, which was used only in official documents until the 6th century, when it was replaced by the more popular Greek. These original Vlachs probably consisted of a variety of ethnic groups (most notably Thracians and Greeks) who shared the commonality of having been assimilated into the language and culture of the Eastern Roman, later Byzantine, Empire.
In 886 AD, Bulgaria adopted the Glagolitic alphabet, which had been devised by the Byzantine missionaries Saints Cyril and Methodius in the 860s. The Glagolitic alphabet was gradually superseded in later centuries by the Cyrillic alphabet, developed around the Preslav Literary School at the beginning of the 10th century. Most letters in the Cyrillic alphabet were borrowed from the Greek alphabet, but those which had no Greek equivalents represent simplified Glagolitic letters.
The first mention of the Slavic dialects that would later constitute the Bulgarian language as the "Bulgarian language", instead of the "Slavonic language", comes in the work of the Greek clergy of the Bulgarian Archbishopric of Ohrid in the 11th century, for example in the Greek hagiography of Saint Clement of Ohrid by Theophylact of Ohrid (late 11th century).
In 893 the vernacular of the Bulgarian Slavs was adopted as the official language of the Bulgarian state and church. The following years saw the military victories of Simeon the Great against the Byzantines, which resulted in additional territorial expansion and the recognition of the autocephaly of the Bulgarian Orthodox Church and of the title of Tsar for Simeon's successor, Peter I. The state soon weakened, however, in the middle of the 10th century, as a result of barbarian raids from the north and the Bogomil heresy. After an assault by the Rus' in 969, eastern Bulgaria and the capital of Preslav were subdued by Byzantine Emperor John Tzimisces in 972. The Bulgarians managed to maintain an independent state in the west for some time due to the efforts of Samuil, who even managed to recover eastern Bulgaria and conquer Serbia in the 990s. A final defeat at Kleidion in 1014, however, precipitated the fall of the whole of Bulgaria under Byzantine rule in 1018. The Bulgarian state was restored by a revolt of the Asenides in Moesia in 1185. Thrace and Macedonia were conquered by Kaloyan and Ivan Asen II, and throughout the first half of the 13th century Bulgaria was again one of the most powerful states in Southeastern Europe, taking advantage of the disastrous effects that the Fourth Crusade had on the Byzantine Empire. The Tatar raids and the series of mediocre rulers after Ivan Asen II, however, reduced Bulgaria to a narrow strip of land between the Balkan mountains and the Danube at the end of the 13th century. The royal dynasties of Terter and Shishman managed to restore some of the former might of the Bulgarians in the first half of the 14th century. The raids of the Ottoman Turks from the 1350s onward cut short the Bulgarian territorial expansion, however; by 1396 the whole of Bulgaria was overrun by the Ottomans.
Rascia and Doclea were the two most dominant Serb states. Apart from occasional brief unifications, the states were mostly independent. There were constant power struggles between the various princes. This disunity halted any consolidation of power and often resulted in interference from foreign rulers (Byzantine Greece, Venice, Hungary, Bulgaria, even the Normans). Despite this fact, the cultural achievements that arose from these states were very significant, and forged a proud Serbian national identity.
The Serb tribes were Christianised after their arrival in the Balkans by Byzantine Greek missionaries. Not all Serb tribes took on the new faith at once, but by the 840s the Serbs were predominantly Christian, a process finalized by the missions of Saints Cyril and Methodius. After the Great Schism of 1054, the eastern areas were influenced by the Greek Orthodox Church, whereas the Adriatic areas followed the Latin rite.
Early on all states recognised Byzantine suzerainty, although in practice Byzantine rule was limited to the coastal areas. In 925 AD, the Serb lands were invaded by Tsar Simeon of the Bulgarian Kingdom. In 927 Caslav Klonimirovic unified Raska with Doclea, Zachlumje, Travunia and Pagania. The Bosnian bans (chiefs) also joined the confederacy. They ousted the hostile Bulgarians and re-established Serbian independence. The death of Caslav in 960 brought the end of the House of Vlastimirovic, as well as Serbian unity. Bosnia also withdrew from the union, and was forced into vassalage by Croatia. The Byzantines easily re-asserted their authority over the Serbian lands, and ruled the area for almost 100 years.
The decline of Raska's power saw the rise of Doclea as the centre of Serbian rule and culture. A Travunian noble family won the succession struggles, creating a personal union between the states of Doclea, Travunia and Zahumlje. The first such prince was Predimir.
Serbia continued to expand, winning new territory to the north, including the city of Belgrade, the Srem region and northern Bosnia. Medieval Serbia enjoyed a high political, economic, and cultural reputation in Europe. It reached its apex in the mid-14th century, during the rule of Tsar Stefan Dušan, conquering Macedonia and most of Greece. He crowned himself Emperor of Serbs, Greeks and Tribals in 1346 in Skopje. During Dušan's campaigns, the Ottomans raided Europe for the first time, being used as mercenaries by the ousted Byzantine Emperor (who would soon realise that they would not leave after their tasks were complete). Dušan's aim was to capture Constantinople, abolish the defunct Byzantine Empire, and create a new unified Orthodox Empire centred on Serbia. However, he died in his own lands before he could begin his march. After his death his successor Uros the Weak lost central authority, and died childless in 1371.
Power was divided between local despots. At the Battle of Maritza in 1371 (where a 70,000-strong coalition of Serbs and Bulgarians lost to the Ottomans), the majority of Serbia's nobility were killed. Despot Lazar continued to rule over Serbia, as he did not participate in the battle. In the Battle of Kosovo (1389), Lazar led a final coalition of some 15,000 to 30,000 troops, which included Bosnian, Croatian and even Romanian contingents. Whether the battle was a victory, draw or loss, it left Serbia incapable of raising any further armies. Eventually all of Serbia fell to the Ottomans by 1459.
Zeta continued to be ruled by the Balsic and then the Crnojevic families until loss of rule in the 1500s. Part of the land was incorporated into Ottoman rule, as the Sanjak of Montenegro. Part proudly remained independent as a new theocratic state ruled by the Vladikas (Prince-Bishops).
The Franks controlled the Pannonian duchy (which served as a Carolingian mark). They recognised Byzantine authority over the Adriatic coast, while the Franks kept the adjacent littoral and Istria. Despite a short-lived rebellion by Duke Ljudevit Posavski, the Franks re-asserted their authority in the north. In 829, the Bulgarians conquered the eastern parts of Pannonian Croatia and placed a local called Ratimir as duke. The Frankish lord Ratbod recaptured most of the area in 838, although the eastern-most part (Syrmia) was kept by Bulgaria. The last known Pannonian duke under Frankish fealty was Braslav.
Meanwhile the Dalmatian Croats were struggling to establish their own rule over the coastal area, leading them into conflicts with Venice and Byzantium. Duke Mislav built up a vast navy and supported the Slavic pirates from Pagania in their disruption of Venetian trade. A Venetian expedition aimed at pacifying and subduing them was largely unsuccessful. The Croats also came into conflict with Boris I of Bulgaria as he tried to expand Bulgaria's kingdom westward. Mislav's successor Trpimir succeeded in expelling the Bulgarians from Croatian lands, consolidated his power in Dalmatia and moved inland to Pannonia and north-east Bosnia. Duke Muncimir managed to secure recognition of the duchy as independent from Roman and Byzantine rule. He was succeeded by Tomislav in 910, who united the Croatian duchies to form the Kingdom of Croatia.
The founding of the Croatian Kingdom occurred sometime between 923 and 928, covering Dalmatia (including Pagania and Zahumlje at times), the majority of Bosnia (at the Kingdom's zenith) and Pannonia (which includes Slavonia). One of the successor Kings, Miroslav, was assassinated by one of his nobles. The ensuing power struggle destabilised the kingdom. This allowed the Paganian Dukes to claim independence from Croatia, the Dalmatian city-states were retaken by the Byzantines, and Slavonia and Srijem fell to the Magyars (although later lower Srijem was taken by Stefan Dragutin from Raska, and subsequently continued to be contested between Serbia and Hungary).
The Kingdom recovered much of its lands under Kresimir IV. During this time, he allowed the Vatican to influence Croatia more and more, in exchange for papal recognition of the Croatian Kingdom. Despite being a Latin-rite Christian state, for a time Croatia's religious practice showed many features of Orthodoxy: the priests wore beards, married, and preached in the Slavic liturgy. This changed after the Synod of Split decreed Latin as the official liturgical language and pro-Latin priests became dominant, although pockets of Slavic-liturgy churches remained till the 16th century.
Kresimir was succeeded by his relative Zvonimir. After his death in 1091, Hungarian King Ladislaus I claimed the throne, as his sister Jelena was Zvonimir's widow. The Croatian dukes managed to maintain independence until King Kalman (Ladislaus' successor) invaded Croatia. Rome recognised his sovereignty. Although his take-over was not complete, the nobles accepted union with Hungary after the death of Petar Svacic (the last Croatian king) in battle. This was supposedly decreed by the Pacta Conventa in 1102. Croatia was still considered a separate, albeit vassal, kingdom.
The Dalmatian coast was always sought after, for its wealthy Latinised cities were centres of trade, culture and academia, and its coast provided access to important trade routes. Gradually, Byzantine influence over the Latin cities of the coast, which was nominal at best, faded away, being supplanted by that of Venice by the 1000s AD. The Normans briefly held a few cities on the coast, and Hungary was often in conflict with Venice over Dalmatia. Ultimately, Venice remained as ruler of the Dalmatian coastal cities, even withstanding the Ottoman invasions. The southern city of Dubrovnik (Ragusa) managed to remain an independent city-state: the Republic of Ragusa.
Union with Hungary brought feudalism to Croatia's populace. Croatian provinces were ruled by local bans, appointed by Hungary. The territory was split into two banates: that of Croatia (including Dalmatia and central Croatia) and that of Slavonia. Although some bans, such as the Subic family, would attempt to assert their own control, Hungary would easily regain rule.
With the Ottoman conquest of the Balkans, Croatia fell after successive battles. The Battle of Mohacs in 1526 ended Hungarian rule over Croatia, and most of Croatia was ruled by the Ottomans. The remaining part then received Austrian rule and protection. Croatia thus became a frontier of Christendom. The border areas became known as the Vojna Krajina (military frontier); and many Serbs, Vlachs, Croats and Germans inhabited this area that had previously become deserted. They served as a military guard, and in turn received much autonomy from the Hapsburgs.
Bosnia was initially part of the Kingdom of Croatia. However, that kingdom began to fall apart amid warring with Hungary, internal power struggles and Venetian intervention. Bosnia also fell temporarily under Bulgarian rule, and the Byzantines established their authority in 1019. It then briefly fell under Croatian influence again in the 1060s, under Kresimir IV. Constantin Bodin from Doclea then conquered it and emplaced his own vassal to rule Bosnia. After his death in 1101, Bosnia's bans tried to rule for themselves. However, they would all too often find themselves in a tug-of-war between Hungary and the Byzantine Empire.
The first recorded ban (viceroy) was Ban Boric, vassal to the Hungarian king. However, he was deposed when he backed the loser in a succession crisis over the Hungarian throne. In 1166, Byzantium reconquered Bosnia and emplaced its own vassal as ban: Kulin. He was a successful ruler. He promoted economic growth in Bosnia by signing trade treaties with the city of Ragusa. After turning his back on the Byzantines, he allied himself with Hungary and his relative Stefan Nemanja of Serbia to drive the Byzantines out of the land, securing Bosnian independence from Byzantium (but thus returning it to Hungarian influence). He supported the Bosnian Church, a Christian offshoot labeled as heretical by both Orthodoxy and the Pope, yet he swore to the Pope his devotion to Catholicism to avoid a religious 'crusade'. After his death in 1204, he was succeeded by his son Stephen. Stephen was a staunch Catholic, and proved unpopular with the many Bosnian Church-aligned nobles, who deposed him. They placed one Matej Ninoslav, a convert to the heretic sect, as ban. However, he faced two foes simultaneously: the Croat herzog Coloman (backed by Hungary and the Pope) and Stephen's son Count Sibislav. Miraculously he held out, as Hungary had to pull out after being invaded by the Tartars. After he died, Hungary placed his cousin Prijezda on the throne. Prijezda was a Catholic who converted to Bogomilism, and then converted back to Catholicism. To prove his fidelity, he energetically persecuted the heretics.
After his death, Stephen I Kotroman became ban. However, he lost rule of Bosnia to Croatia's Subic clan, who were given support by the Angevin pretender to the Hungarian throne as a reward for backing him in his succession claim. Subic rule was unpopular amongst the Bosnian people, so they asked Stephen II Kotroman (son of Stephen I) to rule as the clan's vassal. He aptly played Hungary and Venice against each other (in a conflict over the city of Zadar), becoming more and more independent.
By this time, the Bosnian state had already begun expanding, gaining lands to the north from Hungary, and seizing Zahumlje from a rebellious noble family (which had itself seized it from the Nemanjic rulers of Serbia; Stephen then refused to return it to Serbia's king).
After Stephen's death in 1353, he was succeeded by his nephew Tvrtko. Although Tvrtko was at one point deposed after conflict with other nobles and was troubled by his usurping brother, the Bosnian realm reached its zenith under his rule, gaining more lands to the north and south, including parts of Croatia and Dalmatia (including Travunia). The name Herzegovina was adopted for the newly won territories along the southern Dalmatian coast and adjacent littoral.
With the decline of Serbia and the end of the Nemanjic dynasty, Tvrtko crowned himself on 26 October 1377 as Stefan Tvrtko I, "by the mercy of God King of Serbs, Bosnia and the Seaside and the Western Lands". He sent troops to fight alongside the remaining Serbian nobles, such as Lazar, at the Battle of Kosovo in 1389. After his death, Bosnia's regional power declined, and it was soon just another state to fall to the Ottoman war machine.
Bosnia lay between the Roman and Byzantine worlds, and consequently neither Catholicism nor Eastern Orthodoxy was dominant there. In fact, it had its own 'Bosnian Church', which resembled both Catholicism and Orthodoxy while incorporating local superstitious beliefs. It was branded heretical by both Rome and Constantinople, and accused of being linked to the Bogomil sect. Much of the populace belonged to the local Bosnian Church, yet its influence was not deeply rooted. Although Catholic at face value, the ruling Bans mostly tolerated the Bosnian Church, and some converted to it. The Pope, with the aid of Catholic Hungary, was often infuriated by the Bans' feeble attempts to quell the heretical sect, and sought to incite a religious crusade against Bosnia. Ultimately, it was the lack of a strong and unified religious orientation that enabled Islam to take hold in such high numbers in Bosnia, whereas other Ottoman dominions held onto their Catholic or Orthodox faiths. With the Ottoman takeover, the Bosnian Church ceased to exist as its followers converted to Islam. Bosnians who were Orthodox or Catholic remained so, but they were now joined by a new religion – Islam. The 'ethnic' tensions that arose in modern times stem from this religious division.
For a long time, the Romanian lands were not consolidated provinces but mere collections of a few villages each. At this time, the Cumans settled in the northeastern areas of Romania and over time assimilated with the Romanians (Vlachs). Meanwhile, the Magyars settled in the Carpathian basin, west of the Carpathian Mountains, and eventually consolidated into the Hungarian kingdom, which included Transylvania (the part of Romania that lies west of the Carpathian divide). A revived, second Bulgarian Empire arose in 1185 with the help of Vlach fighters. This new kingdom extended some influence over the southern Romanian lands, but that influence was limited by the strength of the Hungarian Kingdom, the rise of the independent Wallachian principality, and its own decline from the 1240s.
The principality of Wallachia emerged as a unified, independent province in 1330, when Basarab I defeated his liege, Charles I of Anjou of Hungary. Moldavia is said to have been founded by Dragos, Knyaz of Maramures, who was sent by the Hungarian king to the area to establish a buffer zone protecting Hungary from Tatar raids like those of the 1240s. In 1359, after falling out with the Hungarian king, another Vlach voivode from Maramures crossed the Carpathians, took Moldavia for himself and removed Hungarian control. Wallachia and Moldavia steadily gained strength in the 14th century, a peaceful and prosperous time throughout southeastern Europe. The Eastern Orthodox patriarch in Constantinople established an ecclesiastical seat in Wallachia and appointed a metropolitan. The church's recognition confirmed Wallachia's status as a principality, and Wallachia freed itself from Angevin suzerainty in 1380. Both principalities, however, remained heavily influenced by Hungary, as well as by the Polish Kingdom.
Transylvania was not part of Hungary from the start. During the existence of the Transylvanian principality, only the Hungarian nobles, the Szekely and the Saxon Germans held privileges. Some lesser Romanian nobles converted to Catholicism in an attempt to integrate into the Hungarian nobility.
In the 15th century, the Romanian principalities became tributary subjects of the Turks, though they were never outright conquered. In 1475, Stephen III ("the Great") of Moldavia scored a decisive victory over the Ottoman Empire at the Battle of Vaslui. With the fall of Hungary, Transylvania became a semi-independent territory, a vassal of the Turks.
The Ottomans were one of the most powerful and influential civilizations of the post-medieval period. The empire was created by Turkic tribes in Anatolia whose people had served as mercenaries for the Byzantine Empire since the 10th century.
The Ottoman Empire (1299 to 1923) persisted into the 20th century and ended only after World War I, when Turkey adopted a more European-style secular government under Kemal Atatürk. Ottoman rule over the Balkans was characterized by centuries of bloody struggle for freedom and protracted periods of stalemate with the Habsburgs along the border areas of Hungary, Croatia and Serbia. Anti-Turkish propaganda and outrage against the Islamic oppressors peaked in the early 20th century. Millions of Balkan people were slain or forcibly Islamized by the Ottomans.
The rise of nationalism in the declining Ottoman Empire caused the breakdown of the millet system. With the rise of national states and their own national histories, it is very hard to find reliable sources on the Ottoman concept of a nation and on the centuries of relations between the House of Ottoman and the provinces that turned into states. Unquestionably, understanding the Ottoman concept of nationhood helps us understand what happened during the empire's decline.
The Serbs were the first people to rise against the Ottomans, although their partial liberation was largely a by-product of Austrian penetration into the region. In 1821, the Greeks became the first to openly defy the Sultan's authority. After a long, bloody struggle, begun in Moldavia as a diversion and followed by the main revolution in the Peloponnese, the Peloponnese, along with the northern part of the Gulf of Corinth, became the first part of the Ottoman Empire to win complete liberation, in 1829. Serbia, Bulgaria, Romania and Montenegro followed in the 1870s.
Many members of the Austro-Hungarian government, such as Conrad von Hötzendorf, had hoped to provoke a war with Serbia for several years. They had several motives. In part they feared the power of Serbia and its ability to sow dissent and disruption in the empire's "south-Slav" provinces under the banner of a "greater Slav state." Another hope was to annex Serbian territories and so change the ethnic composition of the empire: with more Slavs in the empire, some in the German-dominated half of the government hoped to balance the power of the Magyar-dominated Hungarian government. Until 1914, more peaceful elements had been able to argue against these military strategies on either strategic or political grounds. However, Franz Ferdinand, a leading advocate of a peaceful solution, had been removed from the scene by his assassination, and more hawkish elements were able to prevail. Another factor was developments in Germany, which gave the Dual Monarchy a "blank cheque" to pursue a military strategy assured of Germany's backing.
Austro-Hungarian planning for operations against Serbia was not extensive, and the army ran into many logistical difficulties in mobilizing and beginning operations against the Serbs. It encountered problems with train schedules and mobilization schedules that conflicted with agricultural cycles in some areas. When operations began in early August, Austria-Hungary was unable to crush the Serbian armies as many within the monarchy had predicted. One difficulty was that the Austro-Hungarians had to divert many divisions north to counter advancing Russian armies. Planning for operations against Serbia had not accounted for possible Russian intervention, which the Austro-Hungarian army had assumed would be countered by Germany. However, the German army had long planned to attack France before turning to Russia in the event of a war with the Entente powers. (See: Schlieffen Plan) Poor communication between the two governments led to this catastrophic oversight.
As a result, Austria-Hungary's war effort was damaged almost beyond redemption within a couple of months of the war's beginning. The Serbian army, coming up from the south of the country, met the Austrian army at the Battle of Cer beginning on August 12, 1914.
The Serbians were set up in defensive positions against the Austro-Hungarians. The first attack came on August 16, between parts of the 21st Austro-Hungarian division and parts of the Serbian Combined division. In harsh night-time fighting, the battle ebbed and flowed until the Serbian line was rallied under the leadership of Stepa Stepanović. Three days later the Austrians retreated across the Danube, having suffered 21,000 casualties against 16,000 Serbian casualties. This marked the first Allied victory of the war, but the Austrians had not achieved their main goal of eliminating Serbia. In the following months the two armies fought large battles at the Drina (September 6 to November 11) and at Kolubara (November 16 to December 15).
In the autumn, with many Austro-Hungarian troops tied up in heavy fighting with Serbia, Russia was able to make huge inroads into Austria-Hungary, capturing Galicia and destroying much of the empire's fighting ability. It was not until October 1915, with substantial German, Bulgarian and Turkish assistance, that Serbia was finally occupied, although the weakened Serbian army retreated to Corfu with Italian assistance and continued to fight against the Central Powers.
The Serbian army also penetrated the three historic Croatian lands of Croatia, Dalmatia and Slavonia, as well as multiethnic Bosnia. The Serbian prime minister announced that Serbia would fight for the unification of all South Slavs in a single state. From this plan a new kingdom would eventually be born: the Kingdom of Serbs, Croats and Slovenes.
Montenegro declared war on 6 August 1914. Bulgaria at first stood aside, eventually joining the Central Powers in 1915, while Romania joined the Allies in 1916. In 1915 the Allies sent their ill-fated expedition to Gallipoli in the Dardanelles, and in the autumn of that year they established themselves in Salonika, opening a new front. Their armies, however, did not move from this front until near the end of the war, when they marched north to free the territories under Central Powers rule.
The war had enormous repercussions for the Balkan peninsula. People across the area suffered serious economic dislocation, and the mass mobilization resulted in severe casualties, particularly in Serbia. In less-developed areas World War I was felt in different ways: requisitioning of draft animals, for example, caused severe problems in villages that were already suffering from the enlistment of young men, and many recently created trade connections were ruined.
The borders of many states were completely redrawn, and the new Kingdom of Serbs, Croats, and Slovenes, later Yugoslavia, was created. Both Austria-Hungary and the Ottoman Empire were formally dissolved. As a result, the balance of power, economic relations, and ethnic divisions were completely altered.
Some important territorial changes include:
Between World War I and World War II, in order to create nation-states the following population movements were seen:
In the Balkans, World War II began with Italian attempts to recreate a great Italian state. Italy invaded Albania, its client state, in 1939 and then demanded that Greece surrender in 1940. The defiance of the Greek prime minister Metaxas on 28 October 1940 started the Greco-Italian War. After months of unsuccessful Italian fighting, the first Allied victories won by the Greeks, and the capture of half of Albania by Greek forces, Germany intervened to save its ally.

In 1941 the Germans invaded Yugoslavia using the forces they had assembled for the attack on the Soviet Union. With help from the Yugoslav minorities and Hungary, they succeeded in conquering Yugoslavia within a month. After the fall of Sarajevo to Nazi Germany on 16 April 1941, the Yugoslav provinces of Croatia, Bosnia, Herzegovina and parts of Serbia were recreated as a pro-Nazi satellite state, the Nezavisna Država Hrvatska (NDH, the Independent State of Croatia). The Croat nationalist Ante Pavelić was appointed its leader. The Nazis created the Handschar division and collaborated with the Ustaše and Chetniks in order to combat the Yugoslav Partisans.

The Germans then joined forces with Bulgaria and invaded Greece from the Yugoslav side. Despite Greek resistance, they took advantage of the Greek army's commitment in Albania to advance into northern Greece and conquer the entire country within a month, with the exception of Crete. Even with fierce Cretan resistance, which cost the Nazis the bulk of their elite paratrooper forces, the island capitulated after 11 days of fighting.

The Balkan frontiers were once again reshuffled, with the creation of several puppet states such as Croatia and Montenegro, the Albanian expansion into Greece and Yugoslavia, the Bulgarian annexation of territories in the Greek north, the creation of a Vlach state in the Greek mountains of Pindus, and the annexation of all the Ionian islands and part of the Aegean islands by Italy. Owing to severe resistance from the local Serb and Greek populations, and to attempts by Bulgarians, Croats and Albanians to change the ethnic composition of the occupied territories, several hundred thousand Serbs and Greeks died. With the end of the war, however, the changes reverted to their original conditions and the settlers returned to their homelands, mainly those who had settled in Greece. The Cams, an Albanian population of the Greek north numbering about 18,000 in 1944, were forced to flee their lands.
In Albania, Bulgaria and Romania the changes in the political and economic system were accompanied by a period of political and economic instability and tragic events. The same was the case in most of the former Yugoslav republics, except for Slovenia.
The Yugoslav federation also collapsed in the early 1990s, followed by an outbreak of violence and aggression in a series of conflicts known alternately as the Yugoslav Wars, the War in the Balkans, or, rarely, the Third Balkan War (a term coined by British journalist Misha Glenny). The disintegration of Yugoslavia was above all the consequence of unresolved national, political and economic questions. The conflicts caused the deaths of many innocent people.
The collapse of Yugoslavia was due to different factors in the various republics that composed it. In Serbia and Montenegro, there were efforts by different factions of the old party elite to retain power under new conditions, along with an attempt to create a Greater Serbia by keeping all Serbs in one state. In Croatia and Slovenia, multi-party elections produced nationally inclined leaderships that succeeded their Communist predecessors and oriented themselves towards capitalism and secession. Bosnia and Herzegovina was split between the conflicting interests of its Serbs, Croats, and Bosniaks, while the Former Yugoslav Republic of Macedonia mostly tried to steer away from the conflict.
The Ten-Day War in Slovenia in June 1991 was short, with few casualties. However, the war in Croatia in the latter half of 1991 brought many casualties and much damage. As the war eventually subsided in Croatia, the war in Bosnia and Herzegovina (BiH) started in early 1992. Peace would only come in 1995, after such events as the Srebrenica massacre, Operation Storm and the Dayton Agreement, which provided a temporary solution, though little was permanently resolved.
The economy suffered enormous damage in all of BiH and in the affected parts of Croatia. The Federal Republic of Yugoslavia also suffered economic hardship under internationally imposed economic sanctions. Many large historic cities were devastated by the wars, for example Sarajevo, Dubrovnik, Zadar, Mostar, Šibenik and others.
The wars caused large migrations of population. With the exception of the former republics of Slovenia and Macedonia, the settlement patterns and national composition of the population in all parts of Yugoslavia changed drastically, due to war but also to political pressure and threats.
Initial unrest in Kosovo did not escalate into war until 1999, when the Federal Republic of Yugoslavia (Serbia and Montenegro) was bombed by NATO for several months and Kosovo was made a protectorate of international peacekeeping troops.
Since the Bosniaks had no immediate refuge, they were arguably hardest hit by the ethnic violence. The United Nations tried to create safe areas for the Bosniak populations of eastern Bosnia, but in cases such as the Srebrenica massacre the peacekeeping troops (Dutch forces) failed to protect the safe areas, resulting in the massacre of thousands at the hands of Serb forces.
The war in Bosnia brought major ethnic cleansing of non-Serbs from the regions that today make up the Republika Srpska: throughout Bosanska Krajina (notably the significant minority population of Bosniaks and Croats in Banja Luka and the slight Bosniak majority in Prijedor), Bosnian Posavina (Croats as well as Bosniaks, from Brčko, Bosanski Brod, Doboj, Odžak and Derventa), eastern Bosnia (the Bosniak majority populations of Foča, Zvornik, Višegrad, Srebrenica and Žepa) and eastern Herzegovina (Trebinje). During the Bosniak-Croat conflict, Bosniaks were ethnically cleansed by Croats, and sometimes vice versa, in areas of central Bosnia and central and eastern Herzegovina (Mostar and Stolac). The war in Croatia started in 1991 and was caused by the rebellion of the Serbian population in Croatia, who wished to secede and, along with other Serb-occupied territories in Croatia and Bosnia and Herzegovina, unite with Serbia in a Greater Serbia. During the war in Croatia, from 1991 to 1995, around 600,000 Serbs left the southern and eastern parts of the country; they were not ethnically cleansed (as some ill-informed sources claim) but left prior to Croatian military operations. Preparations for the "evacuation" of Serbs from the so-called "Republic of Serbian Krajina" were carried out in late 1994 and were captured on videotape. Most fled in fear of Croatian retribution before the Croatian operations Flash and Storm in 1995. Isolated incidents, including rapes and murders of those who had chosen to stay, have been reported, but the UN, the ICTY and the international community showed little interest in the issue. Serbia is now home to more than 800,000 refugees from Croatia, Bosnia and Kosovo. Most of them are Serbs, but there are also Roma (who are in most cases settled in cardboard ghettos around Serbian cities, the most famous being Gazela, situated under the Gazela bridge in downtown Belgrade), Gorani, pro-Serbian Albanians and Montenegrins.
The Dayton Accords nominally ended the war in Bosnia and Herzegovina, fixing the borders between the two warring parties roughly along the lines established by the autumn of 1995. One immediate result of the population transfers following the peace deal was a sharp decline in ethnic violence in the region. See the Washington Post Balkan Report for a summary of the conflict, and the FAS analysis of former Yugoslavia for maps of ethnic population distribution.
A number of commanders and politicians, notably Serbia's former president Slobodan Milošević, were put on trial by the United Nations' International Criminal Tribunal for the former Yugoslavia for a variety of war crimes, including deportations and genocide. Croatia's former president Franjo Tuđman and Bosnia's Alija Izetbegović died before any charges could be brought against them at the ICTY; Slobodan Milošević died before his trial could be concluded.
A massive and systematic deportation of ethnic Albanians took place during the Kosovo War of 1999, with around 800,000 Albanians (out of a population of about 1.8 million) forced to flee Kosovo. This was quickly reversed at the war's end, but thousands of Serbs in turn fled to Serbia. The 20th century was one of the most violent centuries in recorded history (Kegley & Wittkopf, 2004): the globe has been captivated by major media stations relaying stories of death and destruction, and we have also seen the brutality and asymmetrical attributes of 'war', which encompass not only death, genocide, ethnic cleansing and combatant-on-combatant confrontation, but also the rape of men, women and children (mass rapes included) and the pillaging of towns, villages and homesteads with the aim of inflicting as much pain and trauma upon their unwilling participants as possible (Diken & Lausten, 2005). We were captivated by images of refugees streaming across regional borders looking for assistance from neighbouring countries (Judah, 2000). People who once had somewhere to live, a place to call home, were now displaced, begging authorities for food, water and basic healthcare (Judah, 2000). Furthermore, the civil and political ramifications of ethnic conflict, particularly violent conflict, can be linked to successive stages of transnational organized crime (Carment & James, 1998:3). With increased movement across borders by refugees seeking shelter and safety comes increased exploitation by criminal gangs seeking to expand their business: within a refugee exodus we may also see the blending of criminal elements trafficking in drugs, smuggled and trafficked human beings, weapons (conventional and potentially nuclear), currencies and products (Carment & James, 1998; T. Nikolic, 2006).
Greece has been a member of the European Union since 1981 and of NATO since 1952; it is also a member of the Eurozone and the Western European Union. Slovenia and Cyprus have been EU members since 2004, and Bulgaria and Romania joined the EU in 2007. Turkey first applied in 1963, and accession negotiations began in late 2005, although analysts believe 2015 is the earliest date the country can join the union, given the plethora of economic and social reforms it has yet to complete. Croatia and Macedonia also received candidate status in 2005, while the other Balkan countries have expressed a desire to join the EU at some date in the future.
On October 17, 2007, Croatia became a non-permanent member of the United Nations Security Council for the 2008-2009 term. Croatia expects NATO membership in 2008 and admission to the EU in 2009, along with Albania.
In 2006, Montenegro separated from the state of Serbia and Montenegro, also making Serbia a separate state. There were fears that this separation would lead to regional instability, but so far this has not been the case.
Kosovo declared its independence from Serbia on February 17, 2008.
Hexadecimal is the most common way of displaying the raw data sitting in a machine's memory, but if you are not familiar with it you might ask "What the hex..?"
What the hex?
Hexadecimal is the most common way of displaying the raw data sitting in a machine's memory or even stored on disk. You can be happily programming away in a high-level language without a care in the world when suddenly a serious error occurs and you are faced with a line showing you the address of the problem and the contents of the processor's registers, etc., all in glorious hex.
Even if you are not programming, the most usual format to dump a file in is as lines and lines of hex. Back in the dark old days of assembly language programming you had to be familiar with hex as well as with binary and occasionally octal.
If you haven't mucked about with assembler or machine architecture, or if you fell asleep in the first term of the computer course, then you might think hex is just a programmer's curse. To make sure that you know better, this is a short and highly practical guide to hexadecimal addressing and data. If you are no good at math, don't panic, because it is very simple and after a few minutes' practice it becomes almost second nature.
If only we had all been born with 16 fingers!
Not only would typing have been a faster activity but we might have counted in hex naturally.
Counting is a matter of using symbols to represent each number. For example, 0, 1, 2, 3 and so on. Using this simple system the problem is that you quickly run out of symbols. You need a symbol for each possible quantity of things.
The solution is to use a place value counting method.
In decimal we are all very happy with this method - you count up to 9 and then start again after recording the fact that you have got to 10 once by writing a 1 to the left.
In school we are taught that each place to the left represents 10 times more than the previous digit location. That is, the first place represents units, the second lots of 10, the third lots of 100 and so on. This means that a number like 123 is really:
1*100 + 2*10 + 3*1
or one lot of 100, two lots of 10 and three lots of 1.
It is generally supposed, and I can think of no better explanation, that we count in lots of 10 because that's how many fingers we have.
All of this is so easy that we tend to use it intuitively and without being able to explain what is going on. This makes the shock of changing to a different counting base even more traumatic.
Hexadecimal uses 16 as the base - Hexa=6 and decimal=10.
To put this another way the hex counting system uses `lots of 16' in the same way that the decimal system uses `lots of 10'.
The first problem to be solved is how to count up to the first lot of 16 as there are only ten digits - 0 to 9. The solution is that we use the letters A, B, C, D, E and F to supplement the meager inheritance that having only ten fingers has bestowed upon us.
This means that counting in hex up to 15 goes:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
The only tricky bit is remembering not to say `ten' after 9.
Now what happens after counting to F?
This is the same question as what happens after counting to 9 in decimal.
The answer is that you write a 1 to the left to indicate that you have counted one lot of 16 and then carry on counting. That is, after F comes 10, which isn't ten (decimal) and shouldn't really be said as `ten' but as `one nought' or `one zero'.
The next question is what comes after 10 in hex?
The answer is 11, then 12 and so on all the way up to 1F.
At this point we have another lot of 16 and we start counting again at 20 and so on. The only danger point comes when you reach 9F, where the temptation is to accidentally make the next value 100. It isn't, because in hex A comes after 9, and so the next value is A0.
You only reach 100 in hex after counting all of the way to FF.
Fun isn't it?
Once you have learned to count to 100 hex there isn't anything more to see. You just keep counting up to F and adding one to the place to the left until it reaches F and so on.
Many programming languages use the convention, first used in C, that a hex number is indicated by starting with 0x. So if you have counted to FF you can write this as 0xFF.
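If you want to check your hex counting, it is easy to experiment in any language that understands the 0x prefix. The following short Python sketch is my own illustration, not part of the original article; it prints a few of the values discussed above in both bases:

```python
# Hex literals use the 0x prefix; printed normally, they appear in decimal.
for value in [0x9, 0xA, 0xF, 0x10, 0x1F, 0x20, 0x9F, 0xA0, 0xFF, 0x100]:
    print(f"hex {value:>3X} = decimal {value}")
```

Note how the output confirms the tricky cases: 9F is followed by A0, and FF by 100.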
Remember: the hex place value system works with lots of 16 and not lots of 10.
Being able to count in hex is a great way to understand how the system works, but it isn't much use when you are confronted with a message like the error report mentioned earlier, full of hex addresses and register contents.
To really feel at home with hex you have to be able to understand it in a slightly different way. You certainly have to be able to convert hex to decimal and vice versa but there is something deeper.
First, though, how do you convert hex to decimal?
There are a number of standard algorithms that can be used to convert between different number bases, but I have to admit that I prefer a more primitive approach. The place values used in hex are:
1, 16, 256, 4096, 65536 and so on (each place being 16 times the previous one)
and you can use these to work out the equivalent decimal value quite easily.
For example, AD45 is simply:
A*4096 + D*256 + 4*16 + 5*1
to express it in a mix of hex and decimal, or moving entirely to decimal:
10*4096 + 13*256 + 4*16 + 5*1
which works out to:
40960 + 3328 + 64 + 5 = 44357
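This place-value arithmetic is mechanical enough to hand to a computer. Here is a minimal Python sketch of the same approach, an illustration of the method just described rather than code from the article:

```python
HEX_DIGITS = "0123456789ABCDEF"

def hex_to_decimal(text):
    """Convert a hex string to an integer using place values (lots of 16)."""
    total = 0
    for ch in text.upper():
        total = total * 16 + HEX_DIGITS.index(ch)  # shift one place left, add digit
    return total

print(hex_to_decimal("AD45"))  # 44357, matching the worked example
```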
To convert from decimal to hex is just a little more complicated in that you have to discover how many lots of 4096, 256 and 16 there are in a number. For example, 44357 contains 10 lots of 4096 because:
44357/4096 = 10.83
The remainder, i.e. 44357 - 10*4096,
is 3397, and this contains 13 lots of 256 because:
3397/256 = 13.27
and so on to discover that the remainder contains 4 lots of 16 and 5 units. Writing 10 lots of 4096, 13 lots of 256, 4 lots of 16 and 5 units in standard hex gives AD45.
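The "how many lots of 4096, 256 and 16" procedure is just repeated division with remainders, so it too is easy to automate. A minimal Python sketch of the same idea (again my illustration, not the article's code):

```python
HEX_DIGITS = "0123456789ABCDEF"

def decimal_to_hex(n):
    """Convert a non-negative integer to a hex string by repeated division by 16."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, 16)      # peel off the units place
        digits.append(HEX_DIGITS[remainder])
    return "".join(reversed(digits))      # the last remainder is the highest place

print(decimal_to_hex(44357))  # AD45
```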
Fortunately it isn't often that you have to convert decimal to hex!
A recent molecular analysis of ancestry across Latin America has revealed marked differentiation between regions and demonstrated a “genetic continuity” between pre- and post-Columbian populations. This study, published March 21 in the open-access journal PLoS Genetics, provides the first broad description of how the genome diversity of Latin American populations has been shaped by the colonial history of the region.
The research involved the collaboration of teams at universities across Latin America, the US and Europe, led by Dr. Andres Ruiz-Linares from University College London.
The European colonization of the American continent, initiated in the late fifteenth century, brought with it not only social and political change, but also a dramatic shift from a Native American population to a largely mixed population. The genetic traces of this turbulent period in history are only now beginning to be explored with the molecular tools provided by the human genome project.
The researchers examined genetic markers across the human genome in hundreds of individuals drawn from 13 mestizo populations in seven Latin American countries. The picture obtained is one of great variation in ancestry within and across regions, linked to the pattern of colonization. It also appears that it was mostly Native and African women, together with European men, who contributed genes to subsequent generations.
Interestingly, despite the fact that the European colonization occurred centuries ago, Latin Americans still preserve the genetic heritage of the local (in many cases now extinct) Native populations that mixed with the immigrants. This connection with the past has not been erased despite the current high mobility of individuals. Furthermore, it brings to life the “brotherhood” of each Latin American population to the Native populations that currently inhabit different countries.
In addition to providing a window into the past, the authors hope that these analyses will contribute to the design of studies aimed at identifying genes for diseases with different frequency in Native Americans and Europeans. Researchers have so far focused on populations from areas settled mainly by Native Americans and Europeans. The genomic diversity of populations across regions in the Americas with large African immigration is still mostly unexplored.
Citation: Wang S, Ray N, Rojas W, Parra MV, Bedoya G, et al. (2008) Geographic Patterns of Genome Admixture in Latin American Mestizos. PLoS Genet 4(3): e1000037. doi:10.1371/journal.pgen.1000037 (www.plosgenetics.org/doi/pgen.1000037)
Source: Public Library of Science
Aerobic activity or endurance activity is any activity that raises your heart rate and keeps it up for a while.
This increases the amount of oxygen delivered to your heart and muscles, which allows them to work longer.
How often and how long?
Experts say to do either of these to get and stay healthy:
- Moderate aerobic activity for at least 2½ hours a week. Moderate activity means things like brisk walking, brisk cycling, or shooting baskets. But any activity that makes your heart beat faster—including daily chores—counts as moderate activity.
- Vigorous aerobic activity for at least 1¼ hours a week. Vigorous activity means things like jogging, cycling fast, cross-country skiing, or playing a basketball game. You breathe harder and your heart beats much faster with this kind of activity.
You can choose to do one or both types of activity. And it's fine to be active in several blocks of 10 minutes or more throughout your day and week. Do what works best for you. For example, you could do moderate activity twice a week for at least 1 hour and 15 minutes at a time. Or you could do 10 minutes 3 times a day, 5 days a week.
You could do vigorous activity 15 minutes a day, 5 days a week. Or you can try to do it once a week for 1¼ hours, or for 25 minutes a day, 3 days a week.
Moderate exercise is safe for most people, but it's always a good idea to talk to your doctor before starting an exercise program.
Start by doing a short warm-up, such as walking or riding a stationary bike. And stretch briefly.
Experts recommend that teens and children (starting at age 6) do moderate to vigorous activity at least 1 hour every day. And 3 or more days a week, what they choose to do should:
- Make them breathe harder and make the heart beat much faster.
- Make their muscles stronger. For example, they could play on playground equipment, play tug-of-war, lift weights, or use resistance bands.
- Make their bones stronger. For example, they could run, do hopscotch, jump rope, or play basketball or tennis.
It’s okay for them to be active in smaller blocks of time that add up to 1 hour or more each day.
How hard should you work?
To get the health benefits, you need to do your activity at a moderate pace, at least. Here's an easy way to know if you're working hard enough:
- If you can't talk and do your activity at the same time, you are exercising too hard.
- If you can sing while you do your activity, you may not be working hard enough.
- If you can talk while you do your activity, you are doing fine.
One way to know how hard you should exercise is to find your target heart rate. Being active within the range of your target heart rate not only helps you keep your heart and lungs healthy but also helps you get or stay fit. As a guideline, use the Interactive Tool: What Is Your Target Heart Rate?
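The article points to an interactive tool rather than giving the formula, but a common rule of thumb (only a rough approximation, and not medical guidance) estimates maximum heart rate as 220 minus your age and takes moderate activity to be roughly 50% to 70% of that maximum. A small Python sketch of this assumed rule of thumb:

```python
def target_heart_rate_range(age, low=0.50, high=0.70):
    """Rough target zone based on the common 220-minus-age estimate."""
    max_hr = 220 - age                     # approximate maximum heart rate
    return round(max_hr * low), round(max_hr * high)

low_bpm, high_bpm = target_heart_rate_range(40)
print(f"Approximate moderate zone: {low_bpm}-{high_bpm} beats per minute")  # 90-126
```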
The more aerobic activity you do, the healthier your heart will be. It won't beat as fast as it did before, even when you give the same amount of effort. This is a sign that you are becoming more fit.
The more aerobic activity you do, the more you'll be able to do without getting out of breath or feeling like your heart is pounding. You will be able to do activities such as playing with children, doing housework or yard work, or hiking without getting tired as quickly.
Walking for health
One of the best and easiest aerobic activities is brisk walking. You don't need special equipment, and you can do it almost anywhere.
A pedometer, which you can buy at a sporting goods store, can help you keep track of your activity. A pedometer will count the number of steps you take each day and help you set goals to walk more. Some people prefer letting the pedometer count the steps they walk, rather than trying to keep track of how many minutes they walk.
A good goal is to walk a total of 10,000 steps a day. Try wearing your pedometer every day for 1 week to see your usual number of steps. Then increase the number by up to 2,000 steps a day until 10,000 steps is comfortable for you. You can increase your walking in simple ways. These suggestions can get you started, and you can probably think of more ways to add more steps to your everyday activities.
- Park farther than usual from your workplace (or get off the bus or subway before your stop, and walk the rest of the way).
- Take the stairs rather than the elevator for one or two floors.
- Walk a lap inside the grocery store before you start shopping.
- Walk instead of drive for short trips. Walk to school, work, the grocery store, a friend's house, or a restaurant for lunch.
To keep walking interesting, find a new area to walk in. Allow yourself some extra time in case this walk takes longer than your usual route. Because new areas may pose some safety concerns, try a new area only during daylight, and choose well-populated areas, such as:
- Around your neighborhood. See some places you rarely see from your car. Meet some neighbors.
- Around a whole park. Try getting off the sidewalk. For example, walk around a baseball or soccer field.
- A mall.
- A track at a local school.
Walk at various times of day. Use "transition times" (times between activities when you don't have to be anywhere) to get out and walk, such as:
- After work, when you usually might sit in front of the TV.
- First thing in the morning. See a part of the day you might often miss.
- During your lunch or coffee break. Ask a coworker or a friend to join you for a walk. This can be a great energy boost.
Other aerobic activities
Other aerobic activities include:
- Aerobic classes, including step aerobics and spinning (indoor cycling) classes.
- Running or jogging.
- Cross-country skiing.
- Daily activities such as walking the dog or actively playing with children. These need to be done for at least 10 minutes a session at a moderate intensity.
- Water aerobics (which is especially good for older adults, those who are overweight, and those with joint problems).
- Sports such as tennis, basketball, or soccer.
Last Updated: August 13, 2009
Author: Cynthia Tank
The phenomenon of binocular rivalry occurs when two distinctly different images are presented to the two eyes simultaneously. The observer, though, is aware of only one image at a time. There is random switching between the images from each eye, creating a rivalry between the two eyes. Sorry mum, your figure of speech isn’t quite right!
To create binocular rivalry, each eye has to be presented with a different image. This can be achieved in several ways, such as anaglyph glasses, mirror stereoscopes or prism glasses. For simplicity, I will be using anaglyph glasses, commonly known as 3D glasses.
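If you want to try this yourself, the two-different-images setup can be produced in software. The Python sketch below is my own illustration (the file names are placeholders); it uses the Pillow imaging library to build a red/cyan anaglyph, so that red/cyan glasses deliver a different image to each eye:

```python
from PIL import Image  # the Pillow imaging library

def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
    """Red/cyan anaglyph: the left image drives red, the right drives green and blue."""
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB").resize(left.size)
    r, _, _ = left.split()     # keep only the red channel of the left-eye image
    _, g, b = right.split()    # keep the green and blue channels of the right-eye image
    Image.merge("RGB", (r, g, b)).save(out_path)

make_anaglyph("left_eye.png", "right_eye.png")  # hypothetical input images
```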
In order to understand binocular rivalry, let me first explain how the brain processes images. Each of our eyes normally sees a slightly different image, and the brain fuses these images into one. However, when presented with two distinctly different images, the brain does not merge them; instead, it suppresses one image and makes the other dominant. This process continues indefinitely.
Visual attention flickers between the two images, causing binocular rivalry. When the brain processes two distinctly different images, signals run through the optic nerves to the brain, which processes each image separately. The group of neurons processing the dominant image inhibits the group processing the suppressed image. After a while the system fatigues, and the suppressed image becomes dominant. This switching continues indefinitely.
Now, I hope that you have some understanding of binocular rivalry. I know what you are thinking: what is the use of this? Though there is no direct application of the phenomenon, it facilitates understanding of some of the complex mechanisms of the brain. Let me go through a few recent studies; there is no better way to explain its usefulness.
We often find it hard to adjust to changes that we are faced with throughout our lives, but did you know our brain is adapting to change quicker than we think? Our brain is constantly changing, every minute, every second of our lives – adapting to the environment. This amazing ability, called neuroplasticity, is what makes it unrivalled in comparison to machines and computers.
Let me explain how the Pisa Vision Lab utilized the phenomenon of binocular rivalry to show that the brain can adapt to change and make new connections within short timespans. They measured binocular rivalry in normal settings and again after patching one eye for two and a half hours. The results showed that the patched eye had become more dominant than the other.
The concentration of a neurotransmitter, gamma-aminobutyric acid (GABA for short), may trigger this neuroplasticity.
The research concluded:
• The change in resting GABA strongly correlates with deprived eye perceptual boost
• A decrease in resting GABA triggers homeostatic plasticity in adult primary visual cortex
This is a classic example of how researchers use the phenomenon of binocular rivalry to test some of the amazing abilities of the brain such as neuroplasticity.
You may be wondering why I am meditating. What has this got to do with binocular rivalry? Well, consciousness is the state of being aware of and alert to your surroundings. Among neuroscientists there has been contention around the precise identification of the neural correlates of consciousness.
The study of binocular rivalry attempts to shed light on this question. A recent study by Olivia Carter and Jack Pettigrew explored whether focused meditation can lead to a higher level of control and stability of mental processes. The results were quite interesting: meditation could actually slow down or even stop binocular rivalry, suggesting that fluctuations in visual perception can be stabilized through focused meditation. This stabilization of mental processes is a step forward in our understanding of the neural correlates of consciousness.
The phenomenon of binocular rivalry can be used as a basis for understanding neural processing in many different fields. Researchers use the concept of binocular rivalry to understand some of the complex mechanisms of the brain. So, not all rivalries are bad; sometimes we can learn from them! So… does that mean we can fight now?
A lighthouse is a tower, building, or framework designed to emit light from a system of lamps and lenses or, in older times, from a fire, and used as an aid to navigation for pilots at sea or on inland waterways.
Lighthouses are used to mark dangerous coastlines, hazardous shoals and reefs, and safe entries to harbors and can also assist in aerial navigation. Once widely used, the number of operational lighthouses has declined due to the expense of maintenance and replacement by modern electronic navigational aids.
Where dangerous shoals are located far off a flat sandy beach, the prototypical tall masonry coastal lighthouse is constructed to assist the navigator making a landfall after an ocean crossing.
Often these are cylindrical to reduce the effect of wind on a tall structure, such as Cape May Light.
Smaller versions of this design are often used as harbor lights to mark the entrance into a harbor, such as New London Harbor Light.
Where a tall cliff exists, a smaller structure may be placed on top such as at Horton Point Light.
Sometimes, such a location can be too high, as along the west coast of the United States. In these cases, lighthouses are placed below the clifftop to ensure that they can still be seen at the surface during periods of fog, as at Point Reyes Lighthouse.
Another victim of fog was Point Loma Light (old) which was replaced with a lower lighthouse, Point Loma Light (new).
As technology advanced, prefabricated skeletal iron or steel structures tended to be used for lighthouses constructed in the twentieth century. These often have a narrow cylindrical core surrounded by an open lattice work bracing, such as Finns Point Range Light.
Sometimes a lighthouse needs to be constructed in the water itself. Wave-washed lighthouses are masonry structures constructed to withstand water impact, such as the St. George Reef Light off California.
In shallower bays, screw pile ironwork structures are screwed into the seabed and a low wooden structure is placed above the open framework, such as Thomas Point Shoal Lighthouse.
As screw piles can be disrupted by ice, in northern climates steel caisson lighthouses such as Orient Point Light are used.
Orient Long Beach Bar Light (Bug Light) is a blend of a screw pile light that was later converted to a caisson light because of the threat of ice damage.
In waters too deep for a conventional structure, a lightship might be used instead of a lighthouse.
Most of these have now been replaced by fixed light platforms (such as Ambrose Light
) similar to those used for offshore oil exploration.
What Is Solar Engineering?
Solar engineering is a field that focuses on the design, development, and installation of solar energy systems. It combines principles from various disciplines such as physics, electrical engineering, and materials science to harness the power of the sun and convert it into usable energy. Solar engineers work on a range of projects, from small-scale residential systems to large-scale commercial and utility-scale installations. Their goal is to maximize the efficiency and effectiveness of solar energy systems while considering factors such as cost, environmental impact, and long-term sustainability.
Solar engineering involves the use of photovoltaic (PV) technology, which converts sunlight directly into electricity. PV cells, made from semiconductor materials such as silicon, absorb photons from the sun’s rays and generate an electric current. Solar engineers design and optimize PV systems to maximize the amount of sunlight that can be converted into electricity. They consider factors such as the positioning and orientation of solar panels, the tilt and angle of the panels, and the use of tracking systems to follow the sun’s movement throughout the day.
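To make these optimization targets concrete, here is a back-of-the-envelope estimate of a PV system's daily output. All the figures (panel area, efficiency, "peak sun hours", performance ratio) are assumed placeholder values for illustration, not numbers from this article:

```python
def daily_pv_energy_kwh(area_m2, efficiency, peak_sun_hours, performance_ratio=0.75):
    """Estimate daily PV output in kWh.

    peak_sun_hours folds irradiance and panel orientation into a single number
    (hours of equivalent 1 kW/m^2 sunlight); performance_ratio approximates
    wiring, inverter and temperature losses.
    """
    return area_m2 * efficiency * peak_sun_hours * performance_ratio

# Hypothetical 10 m^2 array of 20%-efficient panels at a site with 5 peak sun hours:
print(f"{daily_pv_energy_kwh(10, 0.20, 5):.1f} kWh per day")  # -> 7.5 kWh per day
```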
In addition to PV technology, solar engineering also encompasses other solar thermal systems that utilize the sun’s heat to generate electricity or provide hot water and space heating. These systems typically use mirrors or lenses to concentrate sunlight onto a receiver, which heats a working fluid or generates steam to power a turbine. Solar engineers are responsible for the design and optimization of these systems, ensuring that they operate efficiently and safely.
Solar engineering also involves the integration of solar energy systems with existing electrical grids. Solar engineers must consider how to connect solar power plants and distributed solar installations to the grid, ensuring that the electricity generated can be effectively transmitted and utilized. They also work on developing energy storage solutions to address the intermittent nature of solar power and enable the use of solar energy during periods of low or no sunlight.
FAQs about Solar Engineering:
1. What qualifications are required to become a solar engineer?
– A bachelor’s degree in engineering, preferably in electrical or mechanical engineering, is typically required. Some universities also offer specialized degrees or certifications in solar engineering.
2. What skills are important for a solar engineer?
– Strong knowledge of electrical systems, renewable energy technologies, and computer-aided design (CAD) software is crucial. Problem-solving, analytical thinking, and communication skills are also important.
3. What career opportunities are available in solar engineering?
– Solar engineers can work in various sectors, including solar energy companies, engineering firms, research institutions, and government agencies. They can be involved in design, project management, system installation, or research and development.
4. How does solar engineering contribute to sustainability?
– By harnessing the power of the sun, solar engineering reduces dependence on fossil fuels, mitigates greenhouse gas emissions, and promotes clean energy production. It plays a crucial role in transitioning towards a more sustainable and environmentally friendly energy system.
5. What are the challenges in solar engineering?
– Some challenges include the high initial costs of solar installations, the intermittent nature of solar energy, and the need for efficient energy storage solutions. Additionally, solar engineering must address aesthetic concerns and integrate solar systems into existing infrastructure.
6. How does solar engineering impact the economy?
– Solar engineering contributes to job creation in the renewable energy sector. It also reduces reliance on imported energy sources, thereby improving energy security and potentially lowering energy costs for consumers.
7. What is the future of solar engineering?
– Solar engineering is expected to continue growing as renewable energy becomes increasingly important. Advancements in technology, such as higher efficiency PV cells and improved energy storage systems, will further enhance the viability and scalability of solar energy.
In conclusion, solar engineering is a multidisciplinary field that focuses on designing, developing, and optimizing solar energy systems. Solar engineers play a crucial role in harnessing the power of the sun to generate electricity and provide heating solutions. With the increasing demand for clean and sustainable energy, solar engineering is set to play a pivotal role in shaping the future of our energy systems.
The radial artery is a major artery in the human forearm. It lies close to the surface of the underside of the forearm; when the palm of the hand is pointing upwards, so is the radial artery. The radial artery supplies the arm and hand with oxygenated blood from the lungs. Because of its size and its proximity to the surface of the arm, it is the most common artery used to measure a patient's pulse. The pulse is checked at the wrist, where the radial artery is closest to the surface. The radial artery is also commonly used when drawing arterial blood for arterial blood gas (ABG) measurement. This is done for three reasons: firstly, it is not the only supplier of blood to the arm; if the radial artery is damaged, the ulnar artery will take over. Secondly, it is easy to access. Thirdly, the radial artery is a superficial artery, which means that damage is easily repaired and rarely endangers the patient.
anatexis The partial or incomplete melting of a rock in response to an increase in temperature (at constant pressure), or a drop in pressure (at constant temperature). Melting takes place along grain boundaries, and the melt can either be extracted from the partially molten rock system, or can remain within the system. Typical examples of anatexis would be the generation of granitic melts (see GRANITE) by partially melting aluminous crustal rocks, and the generation of basalts by partially melting mantle peridotite.
Allosaurus is one of the best known large carnivorous dinosaurs from North America and Europe. It hunted during the Jurassic, around 150 million years ago, and was the largest and most abundant predator in its ecosystem.
- History: Allosaurus fragilis was one of the first large meat-eating dinosaurs discovered and named in North America. Bones now regarded as probably belonging to Allosaurus were described in 1870 under the name ’Antrodemus’, but the first definitive bones of Allosaurus were found in Cañon City, Colorado, and described in 1877 by the eminent American paleontologist Othniel Charles Marsh of Yale University. Since then, the fossilized remains of at least 60 Allosaurus individuals have been found and described from across the USA, including some complete skeletons.
- Scientific Name: Allosaurus fragilis
- Characteristics: Allosaurus was a muscular carnivore with large jaws filled with sharp curved teeth. It had a low triangular projection above each eye, which our Allosaurus toy accurately reflects. It walked on two long legs with three-toed feet and sharp curved claws. It had short arms with three-fingered hands, and a long counterbalancing tail. The largest Allosaurus reached 32 feet long, the size of a minibus.
- Size: This Allosaurus toy figure is 7.5 inches long and 3.75 inches high.
- The Allosaurus is part of the Wild Safari® Prehistoric World collection
- All these products are non-toxic and BPA free
INTRODUCTION: Article Review: Human Rights in Lebanon
Article (4) of the Universal Declaration of Human Rights stipulates that “No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.”
By definition, a slave is a person who belongs, just as property does, to a person or to a group of persons. This means that a slave has no rights, no liberty and no independent existence. It also means that the owner or owners can sell the slave for money and order him or her to perform any act or engage in any behavior. Slaves simply do not have the right to say no to their masters.
The objective of this article is to emphasize the importance of human life, the human body, and the human being as a free individual who should not be enslaved by anyone. The human being is born free and should remain free. No person or power should have the ability to confiscate human freedom in return for commercial gain.
Another objective of this article is to recognize the equality of all human beings regardless of how they were born, where they were born and what they do in life. Slavery is a degradation of humanity and cannot be accepted in a world where the human being should be of greatest value.
We may imagine that there are slaves in parts of the world such as Africa or Asia, but who can imagine that slavery also exists in Lebanon?
Case Study: Young Slaves in Tripoli
Born to a very poor family in Tripoli, Fatima was given over by her family to a rich household, where she would work as a maid in return for her food and shelter. The girl was virtually sold off by her impoverished parents, and her new masters did not pay her any salary. For over a year, the child was treated as a slave. She was not allowed to leave the house except in the company of her mistress. She was reduced to servitude, having to carry out all kinds of household work from early morning until late at night. She was not paid any money, since she was owned by the family.
Nor was Fatima sent to school like children of her age or given any chance to improve her life. In fact, her status as a slave eventually led to all kinds of horrifying and inhuman torture being inflicted upon her. When she was taken to hospital for obvious physical abuse, the medics found that the child had been subjected to severe and inhuman beatings, including the stubbing out of lit cigarettes on her skin. The scars had accumulated over the months, and the child was suffering from serious malnutrition and symptoms of nervous breakdown.
Another case was reported in late November 1999: that of another child, Khodr Kanjo, also from Tripoli. Khodr was taken to hospital in a critical condition after severe beating by his master and mistress. Like Fatima, Khodr was enslaved by a richer family, for whom he was to work to earn his food and shelter but by whom he was, in essence, owned body and soul. The medical inspection of Khodr's body showed that he had been subjected to brutal treatment for several months, in addition to starvation and various forms of abuse. It was also obvious that the child was overworked and suffering from long hours of labor that would have serious implications for his physical development in the future.
The cases of Khodr and Fatima attracted a lot of attention because both of them are Lebanese and very young. But there are many other cases of slavery in Lebanon, especially the enslavement of foreigners. Workers from Sri Lanka, Africa and various parts of the world are brought to Lebanon, their passports held in the custody of the employer and their residence restricted completely to the household. They spend their days and nights working as slave servants, and the money they are paid hardly covers their living.
All these practices are abuses of human rights. Slavery is taking a new turn in Lebanon, but in principle it remains slavery, in which human dignity and pride are assaulted, stepped on, and ignored. The human being becomes nothing but an asset expected to carry out as much work as possible at the minimum possible expense. And if this machine fails or refuses to work, torture and brutality are used to make it work again. How much longer can the Lebanese government turn a deaf ear to such violations of human rights?
Lake Water Quality
Monitoring water quality in lakes and reservoirs is key to maintaining safe water for drinking, bathing, fishing, and agriculture and aquaculture. Long-term trends and short-term changes are indicators of environmental health and of changes in the water catchment area. Directives such as the EU's Water Framework Directive or the US EPA's Clean Water Act require information about the ecological status of all lakes larger than 50 ha. Satellite monitoring helps to cover a large number of lakes and reservoirs systematically, reducing the need for monitoring infrastructure (e.g. vessels) and effort.
The Lake Water Products (lake water quality, lake surface water temperature) provide a semi-continuous observation record for a large number (nominally 4,200) of medium and large-sized lakes, selected according to the Global Lakes and Wetlands Database (GLWD) or because they are of specific environmental monitoring interest. In addition to the lake surface water temperature, which is provided separately, this record consists of three water quality parameters:
- The turbidity of a lake describes water clarity, that is, how deeply sunlight can penetrate the lake. Turbidity often varies seasonally, with both river discharge and the growth of phytoplankton (algae and cyanobacteria).
- The trophic state index is an indicator of the productivity of a lake in terms of phytoplankton, and indirectly (over longer time scales) reflects the eutrophication status of a water body (a worked sketch follows this list).
- Finally, the lake surface reflectances describe the apparent colour of the water body, intended for scientific users interested in further development of algorithms. The reflectance bands can also be used to produce true-colour images by combining the visual wavebands.
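The trophic state index itself can be defined in several ways; the Copernicus lake products use their own retrieval algorithms, which are not reproduced here. As a minimal illustration of the concept only, the sketch below implements Carlson's classic (1977) index, which maps chlorophyll-a or Secchi disk depth onto a common 0-100 scale. The class boundaries in `trophic_class` are the conventional textbook ones, not thresholds taken from this product.

```python
import math

def tsi_chlorophyll(chl_ug_per_l: float) -> float:
    """Carlson (1977) trophic state index from chlorophyll-a in ug/L."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def tsi_secchi(secchi_m: float) -> float:
    """Carlson (1977) trophic state index from Secchi disk depth in metres."""
    return 60.0 - 14.41 * math.log(secchi_m)

def trophic_class(tsi: float) -> str:
    """Conventional boundaries: <40 oligotrophic, 40-50 mesotrophic,
    50-70 eutrophic, >70 hypereutrophic."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi <= 70:
        return "eutrophic"
    return "hypereutrophic"

# A lake with 12 ug/L chlorophyll-a scores roughly 55, i.e. eutrophic.
print(trophic_class(tsi_chlorophyll(12.0)))
```

Either proxy can be used on its own; when both are available, comparing the two scores gives a quick consistency check on the optical data.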
A ruptured spleen is a life threatening medical emergency. It requires immediate medical attention. While surgery is not always necessary, timely treatment is critical.
The spleen is a small organ in the upper left part of your abdomen. It plays an important role in fighting infection, supporting immunity, and cleaning the bloodstream of bacteria and old blood cells.
Occasionally, the spleen can be injured. It can even rupture or tear. In the United States, about 40,000 people experience a spleen injury each year.
Facts about the spleen:
- The spleen is located in the upper left portion of the abdomen, just behind the ribs and stomach.
- The spleen helps filter out cellular waste, including old blood cells.
- It also helps fight infections and provides immunity support.
- The spleen can become enlarged if a person is sick or injured.
- A ruptured spleen occurs when there’s a break or tear on the spleen’s surface.
A ruptured spleen is usually the result of one of two things:
- a forceful blow or traumatic injury to the abdomen, or
- an enlarged spleen that tears
Even a minor injury can cause small tears or bruising to the spleen. But a severe injury could result in a break on the spleen’s surface — a rupture.
An enlarged spleen can make a tear or break in the spleen’s surface more likely. An enlarged spleen is often due to an existing disease or condition.
If you have an enlarged spleen, even minor trauma or injury could lead to a rupture.
Can the spleen rupture on its own?
A ruptured spleen is rare, but a spontaneous spleen rupture is even rarer. A spontaneous rupture occurs without any physical trauma or injury.
In most cases, an enlarged spleen is responsible for a spontaneous rupture or tear. Infections and certain medical conditions, such as malaria and lymphoma, can cause blood cells to accumulate in the spleen. As this happens, the spleen grows larger, putting stress on the surface of the organ.
Rarely, this pressure may be too much, and the surface of the spleen can break.
A ruptured spleen is a medical emergency. You should seek immediate medical attention if you have symptoms of a ruptured spleen.
The spleen is a complex matrix of blood vessels and blood-filled compartments (splenic cords and venous sinuses). In the event of a tear or rupture, internal bleeding is possible. This can be life threatening.
Even if you don't have signs or symptoms of internal bleeding, your condition could change quickly. You could go from stable to gravely ill within 24 to 48 hours if a ruptured spleen is not properly treated.
Symptoms of a ruptured spleen include:
- pain in the upper left abdomen
- tenderness in the upper left abdomen, especially when pressed
- left shoulder pain, especially if you’ve experienced no obvious trauma (Kehr’s sign)
- lightheadedness or dizziness
The symptoms of a ruptured spleen may not be obvious until significant internal bleeding has already occurred.
That’s why it’s important to get immediate medical attention if you have any of these symptoms, especially after an injury or if you’ve been previously diagnosed with an enlarged spleen.
If your doctor suspects a ruptured spleen, they will likely order a CT scan or ultrasound to confirm the diagnosis.
A CT scan can show any injury or damage to the spleen. It can also detect internal bleeding and a possible hematoma, or collection of blood under the spleen’s surface.
An ultrasound may also be helpful in an emergency situation. CT scans require patients to be more stable, but an ultrasound can be used quickly to rule out other issues.
If neither of these tests is able to confirm a diagnosis, your doctor may order a laparotomy. This surgical procedure allows a surgeon to explore the abdominal cavity directly, and its findings may help determine what's causing specific symptoms.
Treatment for a ruptured spleen
Treatment for a ruptured spleen typically falls into two categories: several days of intensive hospital care, or surgery.
If the injury or damage to the spleen is too great, or if doctors are unable to stop the internal bleeding, a splenectomy is often the treatment of choice. In 10% to 15% of patients with a blunt spleen injury, surgical spleen removal is necessary.
However, in some cases, a more conservative approach is taken. Several days of hospital care and regular testing will be necessary to make sure the injury to the spleen doesn’t worsen.
Recovering from a ruptured spleen is not fast. You’ll likely need several days of hospital care before you’ll be released. Then, you’ll have regular follow-ups to monitor for changes or signs of secondary tears.
One study found that most people with higher grade spleen injuries (including tears) were completely healed within 75 days of their injury.
However, a few factors can slow down this timeframe. Underlying health conditions can impair recovery, as can additional trauma or injury.
A ruptured spleen can also worsen within days or weeks of the initial injury and may rupture again. That can complicate and slow down recovery, too.
After a splenectomy, your doctor may recommend a series of vaccinations. Without a spleen, your body may not be able to fight off infection as well as it did before, so these vaccinations will be important for preventing serious illness.
These vaccinations include:
- influenza vaccine: helps protect against the seasonal flu
- Tdap vaccine: helps protect against tetanus, diphtheria, and whooping cough
- Hib vaccine: helps protect against bacterial meningitis
- zoster vaccine: helps protect against shingles
- meningococcal vaccine: helps protect against meningitis
A ruptured spleen can be a life threatening condition and requires immediate attention. Symptoms most often include pain or tenderness in the upper left abdomen, pain in the left shoulder, dizziness, and confusion.
In most cases, a ruptured spleen is caused by blunt force trauma. This can be due to a car accident, a fall, a physical blow to the abdomen, or a sports injury. Less frequently, it can be caused by an enlarged spleen.
With proper treatment and follow-up care, recovery from a ruptured spleen is typically excellent. Some people may be able to recover from a ruptured spleen with hospital care and recovery time. Others may require surgery to remove the damaged spleen.
If you've been injured and have symptoms of a ruptured spleen, seek emergency care right away.
For the past few months, most of us have been staying at home in an effort to practice social distancing. Many businesses have switched to working from home, and people have generally stopped traveling. The goal of social distancing has been to "flatten the curve" of COVID-19, controlling the number of people who are sick at any given time. Over this period, scientists have found that social distancing has also affected the rate at which climate change is progressing.
Social distancing and carbon emissions
A recent study found that daily global carbon emissions in April 2020 were 17% lower than the average daily emissions in 2019. This means that average daily emissions decreased by 18.7 million metric tons of carbon compared to last year, putting our carbon emissions roughly where they were in 2006.
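As a quick sanity check, the two figures above are mutually consistent: a 17% drop amounting to 18.7 million metric tons per day implies a 2019 baseline of roughly 110 million metric tons per day. A minimal sketch of the arithmetic (the variable names are ours; the units follow the article's phrasing):

```python
# Back-of-the-envelope check of the quoted figures.
drop_fraction = 0.17   # April 2020 emissions were 17% below the 2019 average
drop_per_day = 18.7    # million metric tons per day, as quoted above

implied_2019_baseline = drop_per_day / drop_fraction
print(f"Implied 2019 average: ~{implied_2019_baseline:.0f} million t/day")
# -> ~110 million metric tons per day
```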
Why are carbon emissions lower?
The areas that produce the most emissions are China, the United States, and the European Union. These areas have been on some form of lockdown for the last few months. As a result, these countries are producing fewer carbon emissions.
The decrease in carbon emissions primarily has to do with how much (or how little) traveling we're doing. People are no longer commuting to work every day or traveling by plane nearly as much as they were. Countries that enforced some form of mandatory national lockdown saw daily ground transportation fall by 50% and daily air travel fall by 75%. This decrease can be attributed to social distancing standards, which require avoiding crowds and reducing contact.
However, while social distancing is working as a temporary measure, it will not last. Countries, including China, the United States, and those in the EU, are working to open up by relaxing social distancing guidelines. Their recovery plans are going to have a great impact on whether or not we continue to make progress.
Will this help climate change?
Stanford professor and Global Carbon Project chair Rob Jackson has predicted that global carbon output could fall by five percent over the course of the year, the lowest level since World War II. On some level, this is good news. Unfortunately, a recent United Nations report said that in order for us to prevent the worst effects of climate change, we would need emissions to drop at least 7.6% a year for decades.
In the past, such as after the global financial crisis, carbon emissions went down and then went up significantly after a period. It’s important to be aware of this trend as we move forward globally. The previously mentioned study said in its conclusion that “most changes observed in 2020 are likely to be temporary as they do not reflect structural changes in the economic, transport or energy systems.” Ultimately, social distancing is not enough. It’s a breather, rather than a real change. This is especially true considering that the way to address climate change isn’t to simply keep everyone home.
How can we stop climate change?
In order for this impact on global carbon emissions to stick, we would need to do significant work and keep climate change in our minds as we reopen and rebuild. For example, governments would need to do things like invest in cleaner energy. We would need to restructure society so we rely less on fossil fuels. Countries would need to work together through things like the Paris Agreement. We would also need to reconsider things like transportation and how we can improve our infrastructure.
According to NASA, preventing climate change would take both a global and a regional/local effort. There would need to be a two-pronged approach in which we work to lower global emissions and learn to adapt to the world as it is now. It's possible that we have already passed the point of no return, having gone past a series of tipping points, meaning we would need to learn to live in the world we have now even as we actively work to save it. The key to climate change is ultimately our global emissions, so as we open back up, we must factor climate change into our decisions.
What can individuals do?
There are things that individuals can do. Staying home when possible, taking public transportation or biking, and eating all of the food you buy will help. Being aware of your own habits and how they affect the environment is always good. However, it would take significant and systemic change for climate change to be effectively curtailed. Individual action is not enough to stave off climate change. We must think globally.
Social distancing has created a welcome break for the environment, but more work is needed for this change to have a real effect.
1. Exploring local plants
What do your pupils know about local resources? This part looks at raising your pupils’ awareness of natural resources – particularly plant resources – that are found in their local area.
A good way to do this is to bring in local experts to talk, as in Case Study 1. Experts bring a specialised knowledge from which both you and your pupils can learn. Using experts also makes learning exciting because it is different.
In Activity 1, you heighten your pupils’ awareness of their local environment through field trips in which they are actively involved in gathering data. (If you are working in an urban area, or it is not safe to let your pupils walk out near the school, you could change the activity to look at food in the market. Ask pupils to each name five foods from plants and to try to find out where the food was grown.)
Case Study 1: Exploring important local resources
Mrs Hlungwane teaches in Hoxane Primary School in Limpopo Province in South Africa and wants her pupils to develop their understanding of their own environment and its natural resources. She has read about local expertise and knowledge about medicinal plants, and thinks looking at local plants, including those used for healing, might be a good way to extend the idea of resources from Section 2. She decides to contact the seven local plant experts who live near the school and invites them to come and be interviewed by her pupils on a set date. They agree to bring some of the important plants growing in the area to show the pupils.
Mrs Hlungwane divides the class into seven groups, each to interview one of the visitors. She discusses with her pupils the importance of showing respect. Together they draw up a list of questions to ask. She suggests that they find out the following three things about each plant:
- what it is called;
- where it grows around the village;
- its food or medicinal properties.
Afterwards, having thanked their visitors and said farewell to them, each group reports back and Mrs Hlungwane writes this information on the chalkboard in three columns:
- Plants that I find near the school
- Is this plant cultivated?
- Do we use this plant? If yes, how do we use it?
Next, they discuss how to protect these plants, as they are an important resource for the community. They decide that learning to identify the plants so that they do not pick them is important. Also, that they should not trample them or damage the locality where they grow.
Finally, Mrs Hlungwane asks the pupils, in groups, to make posters of the main plants, showing the uses of each plant and where it grows.
Activity 1: Finding out about local plant resources
- The table will help pupils focus on exactly what you want them to do.
- Ask each pupil to draw a table to record their observations. Draw the table on the board for them to copy.
- Send them out in pairs into the area surrounding the school for, say, 30 minutes and ask them to fill in at least five lines of the table. Walk around with your pupils and support them as they work.
- If pupils don’t know the names of plants, encourage them to describe and/or draw them for later identification.
- When they return to class, draw a big version of the table on the board.
- Go around the class and fill in all the pupils’ findings on the big table.
- Ask the pupils what they have discovered from today's lesson about the natural environment and the kinds of resources it provides to the community.
Kidney cells, as the name implies, are the cells of the kidney. Before discussing kidney cells, some background on the kidney itself is useful. The kidney is an organ responsible for filtering the blood and making urine; it is a powerful chemical factory that performs the critical regulation of the body's salt, potassium and acid content. Everybody has two kidneys, each about the size of a fist, located on either side of the spine at the lowest level of the rib cage. Each kidney contains up to a million functional units called nephrons. A nephron consists of a filtering unit of tiny blood vessels, called a glomerulus, attached to a tubule. When blood enters the glomerulus, it is filtered, and the remaining fluid passes along the tubule. In the tubule, chemicals and water are either added to or removed from this filtered fluid according to the body's demands; the final product is the urine we excrete. Cells are the basic unit of organ formation and function. So what are the types of cells in the kidney? Let's look at the main types of kidney cells.
There are many different cell types in the kidney, including tubule epithelial cells, macula densa cells, glomerular endothelial cells, podocytes, mesangial cells and parietal epithelial cells (Figure 1).
Figure 1. The common types of kidney cells
The tubule epithelial cell forms the outer layer of the renal tubules; it reabsorbs all the glucose and amino acids in the glomerular filtrate and excretes other non-nutrients into the urine, playing a critical role in renal function. Tubular epithelial cells are the main site of injury in metabolic and inflammatory diseases. These cells can secrete several inflammatory mediators, including cytokines and chemokines, and actively participate in the acute inflammatory process by producing IL-8 to stimulate the differentiation of leukocytes.
The podocyte is a terminally differentiated cell of the kidney glomerulus that promotes the development of glomeruli, resists intraglomerular pressure, maintains the shape of the vascular loops and regulates the glomerular filtration rate. Podocytes also produce VEGF to regulate endothelial cells, participate in inflammation and immune responses, and synthesize and break down the glomerular basement membrane. Dysfunction of glomerular podocytes and subsequent cellular death have been found to be the driving forces behind disease initiation and progression, respectively [1].
The parietal epithelial cell (PEC) and the visceral podocyte make up the epithelial cells of the renal glomerulus. The PEC lines the inner surface of Bowman's capsule. A large body of evidence has recently suggested that PECs represent a reservoir of renal progenitors in the adult human kidney, which generate new podocytes during childhood and adolescence and can regenerate injured podocytes [2]. This evidence suggests that podocyte injury can be repaired.
The mesangial cell is a specialized pericyte that surrounds and constrains the vascular network within the glomerulus of the kidney. These cells are derived from the stromal mesenchyme, a progenitor population distinct from nephron stem cells [3]. The mesangial cell has a variety of functions, including synthesis and assembly of the mesangial matrix, endocytosis and processing of plasma macromolecules, and control of glomerular hemodynamics via mesangial cell contraction or the release of vasoactive hormones.
The macula densa cell is a renal sensor element that detects changes in distal tubular fluid composition and transmits signals to the glomerular vascular elements. This tubuloglomerular feedback mechanism plays an important role in regulating glomerular filtration rate and blood flow. The macula densa cell detects changes in luminal sodium chloride concentration through a complex series of ion transport-related intracellular events [4].
Cell markers are proteins that can distinguish a specific cell type from other types. Below, we list some of the secreted factors and cell markers of these kidney cell types.
Reiser J, Sever S. Podocyte biology and pathogenesis of kidney disease [J]. Annu Rev Med. 2013;64:357-366.
Romagnani P. Parietal epithelial cells: their role in health and disease [J]. Contrib Nephrol. 2011;169:23-36.
Boyle SC, Liu Z, Kopan R. Notch signaling is required for the formation of mesangial cells from a stromal mesenchyme precursor during kidney development [J]. Development. 2014;141(2):346-354.
Bell PD, Lapointe JY, Peti-Peterdi J. Macula densa cell signaling [J]. Annu Rev Physiol. 2003;65:481-500.
Dementia is a term used to describe a decline in cognitive function, including memory, language, problem-solving, and decision-making abilities. It is often, but not always, associated with aging and can be caused by a variety of conditions, including Alzheimer’s disease, vascular dementia, and frontotemporal dementia.
Dementia is a progressive condition, which means that it typically gets worse over time. The rate of progression can vary widely, and some people may experience a slow decline while others may experience a more rapid decline. There is currently no cure for dementia, but there are treatments that can help manage the symptoms and improve quality of life for those affected by the condition.
It is important to note that not all memory loss or cognitive decline is due to dementia. Many other conditions, such as depression, medication side effects, and vitamin deficiencies, can also cause these symptoms. A thorough evaluation by a healthcare professional is necessary to determine the cause of cognitive decline and the appropriate course of treatment.
Symptoms of dementia can vary widely, and the specific symptoms may depend on the type and severity of the condition. Some common symptoms of dementia include:
- Memory loss: This is often the most noticeable symptom of dementia and can involve forgetting recent events, conversations, and appointments.
- Difficulty with language: People with dementia may have trouble finding the right words to use or may use the wrong words when speaking or writing.
- Difficulty with problem-solving and decision-making: Dementia can affect a person’s ability to think and plan, making it difficult to solve problems or make decisions.
- Changes in mood or behavior: People with dementia may become more agitated, anxious, or withdrawn, or may exhibit unusual or inappropriate behavior.
- Difficulty with spatial awareness: Dementia can affect a person’s ability to navigate familiar places, leading to disorientation or getting lost.
- Loss of motivation: People with dementia may lose interest in activities that they previously enjoyed or may have trouble initiating or completing tasks.
- Difficulty with communication: Dementia can affect a person’s ability to communicate, both in terms of expressing themselves and understanding others.
- Decreased ability to perform daily activities: Dementia can make it difficult to manage daily activities, such as bathing, dressing, and eating.
It is important to note that not all people with dementia will experience all of these symptoms, and the specific symptoms may vary depending on the type and severity of the condition. A healthcare professional can help identify the specific symptoms and determine the appropriate course of treatment.
Types of Dementia
There are several types of dementia, each with its own set of characteristics and underlying causes. Some common types of dementia include:
- Alzheimer’s disease: This is the most common type of dementia and is caused by abnormal protein deposits in the brain that lead to nerve cell death. It typically affects people over the age of 65 and is characterized by memory loss, difficulty with language, and changes in behavior and personality.
- Vascular dementia: This type of dementia is caused by reduced blood flow to the brain, often due to a stroke or series of small strokes. It can cause symptoms similar to those of Alzheimer’s disease, but the decline in cognitive function may be more sudden and may occur in specific areas, such as difficulty with planning and decision-making.
- Frontotemporal dementia: This type of dementia is caused by damage to the frontal and temporal lobes of the brain, which are involved in decision-making and behavior. It is typically diagnosed in people under the age of 65 and is characterized by changes in behavior and personality, such as becoming more impulsive or inappropriate.
- Dementia with Parkinson’s disease: People with Parkinson’s disease, a progressive neurological disorder, may also develop dementia. The symptoms of dementia in this case may be similar to those of Alzheimer’s disease or frontotemporal dementia and may include memory loss, difficulty with language, and changes in behavior and personality.
- Dementia with HIV/AIDS: HIV infection or AIDS can lead to a type of dementia called HIV-associated neurocognitive disorder (HAND). This type of dementia may cause symptoms such as memory loss, difficulty with concentration, and changes in behavior and personality.
Is There a Cure for Dementia?
Currently, there is no cure for dementia. Dementia is a progressive condition, which means that it typically gets worse over time. The rate of progression can vary widely, and some people may experience a slow decline while others may experience a more rapid decline.
While there is no cure for dementia, there are treatments that can help manage the symptoms and improve quality of life for those affected by the condition. These treatments may include medications to help with specific symptoms, such as memory loss or behavioral changes, as well as non-pharmacological approaches, such as therapy, support groups, and lifestyle changes.
It is important to note that the specific treatment plan will depend on the type and severity of the dementia, as well as the individual’s needs and preferences. A healthcare professional can help determine the most appropriate course of treatment.
Research into the causes and potential treatments for dementia is ongoing, and there is hope that scientists will eventually find a way to slow or halt the progression of the condition. In the meantime, it is important for people with dementia and their loved ones to work with a healthcare team to manage the symptoms and maintain a good quality of life.
How can I reduce the risk of dementia?
There are several things that you can do to reduce your risk of developing dementia or to delay its onset:
- Maintain a healthy lifestyle: Eating a healthy diet, getting regular exercise, and not smoking can help reduce the risk of developing dementia.
- Stay mentally and socially active: Engaging in activities that challenge the brain, such as reading, puzzles, and learning new skills, can help maintain cognitive function. Staying connected with friends and family and participating in social activities can also have a protective effect on the brain.
- Control risk factors for cardiovascular disease: High blood pressure, high cholesterol, and diabetes are all risk factors for both cardiovascular disease and dementia. Managing these conditions can help reduce the risk of developing dementia.
- Get enough sleep: Chronic sleep deprivation has been linked to an increased risk of developing dementia. Aim for 7-9 hours of sleep per night.
- Protect your head: Head injuries, especially repeated or severe injuries, can increase the risk of developing dementia. Wear a helmet when engaging in activities that carry a risk of head injury, such as biking or skiing.
It is important to note that these steps may not completely prevent the development of dementia, but they may help reduce the risk or delay its onset. If you are concerned about your risk of developing dementia, it is a good idea to discuss your concerns with a healthcare professional. They can help you develop a plan to maintain your cognitive health and reduce your risk of developing dementia.
Mensuration is the process of measurement. It uses algebraic equations and geometric calculations to determine measurements such as the width, depth and volume of a specific object or group of objects.
Units of measure include length, time and volume; these are all examples of quantities that can be measured.
When looking at 'standard' units of measure, we are focusing on the units most frequently used to measure a quantity, for example one second, one kilometre or one square metre.
1 centimetre (cm) = 10 millimetres (mm)
1 metre (m) = 100 centimetres (cm)
1 mile = 1.60934 kilometres (km)
One second (1s) is an example of a measurement of time.
One kilometre (1 km) is an example of a measurement of length.
(‘Kilometre’ is a unit used to measure long distances. ‘Metres’ are also used to measure long distances.)
One square metre (1 m²) is an example of a measurement of area.
One cubic metre (1 m³) is an example of a measurement of volume.
For example, the distance between London and Manchester is 264 km.
The battery above is 2.6 cm wide, or 26 mm wide.
Quantities such as length, time and volume can be measured.
Long distances tend to be measured in kilometres (km) and miles.
Short distances tend to be measured in metres (m), centimetres (cm) and millimetres (mm).
Area is usually measured in km², m², cm² and mm².
Volume is usually measured in m³, cm³ and mm³.
The formula for speed is: speed = distance ÷ time.
Speed is measured in m/s, km/h and mph.
Density is mass/volume and is measured in kg/m³ and g/cm³.
Pressure is force/area and is measured in pascals (Pa).
The most important unit conversions are the ones listed above: 1 cm = 10 mm, 1 m = 100 cm, and 1 mile = 1.60934 km.
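A minimal sketch that puts the conversions and the three formulas together (the three-hour journey time in the example is an assumed, illustrative value, not taken from the text):

```python
MM_PER_CM = 10          # 1 centimetre = 10 millimetres
CM_PER_M = 100          # 1 metre = 100 centimetres
KM_PER_MILE = 1.60934   # 1 mile = 1.60934 kilometres

def speed(distance_m: float, time_s: float) -> float:
    """Speed = distance / time, in metres per second."""
    return distance_m / time_s

def density(mass_kg: float, volume_m3: float) -> float:
    """Density = mass / volume, in kg/m3."""
    return mass_kg / volume_m3

def pressure(force_n: float, area_m2: float) -> float:
    """Pressure = force / area, in pascals (Pa = N/m2)."""
    return force_n / area_m2

# The London-Manchester distance above, covered in an assumed 3 hours:
print(speed(264_000, 3 * 3600))   # ~24.4 m/s (about 88 km/h)
print(2.6 * MM_PER_CM)            # the 2.6 cm battery is 26 mm wide
print(264 / KM_PER_MILE)          # 264 km is about 164 miles
```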
Rules, Rules, Rules
ACTIVITY: Rules Candy Game
Grade Level: K-3

Introduction / Objectives:
This activity will help students understand the need for rules, the rulemaking process, and the role of the student/citizen. Students will be introduced to the relationship between rules and laws and how citizens can establish laws in their communities, much like rules in the classroom, to help them live together. The classroom constitution will provide a foundation for understanding and reinforcing the principles and ideals which provide the framework for American democracy. This exercise may be most useful at the beginning of the school year to establish a "democratic classroom" learning environment. Work directly with the teacher as you plan this exercise.

Instructional Strategies: Game, Brainstorming, Problem Solving, Reflective Thinking

Standards (Grades K-3, Civics and Government):
- Grade K: Standard 1 SS.K.C.1.1, SS.K.C.1.2
- Grade 1: Standard 1 SS.1.C.1.1
- Grade 2: Standard 1 SS.2.C.1.2; Standard 3 SS.2.C.3.2
- Grade 3: Standard 1 SS.3.C.1.1, SS.3.C.1.3; Standard 3 SS.3.C.3.4

Resources Needed: candy or erasers (or some other item to use in the game), parchment paper if available, bulletin board

Part I: "Developing the Activity / Getting Started"

Begin by telling the students that you want them to play a new game to help them understand about government and laws. Divide the class into 3 or 4 straight lines and give the first person in each line a piece of candy, or another item. Then say, "Okay everybody, let's play the 'candy' game." Don't say anything else. The children will probably look confused and ask how to play the game. Follow the directions below to help the students see that they need to have rules in order to play the game.

1. Tell the first person in each line to pass the candy from the front to the back of the line. Tell the last person to bring the candy up to the first person in line.
2. After students begin to play, interrupt the game at intervals to give one of the following directions:
- Oh, you must pass the candy with your eyes closed.
- Oh, you must pass the candy with your left hand.
- Oh, everyone should be on their knees.
- Oh, you are to come backwards to me when you bring the candy.
3. After each interruption, ask teams to begin again.
4. Stop to review problems with the students.
5. Note that they had difficulties because of the way that the rules were given. Lack of agreement about the rules and constant changes of direction lead to confusion. Ask students, "What is the purpose of having a rule? Who must follow the rules? What happens if a rule is broken? What happens if a rule is not clear?"
6. Ask students, "How are rules made?" Write, "A rule should be clear and easy to follow," on the board. Work with the group to develop a clear set of rules for the Candy Game. List the students' suggestions, then vote to select a few simple rules for the game.
7. Play the game again to demonstrate that clear rules and directions make for a good experience while playing together.
8. Let students know that rules for children are a lot like laws for adults.

Part II: "Making Connections / Tests for Good Rules"

After playing the Candy Game with clear rules, observe that to play games together children need rules, and that to work and live together people also need rules. Ask the students why people need rules in families, on playgrounds, on buses, in schools, neighborhoods and communities. List the children's responses and review them all to consolidate ideas. Be sure to keep this list posted.

Tests for Good Rules:
- The rule should be easy to follow.
- The rule should be simply stated.
- The rule should include only activities we are able to perform.
- The rule must be reasonable.
- The rule must not go against another rule.
- The rule should be fair.
- The rule should help create a better place to live and learn in school.

Ask students how they can tell if a rule is a good rule. List their responses and tie them in to the rules they developed for the Candy Game. Post responses and add others listed in the "Tests for Good Rules" box, if not already included. Demonstrate how rules might not pass these tests. Give an example of a good rule and a bad rule. A bad rule example might be "Only boys can run in the hall by the rear exit if they are third in line and have lunch boxes." Why is this a bad rule? Because the rule is not fair to everyone and is not simply stated. The rule is also not easy to follow.

Follow Up: Ask students if they have rules in their families. In their journals or as a homework assignment, ask students to write a sentence and/or draw a picture of one family rule. Have students talk with their parents about family rules. Develop a bulletin board with rules.

Part III: "Creating a Classroom Constitution / Rules to Live By" (do with the teacher's permission and assistance if appropriate)

For a class that already has rules (written or unwritten), have the children begin by identifying them and have the teacher/recorder write these rules down. Then list any other rules the children believe the class needs. The group will then have two sets of rules: "Rules We Have" and "Rules We Need." For a class that has not established any rules, start them off by discussing and listing some potential rules on the chalkboard. Talk over the rules together and write the rules on a chart for all to see. Guide students in testing their rules (see Part II above).

Let students know that laws are a lot like rules. Laws provide rules for life much like school rules provide structure for the classroom. Tell students that they will use their classroom rules to develop a constitution. A constitution is a written plan to help establish and organize the rules we live by. A constitution also organizes how our government works. You can use your classroom constitution to help your classroom run smoothly. For awareness purposes, you may post copies of the U.S. Constitution (federal government) and the Florida Constitution (state government) on a bulletin board. Title the bulletin board, "Rules we live by."

"Rules are like laws for little people." (Elementary Student)

The students should now VOTE to accept or reject rules to include in the classroom constitution. After voting on the classroom rules, develop these into a classroom constitution. For a preamble, consider something like, "We the students in Mrs. Wesson's kindergarten class...". Then incorporate the rules you have agreed upon. Post the Classroom Constitution prominently on large chart paper and have each child and the teacher sign it. Students should understand that their signatures represent a commitment to obey the rules set forth in the constitution. If possible, give each student a small classroom constitution on parchment paper to take home and discuss with their parents.

Post your Classroom Constitution on your bulletin board once it is finalized. Refer to it often and consider changes as needed. Decide how the Constitution can be revised (e.g. by majority vote of the class?).

Part IV: "The Relationship Between Rules and Laws"

Discuss with students that laws are a lot like rules and that citizens have a part in making laws just like they had a part in making the classroom rules. Laws help us live together and help keep us safe. They can also help make sure we are treated fairly.

Ask students if they know any laws. List some on the board, such as seatbelts, stop signs, and speed limits. Because we have so many people in our communities, it is sometimes necessary to "elect" a few people to develop our community rules/laws for us. We vote for these people to help us make the rules/laws that let us live together in peace and freedom.

At the school level, we vote for student government/council members. Locally, we elect city and/or county commissioners and school board members to help with local rules and decisions. At the state level, we elect representatives and senators to make our state laws in Tallahassee. Federal laws are made in Washington, D.C. by U.S. Representatives and Senators.

Ask students to write or draw in their journals about what life would be like without rules and laws (chaos, confusion, etc.). You may want to ask if students have seen the show Kid Nation.

Life Without Rules Activity (supplemental; use after the Follow Up activity in Part II)

Have students read the following scenario and/or read it to the class. Then put the students together in groups of three to finish the story. Depending on age and writing skills, this exercise may vary from grade to grade. Have students work together to continue writing the story, or they can draw pages to represent what life might be like without rules. Finally, others may want to act out the dream. Be sure to share the work with all groups.

Callie did not like all the rules she had in her life. She had rules about making her bed in the morning and taking out the trash. She had rules about what time to go to bed and when to brush her teeth. Callie had rules at home, at school, and even with her friends on the playground. She had rules about what she could and could not eat and what she could and could not wear. She had rules about using a seatbelt when she was in a car and using a helmet when she was riding her bike. Callie had rules when she played soccer and rules when she went to a movie. Callie's life was filled with rules. Rules, rules, rules...

Callie wondered what life might be like without any rules. That night as she went to sleep, she smiled as she thought about how things would be different in her life without rules. As she dozed off, Callie entered a life without rules in her dream.

Part V: "Solving a Community Issue (The Citizen's Role)"

In this concluding activity, students will assist in developing rules for their community. Tell students that they will need to develop rules for a pretend park. What rules will be needed to keep the park safe, clean, and fun for the children in the community? Ask each child to draw the park and write/draw one rule for the park. Have each student read the rule they developed and show their picture.

Conclude with praise about what good citizens the students have been, and note that all citizens have a responsibility to help develop and test rules. It is also the responsibility of citizens to follow the rules.

Adapted in part from: Rules, Rules, Rules, developed by David T. Taylor et al., the Ohio Bar Association, 1980; Sure Fire Presentations, American Bar Association Special Committee on Youth Education for Citizenship, 1993; Elementary Law Related Education Source Guide, grades 3-6, Cleveland Public Schools, 1981; The Eraser Game (adapted from "The Buckle Game" designed by Harriett Bickleman Joseph); "Creating a Classroom Constitution: Learning Experience for Elementary Law Related Education," Beverly Dulaney, Joyce DeMasi, Jacquelyn Lendsey, Laura Ellen Stein, Social Education Journal, May 1980.
Environmental medicine, a branch of medical science, stands at the intersection of human health and the environment. It focuses on understanding how environmental factors, such as pollutants, toxins, and climate change, impact human health and seeks innovative ways to mitigate these effects. In this comprehensive 3000-word blog, we will delve deep into the world of environmental medicine, exploring its principles, the key environmental health challenges we face today, and the promising strategies for a healthier future.
Understanding Environmental Medicine
Environmental medicine, also known as environmental health, is a multidisciplinary field that examines the relationship between the environment and human health. Its primary objectives include:
1. Identifying Environmental Health Risks:
- Environmental medicine strives to identify and assess environmental factors that can harm human health. These factors may include pollutants, toxic chemicals, pathogens, and physical hazards.
2. Preventing and Mitigating Health Effects:
- The field aims to prevent and reduce the health effects of environmental exposures by implementing policies, regulations, and public health interventions.
3. Promoting a Healthy Environment:
- Environmental medicine advocates for sustainable practices and policies that protect both human health and the environment.
Key Environmental Health Challenges
Environmental medicine addresses a wide range of challenges and concerns that affect individuals and communities worldwide. Some of the most pressing environmental health issues include:
1. Air Pollution:
- Poor air quality, often due to the release of pollutants from vehicles, industrial facilities, and energy production, contributes to respiratory diseases, cardiovascular problems, and premature deaths.
2. Water Contamination:
- Water sources contaminated with chemicals, heavy metals, and pathogens pose significant health risks, including waterborne diseases and chronic health conditions.
3. Climate Change:
- Climate change leads to extreme weather events, rising temperatures, and shifts in disease vectors, impacting human health through heat-related illnesses, infectious diseases, and food insecurity.
4. Chemical Exposures:
- Exposure to hazardous chemicals in the environment, such as pesticides, industrial pollutants, and endocrine-disrupting compounds, can lead to various health issues, including cancer and developmental disorders.
5. Vector-Borne Diseases:
- Changes in climate and ecosystems affect the distribution of disease-carrying vectors like mosquitoes and ticks, increasing the prevalence of diseases such as malaria, Zika virus, and Lyme disease.
6. Food Safety and Nutrition:
- Ensuring food safety and access to nutritious foods is vital for preventing foodborne illnesses and addressing malnutrition and diet-related health problems.
7. Radiation Exposure:
- Exposure to ionizing radiation, whether from medical procedures, nuclear accidents, or natural sources, can lead to cancer, genetic mutations, and other health concerns.
8. Hazardous Waste and Pollution:
- The improper disposal of hazardous waste and the contamination of land and water sources can result in health risks for nearby communities.
The Principles of Environmental Medicine
Environmental medicine operates on several core principles that guide its approach to addressing environmental health challenges:
1. Precautionary Principle:
- The precautionary principle asserts that when an activity could plausibly harm human health or the environment, protective action should be taken even in the absence of full scientific consensus.
2. Primary Prevention:
- Emphasizing primary prevention, environmental medicine seeks to prevent environmental health risks from occurring in the first place through policies, regulations, and education.
3. Accountability and Transparency:
- Environmental medicine advocates for transparency in decision-making processes, accountability for polluters, and access to information about environmental risks.
4. Equity and Environmental Justice:
- Recognizing that vulnerable populations often bear the brunt of environmental health risks, environmental medicine strives for equity and justice in addressing these disparities.
5. Interdisciplinary Collaboration:
- The field relies on collaboration between various disciplines, including medicine, public health, ecology, and policy, to address complex environmental health challenges.
Environmental Medicine in Practice
Environmental medicine involves a range of activities and strategies to protect human health and the environment:
1. Epidemiological Studies:
- Epidemiological research investigates the relationships between environmental exposures and health outcomes, providing evidence for policy and regulatory decisions.
2. Environmental Monitoring:
- Continuous monitoring of air and water quality, as well as other environmental factors, helps identify trends and emerging threats to human health.
3. Risk Assessment:
- Environmental health professionals assess the risks associated with exposure to specific environmental contaminants and develop strategies to mitigate those risks (a generic screening calculation is sketched after this list).
4. Public Health Interventions:
- Public health interventions, such as vaccination campaigns, vector control, and water treatment, play a critical role in preventing environmental health threats.
5. Policy and Advocacy:
- Environmental medicine professionals engage in advocacy efforts to influence policies that protect public health and the environment, such as regulations on air and water quality.
6. Education and Outreach:
- Raising public awareness about environmental health risks and promoting sustainable practices is an essential component of environmental medicine.
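To make the risk assessment step concrete, here is a generic screening-level calculation of the kind used for non-carcinogenic contaminants: an estimated average daily dose is compared with a reference dose (RfD) to give a hazard quotient. The formula is standard textbook material rather than anything specific to this article, and every number in the example (concentration, intake rate, body weight, RfD) is purely illustrative.

```python
def average_daily_dose(conc_mg_per_l: float, intake_l_per_day: float,
                       exposure_days: float, body_weight_kg: float,
                       averaging_days: float) -> float:
    """Average daily dose in mg per kg of body weight per day."""
    return (conc_mg_per_l * intake_l_per_day * exposure_days) / (
        body_weight_kg * averaging_days)

# Illustrative inputs only: drinking water with 0.05 mg/L of a contaminant,
# 2 L/day intake, exposure on every day of the averaging period, 70 kg adult.
add = average_daily_dose(0.05, 2.0, 365, 70.0, 365)

rfd = 0.004  # assumed reference dose in mg/kg-day (illustrative)
hazard_quotient = add / rfd

print(f"ADD = {add:.4f} mg/kg-day, HQ = {hazard_quotient:.2f}")
# HQ > 1 flags potential concern; HQ <= 1 suggests low risk.
```

In practice, agencies pair calculations like this with exposure-specific parameters and uncertainty factors; the point here is only the shape of the reasoning.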
Climate Change and Health
Climate change is a significant environmental health challenge with wide-ranging impacts on human well-being. Some key health effects of climate change include:
1. Heat-Related Illnesses:
- Rising temperatures can lead to heat-related illnesses, particularly in vulnerable populations, such as the elderly and children.
2. Vector-Borne Diseases:
- Changes in temperature and precipitation patterns affect the distribution of disease vectors, increasing the prevalence of diseases like malaria, dengue, and Lyme disease.
3. Respiratory Problems:
- Air pollution and allergen concentrations are likely to rise due to climate change, exacerbating respiratory conditions such as asthma and allergies.
4. Food and Water Insecurity:
- Climate change disrupts food production and water availability, leading to food and water scarcity and malnutrition.
5. Mental Health Impacts:
- The psychological toll of climate-related disasters, displacement, and uncertainty can have long-lasting effects on mental health.
Environmental Medicine and Sustainable Healthcare
Sustainable healthcare is an emerging concept that aligns healthcare practices with environmental stewardship and ethical principles. Key components of sustainable healthcare include:
1. Green Healthcare Facilities:
- Designing healthcare facilities with energy efficiency, reduced waste, and sustainable materials to minimize their environmental footprint.
2. Reducing Medical Waste:
- Implementing strategies to reduce, reuse, and recycle medical equipment and supplies to minimize waste generation.
3. Sustainable Medical Practices:
- Healthcare providers can adopt environmentally friendly practices, such as reducing unnecessary diagnostic tests and prescribing eco-friendly medications.
4. Promoting Public Transportation:
- Encouraging healthcare staff and patients to use public transportation or carpooling to reduce greenhouse gas emissions.
The Role of Individual Action
Individuals also have a role to play in promoting environmental health and sustainable living:
1. Reducing Carbon Footprint:
- By reducing energy consumption, conserving water, and adopting eco-friendly transportation options, individuals can contribute to mitigating climate change.
2. Minimizing Toxin Exposure:
- People can minimize exposure to environmental toxins by using safe household products, eating organic foods, and avoiding tobacco and excessive alcohol consumption.
3. Supporting Sustainable Practices:
- Individuals can support sustainable agriculture and businesses that prioritize environmental responsibility.
4. Advocating for Change:
- Citizens can advocate for environmental policies and regulations that protect both human health and the planet.
The Future of Environmental Medicine
Environmental medicine will continue to play a vital role in safeguarding human health and the environment as we confront ongoing and emerging environmental health challenges. Key trends shaping the future of environmental medicine include:
1. Technological Advancements:
- Advances in environmental monitoring technologies, data analytics, and remote sensing will enhance our ability to track and respond to environmental health threats.
2. Interdisciplinary Collaboration:
- Collaboration among scientists, healthcare professionals, policymakers, and communities will be essential to address complex environmental health issues.
3. Global Health:
- Environmental medicine will increasingly focus on global health, recognizing that environmental health challenges are interconnected across borders.
4. Climate Resilience:
- Strategies for building climate resilience in communities and healthcare systems will be crucial to adapting to the health impacts of climate change.
Environmental medicine is a dynamic and multidisciplinary field that bridges the gap between human health and the environment. It tackles complex environmental health challenges, from air and water pollution to climate change, with the aim of protecting both people and the planet.
In an era where environmental health threats are becoming increasingly evident, environmental medicine stands as a beacon of hope, guiding us toward sustainable practices, informed policies, and a healthier, more resilient future. As individuals, communities, and societies, we must embrace the principles of environmental medicine and work together to safeguard the well-being of current and future generations while preserving the precious ecosystems that sustain us. |
The partisan political cartoon above (Figure 8.1) lampoons Thomas Jefferson’s 1807 Embargo Act, a move that had a devastating effect on American commerce. American farmers and merchants complain to President Jefferson, while the French emperor Napoleon Bonaparte whispers to him, “You shall be King hereafter.” This image illustrates one of many political struggles in the years after the fight for ratification of the Constitution. In the nation’s first few years, no organized political parties existed. This began to change as U.S. citizens argued bitterly about the proper size and scope of the new national government. As a result, the 1790s witnessed the rise of opposing political parties: the Federalists and the Democratic-Republicans. Federalists saw unchecked democracy as a dire threat to the republic, and they pointed to the excesses of the French Revolution as proof of what awaited. Democratic-Republicans opposed the Federalists’ notion that only the wellborn and well educated were able to oversee the republic; they saw it as a pathway to oppression by an aristocracy.
Origins of U.S. Government
- ARTICLES OF CONFEDERATION
- NORTHWEST ORDINANCE
- THE VIRGINIA, OR RANDOLPH, PLAN
- THE NEW JERSEY, OR PATERSON, PLAN
- CONSTITUTION OF THE UNITED STATES
- BILL OF RIGHTS
- FEDERALIST, NUMBER 10
- FEDERALIST, NUMBER 78
- THE VIRGINIA AND KENTUCKY RESOLVES
- MONROE DOCTRINE
After declaring their independence in 1776, the thirteen states had to determine both what type of central government they should form and how the individual states would be related to that central government. Their initial efforts to answer those questions resulted in the ARTICLES OF CONFEDERATION. The Articles were drafted in 1776 but were modified during the ratification process, which ended when the Articles went into effect on March 1, 1781.
The Articles of Confederation created a weak national government, which lacked both an executive and a judicial branch. The national government consisted only of a Congress, which prosecuted the end of the WAR OF INDEPENDENCE and negotiated the TREATY OF PARIS. By the end of the war, however, the Congress of the Confederation of the States found itself receiving less cooperation from the individual states. The Congress did enact the NORTHWEST ORDINANCE in 1787, which provided for the government of the Northwest Territory and established a procedure by which states could be carved out of the territory.
Dissatisfaction with the Articles of Confederation grew during the 1780s until Congress finally summoned a convention to amend and revise the Articles. All of the states except Rhode Island sent delegates to the convention, which convened in Philadelphia, Pennsylvania, in May 1787. A fundamental problem for the delegates was resolving a split between the states that favored a strong national government and those that preferred the strong state governments established by the Articles of Confederation.
As the convention debated the issues, it soon became apparent that a stronger national government was needed and that the Articles of Confederation would have to be replaced. A major conflict developed, however, between the large states, which favored a legislature apportioned by population, and the small states, which preferred a system under which each state would have an equal vote. The large states proposed the Virginia Plan, also known as the Randolph Plan, and the small states offered the New Jersey, or Paterson, Plan. At first, neither side would yield on the issue of representation. Finally, ROGER SHERMAN, along with OLIVER ELLSWORTH, proposed the Connecticut, or Great, Compromise, which called for a bicameral legislature with proportional representation in the lower house and equal representation in the upper house.
The U.S. Constitution was completed on September 17, 1787. It established three branches of government (legislative, executive, judicial) with an intricate set of checks and balances aimed at preventing one branch of government from gaining absolute control. The separation of powers is one of the hallmarks of the Constitution. The Framers did not, however, resolve the question of slavery. Southern states won the Three-fifths Compromise, which allowed them to count each slave as three-fifths of a white person in apportioning the House of Representatives and the electoral college.
Though opponents of the Constitution argued that it gave too much power to the national government, it was ratified by the requisite number of states by June 1788. GEORGE MASON, drafter of the VIRGINIA DECLARATION OF RIGHTS, and other STATES' RIGHTS advocates opposed ratification because the Constitution included no guarantees of basic personal liberties. In response, the first Congress convened under the Constitution in 1789 enacted the first ten amendments to the Constitution, known as the BILL OF RIGHTS.
During the ratification battle of 1787 and 1788, ALEXANDER HAMILTON, JAMES MADISON, and JOHN JAY wrote eighty-five short essays in support of the Constitution. The essays, known as the FEDERALIST PAPERS, sought to convince the voters of New York to persuade their legislators to vote in favor of the proposed federal constitution. The writers so clearly articulated the reasoning and scope of many of the constitutional provisions that the Federalist Papers have taken on lasting historical and legal significance.
The early years of the Republic saw a clash between the Federalists, led by Hamilton, and the Republicans, led by Thomas Jefferson. Jefferson and other proponents of strong state governments accused Hamilton and other advocates of a strong national government of going beyond the constitutional restrictions on the power of the national government. This debate escalated after the federal ALIEN AND SEDITION ACTS (1 Stat. 570, 596) were enacted in 1798. Jefferson and Madison prepared resolves, or resolutions, for the Virginia and Kentucky legislatures that proposed a "compact" theory of the U.S. Constitution. Under this theory state legislatures possessed all powers not specifically granted to the federal government, and states had the right to pass upon the constitutionality of federal legislation.
In the first years of the new nation, it was unclear whether the Supreme Court had the right to review an executive or legislative act and invalidate it if the act was contrary to constitutional principles. Article III of the Constitution was silent on the subject, but the Supreme Court settled the issue in 1803, when it ruled in MARBURY V. MADISON, 5 U.S. (1 Cranch) 137, 2 L. Ed. 60, that a particular act of Congress was unconstitutional.
The United States entered the field of international relations in 1823, when President JAMES MONROE enunciated a statement on foreign policy that has come to be known as the MONROE DOCTRINE. The Monroe Doctrine asserted U.S. dominance over the Western Hemisphere and warned European nations not to interfere with the free nations of the region.
The ways in which people acquire group or ethnic identities are quite complex. The way people assign such identities to others is equally intricate and far from straightforward. Ethnic and racial labels in the United States rest on no clear, agreed-upon criteria. As a result, people may assume another person's identity or apply a criterion that is inaccurate or offensive. Such errors can hinder open discussion and communication, albeit unintentionally. Understanding the psychological reasoning behind such classifications is essential to filling the theoretical and empirical gaps. It is worth noting that simply determining or knowing a person's ethnicity does very little to define or explain their specific emotional, cognitive, social, and mental outcomes. In essence, it is impossible to know a person's character or behavior based simply on their perceived ethnicity.
Understanding the diversity of the US or any other country depends heavily on understanding the criteria used to make the existing distinctions. Biological and cultural aspects are the key determinants of these distinctions. People tend to view others based on economic class, age, gender, religion, race, and ethnic background. Which criterion matters most varies with the leanings of a particular group. For example, in the contemporary US, gender, race, and ethnicity have the most wide-ranging impact on the citizenry. Ethnicity mostly refers to both the physical and cultural characteristics that people use to classify others based on certain differences or distinctions. The most commonly recognized ethnic groups in America include, but are not limited to, Latinos, African Americans, American Indians, and European Americans. In some cases, ethnicity entails a loose group identity with no specific cultural traditions in common. Such is the case with German and Irish Americans. Nevertheless, certain groups consist of coherent subcultures with a body of traditions and a shared language. New immigrants form part of the latter category.
Ethnic groups may form either the minority or the majority of a population, and such status is not absolute, since it depends heavily on perspective. For example, along the southern borders of the US, Mexicans form the majority and shape significant aspects of the political, social, and economic composition of those areas, yet the government still considers them a minority. For many people, ethnic categorization implies a connection between culture and biological inheritance, but no such connection exists. Cultural traits are learned, which means they have nothing to do with one's genetic composition.
As stated, ethnicity manifests in different ways, so new approaches are needed for understanding ethnic groups, especially in the current era of globalization and increased immigration. Any definition of ethnicity should be comprehensive enough to encompass status, identity, and culture. Additional research is needed to determine precisely the cultural values and norms that distinguish ethnic groups, their status, and variations in how they respond to particular experiences. When comparing two or more groups, efforts should focus on matching samples to avoid magnifying differences in demographic variables. A better method is to hold ethnicity constant, meaning that learning about another group is more beneficial than drawing comparisons (Phinney, 1996). Recognizing that ethnicity is far more complex than earlier perceived is a positive step towards understanding the cluster formation that defines ethnic groups.
Phinney, J. S. (1996). When we talk about American ethnic groups, what do we mean? American Psychologist, 51(9), 918.
Contrary to popular belief, math requires creativity as well as analytical thinking. Most people think there is only one way to do math, but in actuality there are often a number of different solutions; you just need the creativity to see them. We all have what it takes to excel in areas that don't seem to come naturally to us at first, and learning them does not have to be as painful as we might think! Here are 10 tips on how to excel at math and science.
- Switch It Up: Excelling at math and science requires two thinking modes: the focused mode and the diffuse mode. Focused mode is direct, rational, and completely immersed in the subject. Diffuse mode is a big-picture way of thinking, where you are relaxed and let your mind wander. Struggling to learn something new? Try switching between intense focused mode and relaxed diffuse mode, then back again. Exercising, drawing, painting, or playing music between study sessions are great diffuse-mode activities!
- Take a picture walk: Before diving into the study material, glance through the chapter or the test questions. This helps create mental hooks to hang your thinking on and really understand the concepts.
- Ask for help: Stuck on a problem or having a difficult time understanding a concept? Ask a friend or a teacher to explain it to you. Their way of thinking may just be the solution to your problem!
- Write it out: If you are struggling to remember information, writing it out will help to more deeply encode what you are trying to learn.
- Spaced Repetition: Go over the subject for a few minutes every day for several days after you have studied it. (A short scheduling sketch appears after this list.)
- Take a step back: When you find yourself getting frustrated take a mental step back. Frustration is a sign that you need to take a break!
- Think like an actor: Actors tend to memorize scripts by understanding the characters' emotions and motivations rather than directly memorizing the lines. Try to understand the concepts before you try to memorize them.
- Practice interleaving: do a mixture of different kinds of problems requiring different strategies. Once you feel you know a concept, try to recall or practice it in different ways, not just the way you learnt it. Even trying to recall the material in a different order helps!
- Take it outside: Try to recall the material when you are outside your usual place of study; looking at it from a different perspective helps you strengthen your grasp of the material.
- Don't procrastinate: When you leave things until the last minute, you end up with only enough time for superficial learning, and it also increases your stress level. Any brain connections made to understand the topic will be faint and fragmented and will disappear quickly. Give yourself plenty of time to study so you can understand and remember better!
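The spaced-repetition tip above is easy to mechanize. Below is a minimal sketch of a widening review schedule; the doubling interval and session count are assumptions for demonstration, not rules from the article.

```python
from datetime import date, timedelta

def review_schedule(start: date, sessions: int = 5, first_gap_days: int = 1):
    """Return review dates with gradually widening gaps.

    The doubling interval is an illustrative choice: any schedule that
    spreads short reviews across several days exercises the same principle.
    """
    schedule = []
    gap = first_gap_days
    day = start
    for _ in range(sessions):
        day = day + timedelta(days=gap)
        schedule.append(day)
        gap *= 2  # widen the gap after each successful review
    return schedule

if __name__ == "__main__":
    for d in review_schedule(date(2024, 1, 1)):
        print(d.isoformat())
```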
Check out Smore Magazine for ages 8+ and get the little ones excited about learning math and science with tips and methods.
Adapted with permission from A Mind for Numbers by Barbara Oakley, Ph.D. (Penguin Random House). This article first appeared in Smore Issue #13. Get your copy of Smore here (www.smoremagazine.com/subscribe) |
How did NEAR Shoemaker make this inference, and what might it mean for the history of Eros?
NEAR Shoemaker has made the first detection of x-rays from an asteroid. These x-rays are not generated by Eros in the sense that the sun generates x-rays, or that x-ray tubes and synchrotrons on Earth generate x-rays (all of these produce x-rays by first creating high energy electrons, and then smashing them into a target or bending their paths in a magnetic field).
Eros produces x-rays when illuminated with x-rays from the sun, but this emission process is fundamentally different from that which produces the reflected sunlight imaged by the camera. An x-ray photon from the sun can be absorbed by a single atom in the surface of Eros, if the photon has enough energy to eject an electron from the atom (the so-called photoelectric effect).
We are interested in the case when the electron is ejected from the innermost shell. The atom is now missing an electron, and the vacancy in the innermost shell is quickly filled when another electron in a higher energy shell (usually, the next higher shell) drops in, emitting a new x-ray photon at one of the characteristic energies of the atom.
This energy is usually close to, but less than, that of the original, incident x-ray photon. The filling of the first vacancy can, of course, create another vacancy, leading to emission of another characteristic photon (at lower energy) when the new vacancy is filled, and so on.
This entire process, in which the absorbing material emits a spectrum of characteristic radiations after a photoelectric absorption, is called fluorescence.
It is how ordinary fluorescent lamps work. Eros is acting as a fluorescent lamp, except that it glows in x-rays rather than visible light. The ordinary (diffuse) reflection process (meaning that by which Eros shines in visible light) also occurs for x-rays, but fluorescence is more important for x-rays.
The characteristic x-rays are the fingerprints of Fe, Mg, Si and other key elements at the surface of Eros. By measuring the strengths of these emissions, NEAR Shoemaker measures the numbers of the corresponding atoms in the surface.
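To make the "fingerprints" idea concrete, here is a minimal sketch that matches a measured line energy to tabulated K-alpha values. The energies are standard reference figures, but the lookup function, its tolerance, and the element list are illustrative assumptions, not a description of the NEAR instrument pipeline.

```python
# Approximate K-alpha fluorescence line energies in keV (standard
# tabulated values); the matching logic below is illustrative only.
KALPHA_KEV = {"Mg": 1.25, "Al": 1.49, "Si": 1.74, "S": 2.31, "Ca": 3.69, "Fe": 6.40}

def identify_element(measured_kev, tolerance=0.05):
    """Match a measured fluorescence line to the nearest tabulated K-alpha
    energy; return None if nothing lies within the (assumed) tolerance."""
    element, energy = min(KALPHA_KEV.items(),
                          key=lambda kv: abs(kv[1] - measured_kev))
    return element if abs(energy - measured_kev) <= tolerance else None

print(identify_element(6.41))  # -> Fe
print(identify_element(1.76))  # -> Si
```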
This measurement tells us about differentiation, because if the kind of differentiation we are interested in for Eros has occurred, it would affect the relative proportions of these elements.
Specifically, we are looking for metal-silicate differentiation, in which iron-nickel liquids settle down into the center of the body, leaving behind upper layers of silicate rock.
This process naturally depletes the iron at the surface relative to silicon, for example, together with any species that would preferentially dissolve in the iron-rich melt.
This type of differentiation occurred at Earth, together with other types -- the silicate layers outside Earth's core differentiated further into a mantle and a crust of distinct compositions. In the outer solar system, still more types of differentiation are found, such as between a rocky silicate core and an icy exterior for Ganymede at Jupiter.
NEAR Shoemaker's x-ray measurements so far tell us that Eros is not differentiated, which means that iron-nickel grains on Eros are still intimately mingled with silicate grains, as they would be in primitive meteorites like the ordinary chondrites.
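A toy version of that test: compare a measured Fe/Si abundance ratio to a chondritic reference. Both the reference ratio and the depletion cutoff below are placeholder assumptions, not NEAR calibration values.

```python
# Hypothetical example: decide whether a surface looks chondritic
# (undifferentiated) from an element ratio. The reference ratio and
# cutoff are placeholders, not NEAR mission calibration values.
CHONDRITIC_FE_SI = 1.0   # assumed reference Fe/Si abundance ratio

def looks_undifferentiated(fe_si, depletion_cutoff=0.5):
    """True if Fe/Si is close to the chondritic reference.

    Metal-silicate differentiation would strip iron into the core,
    leaving surface Fe/Si well below the chondritic value.
    """
    return fe_si >= depletion_cutoff * CHONDRITIC_FE_SI

print(looks_undifferentiated(0.9))   # chondrite-like surface -> True
print(looks_undifferentiated(0.2))   # iron-depleted surface  -> False
```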
There are complications, of course -- some meteorites are interpreted as products of partial melting or differentiation, and NEAR Shoemaker will need more data to explore that possibility.
Why do we talk so much of differentiation? It is a milestone in the evolution of all the terrestrial planets (Mercury, Venus, Earth and its Moon, and Mars), which are differentiated bodies.
However, the ~500 km asteroid 4 Vesta is also differentiated and perhaps deserves to be considered as another terrestrial planet (only historical accident truly disqualifies Vesta as a planet).
It would be satisfying, but untrue, to say that planets must be differentiated bodies -- Pluto is called a planet, but we do not know if it is differentiated.
In any case, larger bodies tend to be more readily differentiated, because they trap heat and because they have stronger gravity, but we do not know how small a body can be and still become differentiated, nor do we know how large one can be and still avoid differentiation.
There is at least one asteroid even larger than Vesta, 1 Ceres, that is believed to be undifferentiated.
Hence, we can say from the NEAR Shoemaker x-ray data that Eros is not completely differentiated, and that if Eros was once part of a much larger parent body, that parent was also undifferentiated.
We cannot say whether this parent body ever was or was not larger than Vesta. There are many forks in the road to becoming a planet, and we have yet to see most of them.
NEAR Shoemaker Shifting Momentum
Laurel - June 19, 2000 - While it slowly orbits asteroid Eros, NEAR Shoemaker wages somewhat of a shoving match with the sun. Solar radiation constantly torques the small satellite -- which fights this force with four internal spinning wheels that act like gyroscopes, keeping the spacecraft's solar panels and scientific instruments pointed in the right direction. |
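As a rough illustration of that shoving match, the sketch below integrates a constant disturbance torque into stored wheel momentum and asks when a wheel would saturate. Every number here is invented for illustration; none comes from the NEAR spacecraft's actual specifications.

```python
# Toy model of a reaction wheel absorbing the angular momentum imparted
# by a steady solar-radiation torque. All numbers are invented.
SOLAR_TORQUE_NM = 5e-6     # assumed constant disturbance torque, N*m
WHEEL_CAPACITY_NMS = 4.0   # assumed wheel momentum limit, N*m*s

def hours_until_saturation(torque_nm, capacity_nms):
    """Stored momentum grows as h = torque * time; when h reaches the
    wheel's capacity, the momentum must be dumped (desaturation)."""
    return (capacity_nms / torque_nm) / 3600.0

hours = hours_until_saturation(SOLAR_TORQUE_NM, WHEEL_CAPACITY_NMS)
print(f"Wheel saturates after about {hours:.0f} hours")
```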
In the late 1800s and early 1900s pearling was a key industry across northern Australia, from the Torres Strait to Western Australia. Australia supplied most of the world’s demand for pearl shell, which was exported to Europe and the United States. The town of Broome, in the Kimberley region of Western Australia, became the center of Australia’s industry and the pearling capital of the world. The industry had a dark side, however: it was built on the exploited labor of Aboriginal and Torres Strait Islander peoples and migrant workers from Asia.
Pearling has a long history in Australia. Aboriginal peoples along the northern coast harvested and traded pearl shell more than 20,000 years ago. European colonists became interested in pearling after finding pearl oysters in the waters near Western Australia in the 1850s. The industry took off with the discovery of a particular type of oyster off the northwestern coast in 1861. That oyster—the South Sea pearl oyster (Pinctada maxima)—is the largest species of pearl oyster in the world. At the time it was valued for its pearls but much more so for its mother-of-pearl—the beautiful material in its shell that was used to make buttons, buckles, cutlery handles, jewelry, and inlay for furniture. A settlement that grew on the coast became the town of Broome in 1883. By 1910 Broome was the largest pearling center in the world.
Pearling began in the Torres Strait in 1868 after pearl oysters were discovered there. By the mid-1870s more than 100 pearling boats were operating in the area, with more than 1,000 workers. The colony of Queensland, recognizing the value of the industry, annexed the Torres Strait Islands in 1879. The Queensland pearling industry had a setback in 1886 when many of its pearlers, fearing the depletion of oysters in the Torres Strait, left for Western Australia. The industry recovered in the 1890s.
Pearling drew many people to Broome and the Torres Strait in the late 1800s and early 1900s in search of work or fortune. People of European descent built and operated the pearling ships, called luggers. They used Aboriginal and Torres Strait Islander peoples as well as migrants from Asia to do the dangerous work of pearl diving.
Indigenous Australians made up most of the labor force in the first two decades of the industry. Pearlers rounded up local Aboriginal and Torres Strait Islander peoples and forced them to work for no pay. At first the workers gathered shells from beaches and shallow waters, a practice called dry shelling. When those supplies were depleted, boats carried the workers out to deeper waters. There, they were forced to dive to collect oysters without any breathing equipment. Pregnant women were preferred because it was believed that they had a larger lung capacity. Those naked divers—known as skin divers—experienced brutal treatment, and many drowned.
The introduction of diving suits in the 1880s changed the pearling industry. The suits enabled divers to work in deeper water and to stay underwater longer. Pearlers took advantage of that technology by shifting their workforce from Indigenous divers to more skilled divers from Asia, especially Japan. Despite the use of breathing equipment, the work was still very dangerous. Many divers died from drownings, shark attacks, or tropical cyclones. Another threat was decompression sickness, also known as the bends. That potentially crippling or fatal condition results from ascending too quickly after diving to great depths. So treacherous were the conditions that some historical sources say that half of the divers died.
By 1914 Broome supplied about 80 percent of the world’s pearl shell. It had more than 300 luggers with crews representing a wide range of ethnicities, including Japanese, Chinese, Malays, Sri Lankans, Filipinos, and Indigenous Australians. Many of the Asians were indentured workers. That means that they worked for free while paying off a debt, which was generally the cost of their transportation to Australia.
The diversity of the workforce is particularly notable considering that Australia had effectively banned Asian immigration years earlier with the White Australia Policy. After introducing the policy in 1901, the government tried to reduce the pearling industry’s reliance on Asian labor by bringing in a dozen divers from the British navy. Most of them died. To continue the supply of cheap labor from Asia, the government made pearling an exception to the policy.
The heavy reliance on Japanese divers led to problems for the pearling industry during World War II (1939–45). After Japan entered the war on the side of the Axis Powers, the Australian government put most of the country’s Japanese residents in internment camps. The loss of the Japanese divers caused a huge decline in production. The industry suffered another setback in the 1950s with the introduction of plastic buttons. As manufacturers turned to plastic as an inexpensive alternative to mother-of-pearl, the market for pearl shell collapsed. The shift had an especially devastating effect on the industry in the Torres Strait.
Today, Australia’s pearling industry is based on the cultivation of pearls. Japanese scientists pioneered the practice, which was adopted in Broome beginning in the 1950s. Divers collect pearl oysters, especially Pinctada maxima, from the sea and bring them to oyster farms. A bead is implanted into each oyster, and the oysters are put back in the water. The beads encourage the oysters to form pearls, which are harvested to make jewelry. Broome continues to be a world leader in the production of South Sea pearls. The pearling industry has also had a lasting impact on the town’s ethnic mix. Descendants of the Asian immigrants attracted by the industry still make up a large part of Broome’s population. |
Graphene is a two-dimensional carbon allotrope. It is composed of carbon atoms positioned in a hexagonal design, which can be said to resemble chicken wire.
A single layer of carbon atoms arranged in such a honeycomb structure forms a single graphene sheet. Several sheets stacked one on top of the other are regarded as multi-layer graphene, up to the point where the material becomes graphite (usually over about 30 layers, although clear standardization is severely lacking at the moment). Graphite, a 3D crystal composed of weakly coupled graphene layers, is a relatively common material - used in pencil tips, batteries and more.
In graphene, each carbon atom is covalently bonded to three other carbon atoms. Thanks to the strength of the covalent bonds between carbon atoms, graphene boasts great stability and a very high tensile strength (the force with which you can stretch something before it breaks). Since graphene is flat, every atom is on the surface and is accessible from both sides, so there is more interaction with surrounding molecules. Also, the carbon atoms are bonded to only three other atoms, although they have the capability to bond to a fourth atom. This capability, combined with the aforementioned tensile strength and high surface-area-to-volume ratio of graphene, may make it appealing for use in composite materials. Graphene also enjoys electron mobility that is higher than that of any known material, and researchers are developing methods to use this property in electronics.
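The honeycomb geometry described above is easy to generate in a few lines. This sketch builds atom positions for a small graphene patch from the standard two-atom unit cell; the 0.142 nm carbon-carbon bond length is the commonly quoted value, while the patch size and names are arbitrary choices for illustration.

```python
import math

A = 0.142  # carbon-carbon bond length in graphene, nm (commonly quoted)

def honeycomb(nx, ny):
    """Generate atom positions for a small graphene patch.

    The honeycomb lattice is a triangular lattice with a two-atom basis;
    each atom ends up with exactly three nearest neighbours at distance A.
    """
    a1 = (1.5 * A, math.sqrt(3) / 2 * A)   # lattice vectors
    a2 = (1.5 * A, -math.sqrt(3) / 2 * A)
    basis = [(0.0, 0.0), (A, 0.0)]          # the two atoms of the unit cell
    atoms = []
    for i in range(nx):
        for j in range(ny):
            for bx, by in basis:
                atoms.append((i * a1[0] + j * a2[0] + bx,
                              i * a1[1] + j * a2[1] + by))
    return atoms

print(len(honeycomb(3, 3)))  # 18 atoms in a 3x3 patch of unit cells
```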
Using graphene, it should someday be possible to make transistors and other electronic devices that are much thinner than devices made of traditional materials, and this is only one example of graphene’s potential in the electronics field. Since graphene is electrically conductive, transparent, strong, and flexible, it may also be an attractive material for use in touch screens. Graphene also has very high thermal conductivity and so, could be used to dissipate heat from electronic circuits.
Graphene as the basis of other carbon structures
Graphene can be a parent form for many carbon structures, like the above-mentioned graphite, carbon nanotubes (which can be viewed as rolled-up sheets of graphene formed into tubes) and buckyballs (spherical, cage-like structures made from graphene with some hexagonal rings replaced by pentagonal ones).
Graphene is one of the first and most famous examples of a 2D crystal. Two-dimensional materials and systems are fundamentally different from three-dimensional ones in many ways. Graphene can be used as a model system for studying two-dimensional physics and chemistry in general, and so has been attracting much academic interest since its isolation in 2004. It is also considered to have tremendous potential for a myriad of applications, like next-gen batteries, sensors, solar cells and more - thanks to a wide array of properties, some of which have been already mentioned in this article, like excellent electrical and thermal conductivity, mechanical strength, unique optical properties and more. |
As the incidence angle of the ERS SAR is oblique (23°) to the local mean angle of the ocean surface, there is almost no direct specular reflection except at very high sea states.

It is therefore assumed that, to a first approximation, Bragg resonance is the primary mechanism for backscattering radar pulses.
The Bragg equation defines the ocean wavelengths for Bragg scattering as a function of radar wavelength and incidence angle: λs = λr / (2 sin θi), where λr is the radar wavelength and θi is the incidence angle.
The short Bragg-scale waves are formed in response to wind stress. If the sea surface is rippled by a light breeze with no long waves present, the radar backscatter is due to the component of the wave spectrum which resonates with the radar wavelength.
The threshold windspeed value for the C-band waves is estimated to be about 3.25 m/s at 10 meters above the surface. The Bragg resonant wave has its crest nominally at right angles to the range direction.
For surface waves with crests at an angle ϕ to the radar line-of-sight, the Bragg scattering criterion is λ's = λr / (2 sin θi cos ϕ), where λ's is the wavelength of the surface waves propagating at angle ϕ to the radar line-of-sight.
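Plugging in the ERS numbers makes the relation concrete. The sketch below assumes the C-band wavelength of about 5.66 cm (5.3 GHz) and the 23° incidence angle quoted above; for waves propagating along the line of sight (ϕ = 0), the resonant ocean wavelength comes out near 7 cm.

```python
import math

RADAR_WAVELENGTH_M = 0.0566   # ERS C-band, ~5.3 GHz
INCIDENCE_DEG = 23.0          # ERS SAR incidence angle

def bragg_wavelength(radar_wl_m, incidence_deg, phi_deg=0.0):
    """Resonant surface wavelength: lambda_s = lambda_r / (2 sin(theta) cos(phi))."""
    theta = math.radians(incidence_deg)
    phi = math.radians(phi_deg)
    return radar_wl_m / (2.0 * math.sin(theta) * math.cos(phi))

# Waves propagating along the radar line-of-sight (phi = 0):
wl = bragg_wavelength(RADAR_WAVELENGTH_M, INCIDENCE_DEG)
print(f"{wl * 100:.1f} cm")  # ~7.2 cm
```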
The SAR directly images the spatial distribution of the Bragg-scale waves. The spatial distribution may be affected by longer gravity waves, through tilt modulation, hydrodynamic modulation and velocity bunching.

Moreover, variable wind speed, changes in stratification in the atmospheric boundary layer, and variable currents associated with upper ocean circulation features such as fronts, eddies, internal waves and bottom topography affect the Bragg waves.
Why Do We Yawn?
The short answer is that no one really knows.
The long answer is that no one really knows, but there are plenty of interesting theories:
1. The idea that we yawn to get rid of carbon dioxide and take in more oxygen has been disproved by research, but persists as the "common wisdom" answer. According to this theory, people breathe more slowly when they're bored or tired and less oxygen gets to the lungs. As CO2 builds up in the blood, the brain reflexively prompts a deep, oxygen-rich breath.
The problem with this theory is a 1987 study by Dr. Robert Provine, who is regarded as the world's foremost yawn expert. Provine set up an experiment in which volunteers breathed one of four gases that contained varying ratios of CO2 to O2 for 30 minutes. Normal air contains 20.95% oxygen and 0.03% carbon dioxide, but neither of the gases in the experiment with higher concentrations of CO2 (3% and 5%) caused the research subjects to yawn more.
2. Last year, a team of researchers at the University of Albany proposed that the purpose of yawning is to cool the brain. They conducted an experiment similar to Provine's and again found that raising or lowering oxygen and carbon dioxide levels in the blood did not change the amount or length of yawns.
Subsequent experiments focused on two well-established brain cooling mechanisms: nasal breathing and forehead cooling. When you breathe through your nose, it cools the blood vessels in the nasal cavity and sends that cooler blood to the brain. Likewise, when you cool your forehead, the veins there, some of which are directly connected to the brain, deliver cooler blood. The researchers found that their test subjects with warm or room temperature towels pressed against their heads yawned more than those with cold towels. Subjects who breathed through their noses during the experiment did not yawn at all.
The researchers said their evidence suggests that taking in a big gulp of air with a yawn cools the brain and maintains mental efficiency.
3. Another theory says that yawning has more to do with sociology than physiology and also tackles the question of contagious yawning.
Almost all vertebrates yawn spontaneously, but only humans, chimps and macaques yawn as a result of watching another individual do it. Given that these are social creatures that live in groups, the contagious yawn may have evolved as a way to coordinate behavior and maintain group vigilance. When one individual yawned, the group took that as evidence that their brain temperature was up and their mental efficiency was down. If all members of the group then yawned, the overall level of vigilance in the group was enhanced. In humans, who have color-coded charts to signal how vigilant they should be, yawns may still be contagious as a vestigial response.
While yawns are still largely a mystery, here are some things we know for certain:
"¢ The average yawn lasts about six seconds.
"¢ In humans, the earliest occurrence of a yawn happens about 11 weeks after conception "“ while we're still in the womb.
"¢ Your heart rate can rise as much as 30% during a yawn.
"¢ 55% of people will yawn within five minutes of seeing someone else yawn.
"¢ Blind people yawn more after hearing an audio tape of people yawning.
"¢ Reading or even thinking about yawning can cause you to yawn.
"¢ While researching and writing this story, I yawned 37 times. |
Beg – Low Int
In this lesson, students study the form and use of the present progressive tense. They will be engaged by the speaking and acting exercises and challenged by the exercises that combine the simple present and present progressive tenses. Non-action verbs are also covered.
For teaching teens and adults, use our Grammar Practice Worksheets lesson on the present progressive. |
Laminated glass typically consists of two sheets of glass with an interlayer between them. This configuration makes the glass stronger, but if it does break, the pieces stay together, making it safer than normal glass.
These safety and strength qualities make it the ideal material for glazing vulnerable areas such as car windscreens, skylight windows and areas that need to withstand hurricane winds.
If the glass breaks, it normally holds together in the frame while it cracks. This cracking can be a slow process if the initial damage to the window was minimal. The glass can also help to prevent the ingress of objects, and anyone near the breakage is protected from shattering glass causing injury.
The interlayer is usually made from polyvinyl butyral, or PVB. Normal float glass is used; often each glass layer is 3mm thick and the PVB layer is 0.38mm. The glass is therefore sometimes known as 6.38 laminated glass.
Instead of normal glass, toughened glass and polycarbonate can also be used to increase strength - for bullet proof glass, for instance. Such configurations can increase the thickness of the glass to as much as 10mm.
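The 6.38 naming convention is just the summed nominal thicknesses of the plies and the interlayer, as this small sketch shows. The laminate_code helper and the thicker second configuration are hypothetical examples, not an industry standard API.

```python
# Sketch of the naming convention: the designation is the summed nominal
# thickness of the glass plies plus the interlayer(s) between them.
def laminate_code(glass_mm, interlayer_mm=0.38):
    """Assumes one interlayer between each pair of glass plies."""
    total = sum(glass_mm) + interlayer_mm * (len(glass_mm) - 1)
    return f"{total:.2f} laminated glass"

print(laminate_code([3.0, 3.0]))        # -> 6.38 laminated glass
print(laminate_code([5.0, 5.0], 0.76))  # a thicker, assumed configuration
```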
Apart from strength and safety, laminated glass has improved noise reduction qualities because the PVB dampens sounds. The glass also filters out 99% of harmful UV sunlight.
How to Cut Laminated Glass
The traditional method of cutting the glass is first to cut the glass layers, then pour a flammable liquid (typically methylated spirits) onto the PVB layer and set it alight.
As this can be dangerous, there are a number of safer options now available. A better alternative is to use specially designed laminated-glass cutting tables or vertically inclined saws.
Life Cycle of a Lime Tree
The lime tree is part of the citrus family. Considered a subtropical tree, limes grow in warmer areas of the world where temperatures seldom drop below freezing. The life cycle of the lime tree begins with a single seed.
Germination and Growth
Lime seeds are oval, light tan in colour and between five and six mm in diameter. The seed contains the genetic material in an embryo that, when conditions are right, will germinate and begin to grow. Initially the lime seed, once sprouted, uses the stored starches within the seed to grow. Later the seedling forms a root system that begins to draw water and nutrients from the soil. Leaves form and the seedling carries on photosynthesis to produce energy that allows the plant to continue to grow.
After three to five years, the lime tree reaches maturity. Lime trees are typically small plants, seldom reaching over 20 feet tall with a relatively compact canopy and rounded form. Once mature, the tree then flowers to reproduce, which is the mark of an angiosperm.
To reproduce, lime trees grow clusters of small, white, waxy flowers that are very fragrant. Each flower contains both the male and female organs of the plant. The anther, the male part of the flower, produces a fine dust-like substance called pollen that contains sperm. The female part of the plant is called the pistil and consists of the stigma, style and ovary.
The pollen produced by the anthers is transferred to the stigma, usually by an insect pollinator, such as a honey bee. Once the pollen contacts the stigma at the end of the pistil, the sperm enter the stigma, travel down the style and into the ovary where they fertilise the ovules inside.
Once fertilised, the ovules grow into seeds. The ovary of the flower forms a seed pod. In the case of the lime tree, this is a slightly oval fruit with divided sections filled with tiny cells that contain a sour juice. These sections protect the seeds and are surrounded by a thin, shiny, green rind.
When the fruit ripens, it is either eaten by humans or animals that inadvertently drop the seeds on the ground, or the fruit simply falls to the ground and rots, releasing the seeds. Once in contact with the soil, the seeds germinate and sprout, repeating the life cycle.
A tiny, cold rock, Pluto seems almost incapable of having an atmosphere. Between its small size and its distant location, the dwarf planet seems unlikely to have what it takes. Yet when NASA's New ...
Pluto's atmosphere is roughly 90% nitrogen, and 10% other complex molecules such as methane. In this respect the composition of Pluto's atmosphere resembles Earth's, which is about 78% nitrogen. The complex molecules probably come from radiation, which drives the creation of new molecules from material at Pluto's surface.
What Is Pluto's Atmosphere Made up Of? When Pluto has an atmosphere, it is thought to be made up of methane, nitrogen and carbon monoxide in gaseous form. This atmosphere outgases from the ice on Pluto's surface when it is closest to the sun, which warms the planet and causes the ice to sublimate.
The atmosphere of Pluto is the tenuous layer of gases surrounding Pluto. It consists mainly of nitrogen (N2), with minor amounts of methane (CH4) and carbon monoxide (CO), all of which are vaporized from their ices on Pluto's surface. It contains layered haze, probably consisting of heavier compounds which form from these gases due to high-energy radiation.
What is the atmosphere of Earth made of? Earth's atmosphere is 78% nitrogen, 21% oxygen, 0.9% argon, and 0.03% carbon dioxide with very small percentages of other elements. Our atmosphere also contains water vapor.
The thermosphere is the second-highest layer of Earth's atmosphere. It extends from the mesopause (which separates it from the mesosphere) at an altitude of about 80 km (50 mi; 260,000 ft) up to the thermopause at an altitude range of 500–1000 km (310–620 mi).
All kinds of gases make up the atmosphere, but a few stand out in our day-to-day lives, such as oxygen, carbon dioxide, and ozone. You hear about carbon dioxide polluting the atmosphere on the news, in papers, in books, and from your parents, friends, or teachers, but it isn't all bad.
The atmosphere is divided into five layers. It is thickest near the surface and thins out with height until it eventually merges with space. 1) The troposphere is the first layer above the surface and contains half of the Earth's atmosphere. Weather occurs in this layer. 2) Many jet aircraft fly in ...
Yes, that's right, Pluto does have an atmosphere. Well, the Pluto atmosphere is not the ocean of air we have here on Earth, but Pluto's thin envelope of gases does surround the dwarf planet for ...
Mars is a planet that shows climate change on a large scale. Although its atmosphere used to be thick enough for water to run on the surface, today that water is either scarce or non-existent. The ... |
Thanks to our trustee Prof. Roger Downie, from The University of Glasgow, for this month's Croaking Science article.
January’s ‘Croaking science’ summarised Dawson et al.’s (2016) report on amphibian species held in zoos worldwide. They found only 6.2% of globally threatened amphibian species held in zoos in 2014, compared with 15.9% of birds, 23% of mammals and 38% of reptiles, despite the fact that overall, amphibians are the most threatened of these groups. Even the 6.2% figure may be optimistic, since the research did not assess the viability and sustainability of the populations held. The Amphibian Conservation Action Plan (ACAP) developed by IUCN’s Amphibian Specialist Group (Gascon et al. 2007) regarded ex situ captive breeding programmes for threatened species as a crucial part of the Plan, and the Amphibian Ark (AArk) was founded to co-ordinate and promote this activity (Pavajeau et al. 2008). However, given the modest number of threatened species with established ex situ breeding programmes almost a decade into the ACAP, we need to take a realistic look at the capacity of zoos to meet the need…and to examine the alternatives.
Most adult amphibians are small animals, compared, say, with charismatic mammals such as big cats, so their space needs are modest and therefore relatively cheap. However, in other ways, amphibians are problematic for zoos. Most species are nocturnal, so are inactive when visitors come by, not good for attracting people to visit the zoo, and therefore explaining the popularity in zoos of the colourful and day-active poison arrow frogs, irrespective of their conservation status.

Another problem is their reproductive potential; many amphibian species produce hundreds or thousands of eggs at a time, so once a zoo has succeeded in encouraging a pair to breed, it may soon be faced with the need to house hundreds of young, first as tadpoles, later as juveniles and adults.

Then there is food; most adult amphibians eat only live food, mainly insects, but for most threatened species, it is not known whether they have any special dietary needs, and it may take some time to discover that though individuals survive and grow on a diet of easily provided insects such as crickets, they do not thrive reproductively. Research shows that micronutrients such as carotenoids can be crucial to the development of reproductive success (Ogilvy et al. 2012) but this work has been done on few species so far.

And then there is behaviour: there is a common misconception that amphibian behaviour is essentially instinctual, fixed and inflexible, with little learning involved. However, research has shown that at least some amphibians show complex learning, including the marking out and defence of territories and the ability to navigate home after complex journeys through forest (Pasukonis et al. 2015). This widespread misconception has led to the view that the behavioural needs of amphibians in captivity need little consideration, as distinct from birds and mammals where zookeepers have long accepted the need for what is known as behavioural and environmental enrichment if animals are to be kept in good psychological health.

Enrichment involves the design of enclosures where animals are encouraged to lead the active and complex lives that they would have in nature. As Michaels et al.(2014) found, there has been remarkably little investigation so far of enrichment provision for amphibians; they reviewed a few reports of different ways to provide food that encouraged active foraging, and limited research on the impact of different enclosure furnishings, but little else. Enrichment is regarded as important in birds and mammals both on welfare grounds and in the context of ex situ conservation which aims to release captive bred animals to the wild. This aim can only succeed if the animals are capable of surviving and reproducing in the wild, and this requires the acquisition of learned survival skills. There is no reason to expect the successful captive breeding/release of amphibians to lack these requirements.
All in all, we lack knowledge of both the physiological and behavioural needs of many of the most threatened amphibian species. Zoos can be places where these needs can be researched, but more emphasis is needed on such investigations. Are there alternatives to ex situ conservation in the battle to avoid further amphibian extinctions? Look out for further ‘Croaking Science’ articles.
Dawson,J. et al.(2016) Assessing the global zoo response to the amphibian crisis through 20-year trends in captive collections. Conservation Biology (already published on-line)
Gascon, C. et al.(2007) Amphibian conservation action plan. IUCN/SSC amphibian specialist group. Gland, Switzerland
Michaels, C. et al.(2014) The importance of enrichment for advancing amphibian welfare and conservation goals; a review of a neglected topic. Amphibian and Reptile Conservation 8, 7-23
Ogilvy,V. et al.(2012) A brighter future for frogs? The influence of carotenoids on the health, development and reproductive success of the red-eye tree frog. Animal Conservation 15, 480-488
Pasukonis, A. et al.(2015) Poison frogs rely on experience to find the way home in the rainforest. Biology Letters 10, 20140642
Pavajeau, l. et al.(2008) Amphibian Ark and the Year of the Frog campaign. International Zoo Yearbook 42, 24-29 |
The history of automation reveals that when jobs are automated, more prosperity results.
Automation has become somewhat of a dirty word recently. Fear of mass unemployment and widening inequality has led to calls for greater regulations of technology companies, expanded programs of redistribution, and even a “robot tax” to discourage adoption of labor-saving technologies. These fears, however, focus too much on the short-term disruption that creative destruction brings, ignoring the long-term opportunities for human advancement that comes along with it.
Throughout history, technologies that automate labor have been crucial to emancipating people from grueling work and giving them more opportunity to pursue fulfilling careers. While inventions like the printing press reduced the demand for calligraphers, they simultaneously increased the opportunity to write and spread one's message, leading to far greater social opportunity.
Thomas Davenport of MIT and Julia Kirby of Harvard University Press, who have worked on the implications of automation, argue that there have been three broad eras of automation starting with the Industrial Revolution, then the Computer Age, and now the age of Artificial Intelligence that we are living through.
The first of these eras automated tasks that were dirty and dangerous. As the industrial revolution urbanized the United States and Europe, people were able to move from uncertain and low-productivity agricultural work to stable and high-productivity factory employment. Innovations in agriculture enabled more food to be produced with less work, much of which had been extremely strenuous.
Those looking for work went to urban factories where, as a result of steam engines, increased iron production, and power looms, textiles could be produced by unskilled laborers at fantastic speeds, as opposed to slowly by the hands of a skilled artisan. While this adaptation caused social unrest among the artisanal classes who lost their high status, it enabled people to afford luxuries that were once restricted and enter lines of work they could not previously have dreamed of.
As cities grew with the migration of people seeking quality work, ideas also bounced around. Edward Glaeser, a Harvard economist, notes that when cities double in size, productivity per capita goes up 15 percent. People who share more time with each other and work in close proximity share more ideas and are able to more easily turn them into reality. This is why the First Industrial Revolution led, after a brief pause of adaptation, into the second—or technological—revolution, which introduced railroads, petroleum products, mass-produced steel, electricity, and the car, among a whole host of other significant inventions. It’s hard to imagine this explosion in innovation being possible in 1790, when 90 percent of the labor force were farmers.
Automation of this kind, known as skill-unbiased automation because the new technologies make things easier to make or do than before, characterized the era of the Industrial Revolutions. It enabled greater opportunity for otherwise low-productivity workers to earn a living and enjoy leisure, which in turn spurred massive social movements. The high school movement, which pushed for greater investment in human capital at the turn of the 20th century, is hard to imagine being brought about without the reduced need for demanding physical work.
The impacts of these new technologies were impossible to predict as they were introduced, but human ingenuity harnessed their creative potential. Take for example the automobile, the introduction of which into all spheres of life threatened the jobs of those who had made a living off the massive horse industry.
In 1890, there were 13,800 companies that built horse-drawn carriages. Combined with the industry surrounding raising of horses, maintenance, food, cleaning the streets of their urine and feces, and all the other associated tasks, the horse seemed vital to American industry. Henry Ford’s assembly line in 1913 did not, however, destroy the American economy by reducing the demand for horses. It instead enabled a sprawling automotive industry that eventually led to some of the world’s largest corporations that became central to American life. In addition, it enabled faster, lower-cost travel, which increased the growth of cities, the productivity of people, and enabled freer movement than before.
By the 1950s the second age of automation began, with machines taking away the dull tasks of life. Routine and clerical work began being reduced through the introduction of improved telecommunications networks, punch cards, and airline kiosks. Information technology advanced rapidly, leading to falling costs of computing and increased use of software to expedite rote work. When the Internet was born, the cost of information fell dramatically, reducing the time-consuming and laborious work of research. The implications of the “knowledge economy” that this birthed are so profound that we have yet to fully understand them.
Unlike the industrial revolutions, the computer age is characterized by skill-biased technologies. Instead of making once-difficult tasks easier for people to do, they make once-boring and repetitive tasks more knowledge-based. The introduction of the ATM took away the routine part of a teller’s job which involved counting money and updating books, and replaced them with the more cognitive tasks of understanding customer needs and being a salesman. This has greatly improved the returns for the capable while reducing opportunities for those that lack the newly in-demand skills.
The current age of automation, with artificial intelligence technologies that reduce the need for human prediction, are similarly skill-biased. By reducing the need to process information to make speedy conclusions, such as in translating speech to text in a foreign language, they increase the value of tasks that involve judgment and social skills. Fears over this skill-bias, however, neglect the means by which people have always harnessed technology.
Personal computers and smartphones are skill-biased technologies that have greatly improved the day-to-day lives of people by increasing the efficiency of the tasks that they do and increasing their interconnectedness. They have enabled people to become lifelong learners and increased the ability of people to adapt to changes in their social surroundings. Artificial intelligence is an extension of these innovations and magnifies the benefits that information technologies have brought. Transitioning to a world in which different skills are valued might be difficult, but in the long run, everybody benefits.
A primary source in the sciences is usually a report on the results of an experiment by the person or group who performed it. They are usually published as scientific articles. Primary scientific articles contain high-level vocabulary and will usually present original data, often displayed in tables or charts.
The scientist reports the results of his or her own research. It is not a comment on someone else’s research, although the scientist may refer to someone else’s work in the body of the paper to illustrate the points he/she is trying to prove or disprove. Most scientific journals that are peer-reviewed are likely to contain primary literature. Peer-review means that a panel of experts will review all articles submitted for publication before they are accepted by the journal.
In a primary research article, you will typically see many or all of the following elements clearly presented: an abstract, an introduction, a methods (or materials and methods) section, results presenting original data in tables or figures, a discussion or conclusion, and a list of references.
The presence of these components indicates that the author is presenting new data and ideas.
Research articles ARE primary sources. They contain the original account and data collected from a specific experiment, study, etc. You can identify a research article by looking for some of the following: a detailed methods section, original data presented in tables or charts, statistical analysis of the results, and first-person statements such as "we found" or "our results show."
Not to be confused with a "peer reviewed journal," review articles are an attempt by one or more writers to sum up the current state of the research on a particular topic. Ideally, the writer searches for everything relevant to the topic, and then sorts it all out into a coherent view of the "state of the art" as it now stands. Review articles will teach you about: the major findings to date, the key researchers and landmark articles in the field, and the questions that remain open.
Review Articles are virtual gold mines if you want to find out what the key articles are for a given topic. Unlike research articles, review articles are good places to get a basic idea about a topic. However, review articles are NOT primary sources. |
Integrating Mathematics, Science and Language: An Instructional Program is a two-volume curriculum and resources guide developed by Paso Partners – a partnership of three public schools, an institution of higher education, and SEDL specialists.
The resource is designed to help elementary school teachers organize their classrooms and instructional activities in order to increase achievement of Hispanic primary-grade children whose first language is not English. The guide offers a curriculum plan, instructional strategies and activities, suggested teacher and student materials and assessment procedures that focus on the acquisition of:
* higher-order thinking skills to apply newly learned knowledge and understanding;
* understanding of relations between mathematics and science concepts;
* knowledge, i.e., specific items of information and understanding of relevant concepts; and
* language to gain and communicate knowledge and understanding.
Although written for children whose first language is Spanish, the lesson plans are useful for any teacher. Check them out here.
Yes FREE!!! Woo hoo! Need I say it? :) |
The environment plays a big role in our survival on Earth. Trees, animals, plants, and other vegetation make up our habitat. Living organisms maintain the cycles that produce a healthy ecosystem. It is therefore necessary to raise awareness of climate change mitigation and adaptation.
Climate change is a shift in weather patterns and associated transformations in seas, land surfaces, and ice sheets that arise over time frames of centuries or longer. It is a worldwide issue that respects no boundaries, and fighting it requires an organized effort by all nations. The driver of this shift is global warming, which has adverse implications for natural, biological, and social systems, among many other impacts.
Global warming is linked to the greenhouse effect, a natural phenomenon by which the atmosphere retains the right amount of the sun's heat. Water vapor is the most significant greenhouse gas in terms of its contribution to warming; its quantities change little, and individual molecules remain in the air for only a few days. The greenhouse effect enables the planet to retain the conditions needed to host life.
The issue is that human practices intensify the greenhouse effect, causing the planet's temperature to rise much further. Rising temperatures have catastrophic implications, putting the preservation of life on Earth, including human life, at risk. Harmful effects of climate change include the shrinking of sea ice at the poles, which in turn raises ocean levels, causing floods and damaging coastal areas, with remote island nations in danger of vanishing.
The atmosphere of the Earth has been changing over time. Climate change also increases the occurrence of destructive weather events, floods, and droughts. The destruction of plant and animal populations and the pollution of rivers and streams are widespread, along with mass migrations and losses in food supply chains and financial assets, particularly in developing nations.
Researchers are concerned that natural temperature variation is now dominated by rapid human-induced climatic change, which has severe consequences for the stability of the Earth's climate. There are differing degrees of uncertainty about the scale of prospective effects, but the changes may lead to water scarcity, radically transform food-production conditions, and boost the number of fatalities from floods, storms, and heatwaves.
Warming is anticipated to boost the incidence of severe weather events, although connecting any single event to warming is complex. Poorer nations, which are less equipped to cope with rapid change, will struggle the most. The climate is expected to continue changing over the next decade and beyond.
The scale of climatic change over the coming years depends mainly on the quantity of heat-absorbing gases emitted and the sensitivity of the Earth's climate. Responding to the issue involves two approaches. The first is mitigation: lowering emissions and stabilizing the atmospheric concentrations of heat-trapping greenhouse gases.
The second is adaptation to global changes, which is already progressing. It is crucial to make clear that climate change can never be ignored. We can mitigate its impacts and adapt to its implications, and we can combat it by implementing interventions that help slow it down.
Facts about Red Shift discuss a useful topic in physics. Red shift takes place when the wavelength of the electromagnetic radiation coming from an object increases, shifting the light toward the red end of the spectrum. When light gets redder, its wavelength has increased, which corresponds to lower photon energy and lower frequency. Let us get other interesting facts about red shift below:
Facts about Red Shift 1: the occurrence of red shift
The occurrence of red shift is characterized by the movement of a light source away from the observer. The cosmological red shift is the most prominent example.
Facts about Red Shift 2: the gravitational red shift
The gravitational redshift is observed when electromagnetic radiation moves out of a gravitational field.
Facts about Red Shift 3: the contrasting phenomenon
The opposite phenomenon of redshift is called blueshift. The latter occurs when an object emitting light moves toward the observer.
See Also: (10 Facts about Radio Waves)
Facts about Red Shift 4: the common term
People more often use the term redshift. That is why a negative redshift is sometimes used to describe the blueshift phenomenon.
Facts about Red Shift 5: the application of redshift and blueshift
The blueshift and redshift are very important in the development of technology, for both have been used to develop radar and Doppler radar guns.
Facts about Red Shift 6: the value of red shift
In physics, the value of red shift is denoted by the letter z. The red shift of a nearby object can be calculated using the special relativistic red shift formula. In other cases, such as red shifts associated with the Big Bang and black holes, the general relativistic red shift formula is used.
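Here is a short sketch of the definition of z and the special relativistic inversion to a recession speed. The H-alpha rest wavelength of 656.3 nm is a standard reference value; the observed wavelength is made up for illustration.

```python
def redshift_z(observed_nm, emitted_nm):
    """z = (lambda_observed - lambda_emitted) / lambda_emitted;
    positive z is a redshift, negative z a blueshift."""
    return (observed_nm - emitted_nm) / emitted_nm

def recession_speed_fraction_c(z):
    """Special relativistic Doppler inversion:
    beta = ((1 + z)**2 - 1) / ((1 + z)**2 + 1), as a fraction of c."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

# H-alpha line (rest wavelength 656.3 nm) observed at an assumed 700 nm:
z = redshift_z(observed_nm=700.0, emitted_nm=656.3)
print(f"z = {z:.3f}, v = {recession_speed_fraction_c(z):.3f} c")
```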
Facts about Red Shift 7: the types of red shifts
There are three major types of redshift: cosmological, gravitational, and relativistic (Doppler) redshift. All of them are described by frame transformation laws.
Facts about Red Shift 8: the physical processes
Optical and scattering effects are examples of physical processes that can shift the wavelength of electromagnetic radiation and thereby mimic a redshift.
Check Also: (10 Facts about Rain Gauge)
Facts about Red Shift 9: the brief history of red shift
The history of redshift can be traced back to the development of wave mechanics in the nineteenth century.
Facts about Red Shift 10: the Doppler Effect
Redshift is closely linked with the Doppler effect. The physical effect was first explained by Christian Doppler in 1842.
Are you impressed after reading facts about red shift? |
This week on We are Living in the Future:
Scientists in Japan set a new world record for the number of clones produced from a single cell, having cloned 581 mice over 25 consecutive rounds of cloning. If their techniques can be used elsewhere, science may have the tools to make ideal genetic test subjects: a single shared genome, so that experiments reveal only the variables under study. The biggest obstacle to extended cloning in the past appears to have been genetic abnormalities in the starter cell, which are magnified with each round of cloning. With a few genetic tweaks, every mouse across the 25 rounds was healthy and able to reproduce normally.
Today's touch screens treat each touch the same, but the underlying hardware can detect differences in current flow from one person to another. That capability may be exploited in future touchscreen devices so that the screen itself can differentiate between users, potentially reducing the need for password protection. Considered a biometric, this "capacitive fingerprinting" could be applied beyond tablets to other objects that could use identification, such as doorknobs and furniture, though these uses are still theoretical.
Electricity flows along wires like cars along roadways, but unlike roadways, we can paint conductive material onto other surfaces. Until recently this did not produce efficient circuits; then scientists at the University of Michigan developed a technique that aligns semiconducting polymers using a liquid brushed onto a surface. The polymers follow the brush stroke, creating a small network of material across the surface. Using this technique, the researchers were able to make a simple transistor, showing that such polymers may be a replacement for silicon, which is expensive and requires high temperatures and significant energy expenditure to process. Because the polymer is a liquid at room temperature, it may also be possible to print circuits using ink-jet techniques, improving the speed and accuracy of electronics prototyping. |
JOHN BEGNAUD: Plants' leaves can play important role in landscape decisions
Many of us are still raking up leaves in preparation for spring. The importance of leaves is often overlooked as we spend time disposing of them and sometimes cussing them. But they play an integral part in plant survival and can be very useful in developing attractive landscapes.
We seldom choose plants in landscapes by the form of their leaf, but when we do it can be striking. Some plants have broadened, wide leaves, some may have small leaves, and some may have long, grassy-type linear leaves. The human eye perceives the size and shape of plant leaves in different ways.
Leaves have evolved over many years to perform the function of capturing sunlight and manufacturing food for plants' survival. Leaves help plants adapt to their natural environment. We often relocate plants to a landscape that may be far from where they grew wild. A few general leaf characteristics can be associated with water-conserving plants and can be helpful in plant selection on the edge of the desert.
Waxy-surfaced leaves are usually more tolerant to dry conditions. The outer cuticle layer can reduce the loss of water through transpiration. This is true for plants such as ligustrum, privet and old fashioned euonymus, which we can find around old, untended homesteads.
Large leaves usually have more openings than small leaves, so there is more opportunity to lose water through transpiration. Large leaf surfaces also make it harder for the plant to maintain leaf temperature, and summertime wilting or scorch can be the result. We see this when the Eastern redbud is chosen over the Texas or Mexican redbud, which have smaller leaves with a waxy cuticle.
The combination of small, waxy leaves can give a plant a great ability to survive in our climate on low water once established. The best example is one of our most popular small trees, the tree yaupon. This East Texas native can be found in landscapes all over West Texas. Selected for its evergreen leaves and red winter berries, this drought-hardy specimen is particularly attractive now while most parts of the landscape are brown.
Leaves with a hairy surface are useful in deflecting sunlight, thereby making the leaf surface cooler than that of a fully exposed smooth leaf. This is one of the reasons our Texas sage can survive with little or no irrigation.
When ornamental grasses are allowed to grow to their mature height they are very good at surviving on little water. Their bunched-together linear leaves are great at shading and slowing wind movement, which reduces leaf water loss.
The contrast in leaf shapes and sizes when used together in landscapes can be very pleasing to the eye and becomes more natural looking. Making sure that like specimens of similar leaves are used in groupings is also helpful in creating a good look.
John Begnaud is a retired Tom Green County Extension agent for horticulture. Contact him at [email protected]. |
Gulfs—Can Evolution Bridge Them?
FOSSILS give tangible evidence of the varieties of life that existed long before man’s arrival. But they have not produced the expected backing for the evolutionary view of how life began or how new kinds got started thereafter. Commenting on the lack of transitional fossils to bridge the biological gaps, Francis Hitching observes: "The curious thing is that there is a consistency about the fossil gaps: the fossils go missing in all the important places."1
2 The important places he refers to are the gaps between the major divisions of animal life. An example of this is that fish are thought to have evolved from the invertebrates, creatures without a backbone. "Fish jump into the fossil record," Hitching says, "seemingly from nowhere: mysteriously, suddenly, full formed."2 Zoologist N. J. Berrill comments on his own evolutionary explanation of how the fish arrived, by saying: "In a sense this account is science fiction."3
3 Evolutionary theory presumes that fish became amphibians, some amphibians became reptiles, from the reptiles came both mammals and birds, and eventually some mammals became men. The previous chapter has shown that the fossil record does not support these claims. This chapter will concentrate on the magnitude of the assumed transitional steps. As you read on, consider the likelihood of such changes happening spontaneously by undirected chance.
The Gulf Between Fish and Amphibian
4 It was the backbone that distinguished the fish from the invertebrates. This backbone would have had to undergo major modifications for the fish to become amphibian, that is, a creature that could live both in the water and on land. A pelvis had to be added, but no fossil fish are known that show how the pelvis of amphibians developed. In some amphibians, such as frogs and toads, the entire backbone would have had to change beyond recognition. Also, skull bones are different. In addition, in the forming of amphibians, evolution requires fish fins to become jointed limbs with wrists and toes, accompanied by major alterations in muscles and nerves. Gills must change to lungs. In fish, blood is pumped by a two-chambered heart, but in amphibians by a three-chambered heart.
5 To bridge the gap between fish and amphibian, the sense of hearing would have had to undergo a radical change. In general, fish receive sound through their bodies, but most toads and frogs have eardrums. Tongues would also have to change. No fish has an extendable tongue, but amphibians such as toads do. Amphibian eyes have the added ability to blink, since they have a membrane they pass over their eyeballs, keeping them clean.
6 Strenuous efforts have been made to link the amphibians to some fish ancestor, but without success. The lungfish had been a favorite candidate, since, in addition to gills, it has a swim bladder, which can be used for breathing when it is temporarily out of the water. Says the book The Fishes: "It is tempting to think they might have some direct connection with the amphibians which led to the land-living vertebrates. But they do not; they are a separate group entirely."4 David Attenborough disqualifies both the lungfish and the coelacanth "because the bones of their skulls are so different from those of the first fossil amphibians that the one cannot be derived from the other."5
Gulf Between Amphibian and Reptile
7 Trying to bridge the gap between amphibian and reptile poses other serious problems. A most difficult one is the origin of the shelled egg. Creatures prior to reptiles laid their soft, jellylike eggs in water, where the eggs were fertilized externally. Reptiles are land based and lay their eggs on land, but the developing embryos inside them must still be in a watery environment. The shelled egg was the answer. But it also required a major change in the process of fertilization: It called for internal fertilization, before the egg is surrounded by a shell. To accomplish this involved new sexual organs, new mating procedures and new instincts—all of which constitute a vast gulf between amphibian and reptile.
8 Enclosing the egg in a shell made necessary further remarkable changes in order to make possible the development of a reptile and, finally, its release from the shell. For example, within the shell there is the need for various membranes and sacs, such as the amnion. This holds in the fluid in which the embryo grows. The Reptiles describes another membrane called the allantois: "The allantois receives and stores embryonic waste, serving as a sort of bladder. It also has blood vessels that pick up oxygen that passes through the shell and conduct it to the embryo."6
9 Evolution has not accounted for other complex differences involved. Embryos in fish and amphibian eggs release their wastes in the surrounding water as soluble urea. But urea within the shelled eggs of reptiles would kill the embryos. So, in the shelled egg a major chemical change is made: The wastes, insoluble uric acid, are stored within the allantois membrane. Consider this also: The egg yolk is food for the growing reptile embryo, enabling it to develop fully before emerging from the shell—unlike amphibians, which do not hatch in the adult form. And to get out of the shell, the embryo is distinctive in having an egg tooth, to help it break out of its prison.
10 Much more is needed to bridge the gap between amphibian and reptile, but these examples show that undirected chance just cannot account for all the many complex changes required to bridge that wide gulf. No wonder evolutionist Archie Carr lamented: "One of the frustrating features of the fossil record of vertebrate history is that it shows so little about the evolution of reptiles during their earliest days, when the shelled egg was developing."
Gulf Between Reptile and Bird
11 Reptiles are cold-blooded animals, meaning that their internal temperature will either increase or decrease depending upon the outside temperature. Birds, on the other hand, are warm-blooded; their bodies maintain a relatively constant internal temperature regardless of the temperature outside. To solve the puzzle of how warm-blooded birds came from cold-blooded reptiles, some evolutionists now say that some of the dinosaurs (which were reptiles) were warm-blooded. But the general view is still as Robert Jastrow observes: "Dinosaurs, like all reptiles, were cold-blooded animals."8
12 Lecomte du Noüy, the French evolutionist, said concerning the belief that warm-blooded birds came from cold-blooded reptiles: "This stands out today as one of the greatest puzzles of evolution." He also made the admission that birds have "all the unsatisfactory characteristics of absolute creation"9—unsatisfactory, that is, to the theory of evolution.
13 While it is true that both reptiles and birds lay eggs, only birds must incubate theirs. They are designed for it. Many birds have a brood spot on their breast, an area that does not have any feathers and that contains a network of blood vessels, to give warmth for the eggs. Some birds have no brood spot, but they pull out the feathers from their breast. Also, for birds to incubate the eggs would require evolution to provide them with new instincts—for building the nest, for hatching the eggs and for feeding the young—very selfless, altruistic, considerate behaviors involving skill, hard work and deliberate exposure to danger. All of this represents a wide gap between reptiles and birds. But there is much more.
14 Feathers are unique to birds. Supposedly, reptilian scales just happened to become these amazing structures. Out from the shaft of a feather are rows of barbs. Each barb has many barbules, and each barbule has hundreds of barbicels and hooklets. A microscopic examination of one pigeon feather revealed "several hundred thousand barbules and millions of barbicels and hooklets." These hooks hold all the parts of a feather together to make flat surfaces or vanes. Nothing excels the feather as an airfoil, and few substances equal it as an insulator. A bird the size of a swan has some 25,000 feathers.
15 If the barbs of these feathers become separated, they are combed with the beak. The beak applies pressure as the barbs pass through it, and the hooks on the barbules link together like the teeth of a zipper. Most birds have an oil gland at the base of the tail from which they take oil to condition each feather. Some birds have no oil gland but instead have special feathers that fray at their tips to produce a fine talclike dust for conditioning their feathers. And feathers usually are renewed by molting once a year.
16 Knowing all of this about the feather, consider this rather astonishing effort to explain its development: "How did this structural marvel evolve? It takes no great stretch of imagination to envisage a feather as a modified scale, basically like that of a reptile—a longish scale loosely attached, whose outer edges frayed and spread out until it evolved into the highly complex structure that it is today."11 But do you think such an explanation is truly scientific? Or does it read more like science fiction?
17 Consider further the design of the bird for flight. The bird’s bones are thin and hollow, unlike the reptile’s solid ones. Yet strength is required for flight, so inside the bird’s bones there are struts, like the braces inside of airplane wings. This design of the bones serves another purpose: It helps to explain another exclusive marvel of birds—their respiratory system.
18 Muscular wings beating for hours or even days in flight generate much heat, yet, without sweat glands for cooling, the bird copes with the problem—it has an air-cooled "engine." A system of air sacs reach into almost every important part of the body, even into the hollow bones, and body heat is relieved by this internal circulation of air. Also, because of these air sacs, birds extract oxygen from air much more efficiently than any other vertebrate. How is this done?
19 In reptiles and mammals, the lungs take in and give out air, like bellows that alternately fill and empty. But in birds there is a constant flow of fresh air going through the lungs, during both inhaling and exhaling. Simply put, the system works like this: When the bird inhales, the air goes to certain air sacs; these serve as bellows to push the air into the lungs. From the lungs the air goes into other air sacs, and these eventually expel it. This means that there is a stream of fresh air constantly going through the lungs in one direction, much like water flowing through a sponge. The blood in the capillaries of the lungs is flowing in the opposite direction. It is this countercurrent between air and blood that makes the bird’s respiratory system exceptional. Because of it, birds can breathe the thin air of high altitudes, flying at over 20,000 feet for days on end as they migrate thousands of miles.
20 Other features widen the gulf between bird and reptile. Eyesight is one. From eagles to warblers, there are eyes like telescopes and eyes like magnifying glasses. Birds have more sensory cells in their eyes than have any other living things. Also, the feet of birds are different. When they come down to roost, tendons automatically lock their toes around the branch. And they have only four toes instead of the reptile’s five. Additionally, they have no vocal cords, but they have a syrinx out of which come melodious songs like those of the nightingales and mockingbirds. Consider too, that reptiles have a three-chambered heart; a bird’s heart has four chambers. Beaks also set birds apart from reptiles: beaks that serve as nutcrackers, beaks that filter food from muddy water, beaks that hammer out holes in trees, crossbill beaks that open up pinecones—the variety seems endless. And yet the beak, with such specialized design, is said to have evolved by chance from the nose of a reptile! Does such an explanation seem credible to you?
21 At one time evolutionists believed that Archaeopteryx, meaning "ancient wing" or "ancient bird," was a link between reptile and bird. But now, many do not. Its fossilized remains reveal perfectly formed feathers on aerodynamically designed wings capable of flight. Its wing and leg bones were thin and hollow. Its supposed reptilian features are found in birds today. And it does not predate birds, because fossils of other birds have been found in rocks of the same period as Archaeopteryx.12
Gulf Between Reptile and Mammal
22 Major differences leave a wide gulf between reptiles and mammals. The very name "mammal" points up one big difference: the existence of mammary glands that give milk for the young, which are born alive. Theodosius Dobzhansky suggested that these milk glands "may be modified sweat glands."13 But reptiles do not even have sweat glands. Moreover, sweat glands give off waste products, not food. And unlike baby reptiles, the mammalian young have both the instincts and the muscles to suck the milk from their mother.
23 Mammals have other features, also, that are not found in reptiles. Mammalian mothers have highly complex placentas for the nourishment and development of their unborn young. Reptiles do not. There is no diaphragm in reptiles, but mammals have a diaphragm that separates the thorax from the abdomen. The organ of Corti in the ears of mammals is not found in reptilian ears. This tiny complex organ has 20,000 rods and 30,000 nerve endings. Mammals maintain a constant body temperature, whereas reptiles do not.
24 Mammals also have three bones in their ears, while reptiles have only one. Where did the two "extras" come from? Evolutionary theory attempts to explain it as follows: Reptiles have at least four bones in the lower jaw, whereas mammals have only one; so, when reptiles became mammals there was supposedly a reshuffling of bones; some from the reptile’s lower jaw moved to the mammal’s middle ear to make the three bones there and, in the process, left only one for the mammal’s lower jaw. However, the problem with this line of reasoning is that there is no fossil evidence whatsoever to support it. It is merely wishful conjecture.
25 Another problem involving bones: Reptilian legs are anchored at the side of the body so that the belly is on or very near the ground. But in mammals the legs are under the body and raise it off the ground. Regarding this difference, Dobzhansky commented: "This change, minor though it may seem, has necessitated widespread alterations of the skeleton and the musculature." He then acknowledged another major difference between reptiles and mammals: "Mammals have greatly elaborated their teeth. Instead of the simple peg-like teeth of the reptile, there is a great variety of mammalian teeth adapted for nipping, grasping, piercing, cutting, pounding, or grinding food."14
26 One last item: When the amphibian supposedly evolved into a reptile, the wastes eliminated were noted to have changed from urea to uric acid. But when the reptile became a mammal there was a reversal. Mammals went back to the amphibian way, eliminating wastes as urea. In effect, evolution went backward—something that theoretically it is not supposed to do.
The Greatest Gulf of All
27 Physically, man fits the general definition of a mammal. However, one evolutionist stated: "No more tragic mistake could be made than to consider man ‘merely an animal.’ Man is unique; he differs from all other animals in many properties, such as speech, tradition, culture, and an enormously extended period of growth and parental care."15
28 What sets man apart from all other creatures on earth is his brain. The information stored in some 100 billion neurons of the human brain would fill about 20 million volumes! The power of abstract thought and of speech sets man far apart from any animal, and the ability to record accumulating knowledge is one of man’s most remarkable characteristics. Use of this knowledge has enabled him to surpass all other living kinds on earth—even to the point of going to the moon and back. Truly, as one scientist said, man’s brain "is different and immeasurably more complicated than anything else in the known universe."16
29 Another feature that makes the gulf between man and animal the greatest one of all is man’s moral and spiritual values, which stem from such qualities as love, justice, wisdom, power, mercy. This is alluded to in Genesis when it says that man is made ‘in the image and likeness of God.’ And it is the gulf between man and animal that is the greatest chasm of all.—Genesis 1:26.
30 Thus, vast differences exist between the major divisions of life. Many new structures, programmed instincts and qualities separate them. Is it reasonable to think they could have originated by means of undirected chance happenings? As we have seen, the fossil evidence does not support that view. No fossils can be found to bridge the gaps. As Hoyle and Wickramasinghe say: "Intermediate forms are missing from the fossil record. Now we see why, essentially because there were no intermediate forms."17 For those whose ears are open to hear, the fossil record is saying: "Special creation." |
The main reason is the same reason we often prefer to use integer fractions instead of fixed-precision decimals. With rational fractions, (1/3) times 3 is always 1. (1/3) plus (2/3) is always 1. (1/3) times 2 is (2/3).
Why? Because integer fractions are exact, just like integers are exact.
But with fixed-precision real numbers -- it's not so pretty. If (1/3) is .33333, then 3 times (1/3) will not be 1. And if (2/3) is .66666, then (1/3)+(2/3) will not be one. But if (2/3) is .66667, then (1/3) times 2 will not be (2/3) and 1 minus (1/3) will not be (2/3).
And, of course, you can't fix this by using more places. No number of decimal digits will allow you to represent (1/3) exactly.
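To make this concrete, here is a quick illustration (a minimal Python sketch, chosen just for demonstration; any language with a rational number type behaves the same way):

from fractions import Fraction

# Exact rational arithmetic: the naive identities always hold
print(Fraction(1, 3) * 3 == 1)               # True
print(Fraction(1, 3) + Fraction(2, 3) == 1)  # True

# Fixed-precision binary floating point: some identities break
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
print(1 - 1/3 == 2/3)    # False: the two sides round differently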
Floating point is a fixed-precision real format, much like my fixed-precision decimals above. It doesn't always follow the naive rules you might expect. See the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
To answer your question, to a first approximation, you should use integers whenever you possibly can and use floating point numbers only when you have to. And you should always remember that floating point numbers have limited precision and comparing two floating point numbers to see if they are equal can give results you might not expect. |
EUREKA project E! 3424 RECAN has developed a range of unique and highly specific monoclonal and polyclonal antibodies, the proteins produced in the blood which counteract bacteria, viruses or cancerous cells. This was achieved by first producing a number of recombinant proteins which are important components of cellular signalling pathways. These proteins themselves have direct uses in immunisation and experimental studies. A further key advance is the incorporation of novel fluorochrome dyes with specific monoclonal antibodies, which can then be used in the diagnosis of leukaemia and rheumatic diseases, as well as in oncology and haematology research.
The human immune system protects the body from disease by identifying and destroying the agents of disease: bacteria, viruses, and also its own cells if they become transformed into a potentially cancerous tumour. The immune system depends on the activity of antibodies, which are naturally produced within its white blood cells. The structure of antibodies has many millions of variations, each capable of recognising and marking a specific antigen, for example from a specific bacterium, so that bacteria of that strain can be identified and destroyed by other types of white blood cells. If a molecule from a specific bacterium binds to a receptor protein on the surface of the white blood cell, the protein, which is an important component of the signalling pathway, triggers a response within the cell. In addition, antibodies can serve as an extremely useful research and diagnostic tool, as they can bind with great specificity and sensitivity to their target structures and can then be visualised by staining with specific dyes.
The Antibodies fabric
Modern molecular techniques now enable in vitro production of some of the receptor proteins. The RECAN project used recombinant techniques to produce them: combining defined DNA sequences with the DNA of bacteria to alter the coding for specific traits, and then harvesting the altered protein derived from that recombinant DNA.
The recombinant protein was then used to immunise test subjects using standard hybridoma technology. This involves fusing specific antibody-producing cells with cancer cells to form hybrid cell lines, growing them in tissue culture, and retaining and purifying the antibodies produced. In this project, one line of work was used to produce monoclonal antibodies, and another to produce polyclonal antibodies.
"We used a standard technology" says Professor Vaclav Horej of the Institute of Molecular Genetics in Prague (IMG). "But what's unique about our project is the products especially the monoclonal antibodies, which are of unique specificity, with great commercial potential and in some cases also useful for diagnostics."
The RECAN project has also made important advances in cytofluorometry, which is the use of specific fluorescent markers to distinguish between types of cells. The project partners developed methods to prepare monoclonal antibodies bound to several types of a new range of fluorochrome dyes. Antibodies bound to these dyes can readily be distinguished from those labelled with more conventional dyes.
One particular focus of this part of the project was to develop methodology for the immunophenotyping of leukaemia, and Exbio aims to be one of the first European companies to offer use of these novel fluorochrome dyes to screen leukaemia patients and those with rheumatoid arthritis. One of the monoclonal antibodies recognizes an important signalling protein called ZAP70, which is a characteristic marker of certain types of leukaemia and therefore can be used for diagnostic purposes. Although this method is already established, new monoclonal antibodies are still necessary to develop a standard protocol for routine diagnosis of this type of leukaemia, using the best reagents.
A ready market; a fruitful collaboration
The products generated as a result of the RECAN project are already commercially available worldwide through Exbio. The recombinant protein products can be used for immunisation in the production of antibodies, and also as specific internal standards for their production and determination. Production of the specific monoclonal and polyclonal antibodies makes it possible to target new antigens, which will contribute to the development of new immunochemical assays. Finally, the fluorescently-labelled antibodies will find numerous applications both in the diagnosis of conditions and in research studies in haematology, oncology, immunology and other areas of biomedical research.
The RECAN project made optimum use of existing techniques by applying them to a new area. In doing so, it produced a new range of products which offer significant improvements over existing possibilities. Professor Horejsi comments that collaboration was important: the project partners were in touch before RECAN began, but they were each able to contribute different skills. The IMG laboratory developed most of the recombinant proteins and used them for immunisation and the production of the hybridoma cell lines delivering the monoclonal antibodies. The Magdeburg immunology laboratory was responsible for independent testing of several of the monoclonal antibodies and evaluating their qualities in specialised immunochemical techniques. Exbio applied its unique expertise in fluorescent labelling of monoclonal antibodies. The company also prepared several batches of polyclonal antibodies and prepared them for commercialisation, including the optimisation of large-scale production, purification and stabilisation.
Contact: Piotr Pogorzelski |
A language has support for first class functions if it is possible
to use a function as a regular value, i.e. if it is possible to pass
a function to another function, or return it from a function.
In a language with first class functions, it is therefore possible
to define the concept of a higher order function: a function which
accepts another function as input, or returns one as output, or both.
Various imperative languages have support for higher order functions:
all the scripting languages, the latest version of C#, Scala,
and a few others. Still, functional languages have better support,
and higher order functions are used in those languages much more
than in imperative languages. This is especially true for languages
such as ML and Haskell, which support curried functions out of the
box: in such languages all functions are really unary functions (i.e. they
accept a single argument) and functions of n arguments are actually
unary functions returning closures. In Scheme this behavior can be
emulated with macros. Here is an example of how one
could define curried functions in Scheme:
(def-syntax curried-lambda
  (syntax-match ()
    (sub (curried-lambda () b b* ...)
         #'(begin b b* ...))
    (sub (curried-lambda (x x* ...) b b* ...)
         #'(lambda (x) (curried-lambda (x* ...) b b* ...)))))
(def-syntax (define-curried (f x ...) b b* ...)
#'(define f (curried-lambda (x ...) b b* ...)))
define-curried defines a function with (apparently) n arguments
as a unary function returning a closure, i.e. a function with (apparently)
n-1 arguments, which in turn is a unary function returning a closure
with n-2 arguments, and so on, until it returns a unary function.
For instance, the following add function
(define-curried (add x y) (+ x y))
apparently has two arguments, but actually it is a unary function
returning a unary closure:
> (add 1)
#<procedure>
> ((add 1) 2)
3
You can see how the macro works by using syntax-expand:
> (syntax-expand (curried-lambda (x y) (+ x y)))
(lambda (x) (curried-lambda (y) (+ x y)))
The internal curried-lambda has a single argument in this case
and thus expands to a regular lambda function, but you can see that
in general you will have a tower of nested lambdas, whose depth is equal
to the number of arguments.
Whereas it is possible to define curried functions in Scheme, usually
this is not very convenient, unless you are trying to
emulate ML or Haskell idioms. Out of the box, Scheme supports
functions with multiple arguments in a traditional fashion, i.e.
the same as in Python: thus, the most convenient construct is not currying,
but partial application. The Pythonistas here will certainly think
of functools.partial, a utility which was added to the standard
library starting from Python 2.5. Schemers have something similar
(but of course better) in the form of SRFI-26, i.e. the cut and cute
macros by Al Petrofsky.
Instead of spending too many words, let me show an example of how
partial function application works both in Python and in Scheme.
Here is the Python version:
>>> from functools import partial
>>> from operator import add
>>> add1 = partial(add, 1)
>>> add1(2)
3
and here is the Scheme version:
> (import (srfi-26)); assuming it is available in your implementation
> (define add1 (cut + 1 <>))
> (add1 2)
3
In Python, partial(add, 1) returns a unary callable object that adds 1
to its argument; in Scheme, (cut + 1 <>) returns a unary function
that does the same. The Scheme version is better, since the
arguments of the resulting function are immediately visible as
slots (i.e. the <> symbol). For instance
> (define greetings (cut string-append "hello " <> " and " <>))
has two slots and therefore is a function of two arguments:
> (greetings "Michele" "Mario")
"hello Michele and Mario"
It is also possible to define a variable number of arguments
by using the rest-slot symbol <...>:
> (define greetings (cut string-append "hello " <> " and " <...>))
> (display (greetings "Michele" "Mario" "\n"))
hello Michele and Mario
We can even use a slot for the function: for instance, the higher order
function apply could be implemented as (cut <> <...>).
Moreover, there is a cute macro which acts exactly as cut, with a single
difference: the arguments in cute are evaluated only once (the e stands
for evaluated), whereas cut is not safe against multiple
evaluation. In particular, if one of the argument expressions is expensive
to compute or has side effects, a function built with cut may re-evaluate
it at every call, whereas cute evaluates it once, at definition time.
A couple of commonly used higher order functions in Scheme and
other functional languages are fold-left and fold-right.
They entered the R6RS standard, but they are also available from SRFI-1,
so you can rely on them even if you are using an R5RS Scheme.
fold-left and fold-right will remind Pythonistas of reduce,
which is also a folding function. However, it is well known that Guido
dislikes it and nowadays reduce is no longer a builtin (as of Python 3.0);
it is still available in the functools module, though.
For some reason (probably the order of the arguments, which I cannot
remember) I cannot use reduce in Python, whereas I have fewer
problems with fold-left and fold-right in Scheme and other
functional languages. fold-left and fold-right have a
nearly identical API: both allow you to traverse a list by accumulating
values and by returning the final accumulator at the end.
For instance, if you want to sum the values of a list, here is
an idiomatic solution in Scheme (another one is (apply + '(1 2 3))):
> (fold-left + 0 '(1 2 3)) ; sum all elements starting from 0; fold-right works too
6
In general, the function in fold-left takes N + 1 arguments, where
N is the number of lists you are looping over (usually N = 1)
and the leftmost argument is the accumulator. The same is true for
fold-right, but then the rightmost argument is the accumulator.
Notice that fold-left is quite different from fold-right, since they
work in opposite order:
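For instance (reconstructed REPL examples consistent with the description below; the printed representation may vary by implementation):

> (fold-left (lambda (acc el) (cons el acc)) '() '(1 2 3))
(3 2 1)
> (fold-right (lambda (el acc) (cons el acc)) '() '(1 2 3))
(1 2 3)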
In the first case fold-left loops from left to right
(the element 1 is the first to be consed, the element 2 is the second
to be consed, and the element 3 is the last to be consed, so that
the final result is (cons 3 (cons 2 (cons 1 '()))), i.e. (3 2 1)),
whereas in the second case fold-right loops from right to left.
In order to give an example of use, here is how you could
define a flattening procedure by using fold-right:
(define (flatten lst)
  (fold-right
   (lambda (x a)
     (if (list? x) (append (flatten x) a) (cons x a)))
   '() lst))
You can check that it works with a few tests:
(test "flatten null"
(test "flatten plain"
(flatten '(a b c))
'(a b c))
(test "flatten nested"
(flatten '((a b) (c (d e) f)))
'(a b c d e f))
Here is another example, a function to remove duplicates from a list:
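(The original listing was lost here; the following is a reconstructed sketch consistent with the description below, with the name remove-dupl and the argument order assumed.)

(define (remove-dupl eq? lst)
  (reverse
   (fold-left
    (lambda (acc el)
      (if (exists (cut eq? <> el) acc)
          acc               ; el is a duplicate: keep the accumulator as is
          (cons el acc)))   ; el is new: accumulate it
    '() lst)))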
Notice the use of cut to define a unary function (cut eq? <> el)
which checks if its argument is equal - according to the provided equality
function - to a given element el. exists is one of the
list processing utilities standardized by the R6RS document.
Here is a test:
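(Again a reconstructed sketch, assuming the remove-dupl definition above:)

(test "remove dupl"
      (remove-dupl eqv? '(1 2 3 1 5 2))
      '(1 2 3 5))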
Having first class functions in a language means much more than having
map, filter or fold. Perhaps in the future I will add another episode
about advanced usage of functions, such as parsing combinators or formatting
combinators; for the moment what I said here should be enough, and
the next episode will be devoted to another typical feature of functional
languages: pattern matching.
Cute! It's fun to see examples of similar functions/techniques in different languages. And thanks for the little lesson on Python's functools.partial. My Python skills are way behind the times.
> For some reason (probabily the order of the arguments which I cannot remember) I cannot use reduce in Python, whereas I have less problems with fold-left e fold-right in Scheme and other functional languages.
Now I'm curious. Can you say more about your reduce problems?
> For some reason (probabily the order of the arguments which I cannot remember) I cannot use reduce in Python, whereas I have less problems with fold-left e fold-right in Scheme and other functional languages.
>
> Now I'm curious. Can you say more about your reduce problems?
map and filter in Python have signature map(function, sequence, ...), the same as in Scheme. reduce instead has a different signature: reduce(binary_op, sequence, seed). I would expect (as done consistently in Scheme) a signature reduce(function, seed, sequence, ...), so I get confused all the time. Also, the seed is optional in Python. Notice that I learned Scheme after Python, so it is not a matter of previous familiarity with a given syntax; it is a matter of (in)consistency. |
Listening and Responding to Others
This summary is organized around the questions found at the beginning of the chapter. See if you can answer them before reading the summary paragraphs.
1. Why is listening essential for effective communication?
We spend as much as 53 percent of our total communication time listening, yet most of us are poor listeners. We are preoccupied, distracted, or forgetful as much as 73 percent of the time we are listening and we remember less than 25 percent of what we hear. By improving our listening skills, we strengthen the foundation for shared meaning in communication and increase satisfaction with our interpersonal relationships.
2. What are the stages of the listening process?
The four stages of the listening process are (1) attending, (2) interpreting, (3) responding, and (4) remembering. The listening process begins when we actively select, or attend to, stimuli in our environment. We assign meaning to the selected stimuli in the interpretation stage of listening. Responding to a message involves any discernable reaction to a message. We respond to messages verbally and nonverbally. Finally, the remembering stage involves the retention and recall of messages.
3. What are the differences between active and passive listeners?
Active listeners frequently remember more information than passive listeners. Active listeners focus on the moment, are aware of interactions as they occur, and resist distraction in the communication situation. Passive listeners, by contrast, expend little effort in the communication process, lack focus and awareness of the interaction, and are easily distracted.
4. What are some important obstacles to effective listening?
There are many obstacles to effective listening in every communication situation. Sometimes we encounter external obstacles such as poor acoustics or distracting environmental noises. Events that occur prior to or after the interaction can also present challenges. In addition, our attitude toward the communication situation can also influence our ability to listen. Low self-esteem, preconceived attitudes, personal or emotional investment in a speaker or topic, and indifference can all diminish our ability to listen effectively.
5. What are the four types of listening goals?
The four listening goals are (1) appreciation, (2) comprehension, (3) empathy, and (4) evaluation. When our goal is appreciation, we listen for pleasure and enjoyment; when we listen for comprehension, our goal is to understand the message. Empathetic listening involves not only understanding the message, but also recognizing and supporting the feelings and emotional states of others. Finally, evaluative listening helps us render an opinion or judgment about the message.
6. What can you do to become a more effective and responsible listener?
Listening is a skill that can be improved. Some of the ways to listen responsibly and effectively that were discussed in this chapter include preparing physically and mentally to listen, taking notes, being open-minded, using perception checks, actively providing feedback, demonstrating comprehension, staying involved throughout the interaction, and organizing material and information. |
Primary History: Britain since 1948 encourages pupils to examine the developments in post-war Britain and to consider how they have contributed to today’s society. Stimulating activities cover economic developments and industrialisation, recreational and religious choices, and Britain’s relations with other communities and countries.
• Choose from a range of activities to suit your class.
• Differentiate using a variety of writing-based tasks.
• Explore history topics through creative role-plays and art and design work.
• Ideal as accessible research resources for topic work. |
Analysis: Writing Style
Even though the language in Julius Caesar is considered to be pretty straightforward, reading Caesar (or any one of Shakespeare's plays) can feel like reading a really long poem. That's because Shakespearean drama is written in a combination of verse (poetry) and prose (the way we talk normally).
We break all of this down in the paragraphs that follow, but here's what you should remember about Shakespeare's plays. The nobility and other important figures tend to speak in "blank verse," which is formal. The commoners, or "everyday Joes," tend to speak like we do, in regular old prose. (Note: The play Richard II is the one exception to this rule – it's the only Shakespeare play written entirely in verse. Even the gardeners speak poetry.)
OK, so now let's look at Julius Caesar specifically.
Blank Verse, or Unrhymed Iambic Pentameter (The Nobles)
In Julius Caesar, the noble Romans mostly speak in unrhymed "iambic pentameter," also called "blank verse." Don't let the fancy names intimidate you – it's pretty simple once you get the hang of it.
Let's start with a definition of iambic pentameter. An "iamb" is an unaccented syllable followed by an accented one (sounds like da DUM). "Penta" means "five," and "meter" refers to a regular rhythmic pattern. So "iambic pentameter" is a kind of rhythmic pattern that consists of five iambs per line. It's the most common rhythm in English poetry and sounds like five heartbeats:
da DUM da DUM da DUM da DUM da DUM.
Let's try it out on this line:
to CUT the HEAD off AND then HACK the LIMBS
Every second syllable is accented (stressed), so this is classic iambic pentameter. Since the lines have no regular rhyme scheme, we call it unrhymed iambic pentameter, or blank verse.
Prose (Commoners or "Plebeians")
Not everyone in the play speaks in verse. "Everyday Joes," as we've said, don't talk in a special rhythm – they just talk. Check out the Cobbler's smart-aleck response when a nobleman asks him about his profession:
[...] but withal I am indeed, sir, a surgeon to old
shoes; when they are in great danger, I recover them. (1.1.5)
Notice here that, even though the Cobbler doesn't speak in iambic pentameter, he's still a witty guy – he cracks a joke about what he does for a living. This kind of clever and silly banter reminds us of some of Shakespeare's "clown" figures, like the Dromio twins in The Comedy of Errors and Speed in Two Gentlemen of Verona. |
Are the fitter kids the smarter kids?
There is an increasing body of evidence indicating that fitness and physical activity promote better brain health and can increase academic performance in children.
Key points to note include:
- Regular physical activity can increase academic performance.
- Even single sessions of physical activity before a task can boost attention and memory.
- The intensity of activity is important to the level of benefit. Low intensity school PE classes may not be enough.
- Motor ability shows the strongest link to academic performance.
- Physical fitness from vigorous activity improves cognitive function in children. Executive function and brain health underlie academic ability.
1. Regular physical activity can increase academic performance
- In a 2012 review of 14 studies ranging from 50 to 12,000 students, it was concluded that there is a significant positive relationship between physical activity and academic performance. 1
- Joint research between the University of Canberra and the Australian National University in 2012 found that primary school students who keep physically active are more likely to have higher National Assessment Program – Literacy and Numeracy (NAPLAN) test scores. There was strong evidence of positive relationships at the school level between the literacy and numeracy scores and cardio-respiratory fitness. 2
- In a US study comparing 2 years of public school data of students’ academic results in Mathematics and English assessment tests (between grades 4 to 8) with corresponding results of physical fitness tests passed during Physical Education (PE) classes, it was found that the odds of passing both the Maths and English tests increased as the number of fitness tests passed increased. 3
- Evidence from a large-scale population study of 4755 students in 2013 confirmed the long-term positive impact of moderate-to-vigorous physical activity on academic achievement in adolescence.4
2. Even single sessions can boost attention and memory
- Findings from a study of 20 preadolescent children (average age 9.5 years) after a 20 minute exercise session on a treadmill showed an improvement in response accuracy and better performance on the academic achievement test than in the resting session beforehand. Results indicate that single, acute bouts of moderately-intense aerobic exercise (i.e. walking) may improve the cognitive control of attention in preadolescent children, and further support the use of moderate acute exercise as a contributing factor for increasing attention and academic performance.5
- In a study of grade 2 to grade 4 students on the effect of physical activity on concentration, it was determined that:
– for children in grades 2 to 3, a structured classroom activity or physical activity immediately before a concentration task was not detrimental;
– but children in grade 4 performed significantly better on a concentration task immediately after participating in physical activity. 6
3. The intensity of activity is important to the level of benefit
- Children who participate in high to moderate intensity physical activity benefit the most. Physical education classes in schools do not always provide this level of activity, so more aerobic activity outside of school may provide better academic performance outcomes.
- In a study comparing 67 students in Spain taking the usual 2 PE classes per week (Group 1) with students taking 4 PE classes per week (Group 2) and with another group taking 4 PE classes per week at high intensity (Group 3), it was found that cognitive performance (non-verbal and verbal ability, abstract reasoning, spatial ability, and numerical ability) and academic achievement measured through school grades (e.g. mathematics) increased more in Group 3 than in Group 1. Overall, Group 3 improved the most, with no difference in the increase between Group 1 and Group 2. This study suggests that the intensity of physical activity has an increased effect on cognitive function and academic performance. 7
- In a US study of 214 grade 6 students it was found that academic achievement was not significantly related to physical education enrollment, suggesting that “a threshold of activity intensity may be needed to bring about changes in the child that contribute to increased academic achievement.” “Improved academic performance was associated with vigorous activity obtained outside of school.” 8
4. Motor ability shows the strongest link to academic performance.
But exactly what component of physical fitness is directly linked to improved brain fitness?
- A study 9 of 2,038 Spanish children aged from 6 to 18 years tested the three areas of fitness:
a. Cardiorespiratory capacity (the measure of how well the heart and lungs can supply fuel and oxygen to the muscles during exercise), assessed through shuttle runs, also known as the “beep” test.
b. Motor ability (including speed of movement, agility and coordination), also assessed through shuttle runs.
c. Muscular strength, assessed through maximum handgrip and standing long jumps.
These were compared against end of year school grades in core subjects for academic performance.
- The results showed:
– The link between academic performance and physical fitness was strongest for motor ability suggesting that speed of movement, agility, and coordination may be more important for academic performance than aerobic fitness.
– Cardiorespiratory capacity was also linked to academic performance, but to a lesser extent.
– Muscular strength on its own was not linked to academic performance.
– The results also showed that children and adolescents who had both lower levels of cardiorespiratory capacity and motor ability had lower grades.
5. Physical fitness from vigorous activity improves cognitive function in children. Executive function and brain health underlie academic ability.
- In addition to the positive physical and mental health impact of physical activity, there is a strong belief that regular participation in physical activity is linked to enhancement of brain function and cognition, thereby positively influencing academic performance. 10 11
- Aerobically fitter children perform better in tasks associated with cognitive function.12
- Evidence suggests that maths and reading are the main subjects influenced by physical activity as they depend on mental skills that help the brain plan, organise, prioritise, remember, pay attention, reason, problem solve and execute tasks with flexibility.
- A review of numerous studies examining the effects of exercise on children’s intelligence, cognition, or academic achievement concluded that physically fitter children perform cognitive tasks more rapidly, and that relatively short and specific aerobic exercise training interventions improve executive function, a form of mental processing involving strategically-based decision making. 13
- Children participating in physical activity are better able to stay focused and remain on task in the classroom, thus enhancing the learning experience.
What does this mean for your child?
- Even though school PE classes may provide some positive influence, the degree of benefit appears to be conditional upon the rigour of the activities during class.
- It has been shown that increased aerobic fitness provides a greater improvement in academic performance through brain development in children. Physical fitness may be gained from additional school sporting activities (interschool competitions for example) and sporting activities outside of school.
- Development of motor skills through sports encompassing speed of movement, agility, and coordination may have a greater influence on academic performance, relative to cardiovascular fitness. Muscular strength does not appear to have any correlation to academic performance.
- For those athletes in senior school, a well deserved break from study doing physical activity is not only great for some downtime but will also assist in improving concentration and memory.
- So does physical activity increase academic performance? Yes, but there is still a need to study! Even with improved brain functions such as attention and memory, you still need to use them to learn.
- Singh A et al. Physical Activity and Performance at School: A Systematic Review of the Literature Including a Methodological Quality Assessment. Arch Pediatr Adolesc Med.2012;166(1):49-55. ↩
- Telford et al 2012, ‘Schools With Fitter Children Achieve Better Literacy and Numeracy Results: Evidence of a School Cultural Effect’, Pediatric Exercise Science, vol. 24, viewed 10 December 2014. ↩
- Chomitz et al (2009), Is There a Relationship Between Physical Fitness and Academic Achievement? Positive Results From Public School Children in the Northeastern United States. Journal of School Health, 79: 30–37. doi: 10.1111/j.1746-1561.2008.00371.x This study was supported in part through the US Department of Education Carol M White Physical Education Program grant Q215F041121 to the Cambridge Public School Department. ↩
- J N Booth et al, Associations between objectively measured physical activity and academic attainment in adolescents from a UK cohort. Br J Sports Med 2014;48:265-270. Published Online First 22 October 2013. doi:10.1136/bjsports-2013-092334 ↩
- Hillman CH et al. (2009) The effect of acute treadmill walking on cognitive control and academic achievement in preadolescent children. Neuroscience. 2009;159(3):1044. ↩
- Caterino MC, Polak ED. (1989) Effects of two types of activity on the performance of second-, third-, and fourth-grade students on a test of concentration Percept Mot Skills. Aug;89(1):245-8. ↩
- D. N. Ardoy, J. M. et al (2014) A Physical Education trial improves adolescents’ cognitive performance and academic achievement: the EDUFIT study Ortega Scandinavian Journal of Medicine & Science in Sports, February, 2014. 10.1111/sms.12093 ↩
- Coe, Dawn P., et al. (2006) “Effect of physical education and activity levels on academic achievement in children.” Medicine and science in sports and exercise 38.8 (2006): 1515. ↩
- Irene Esteban-Cornejo, MSc et al (2014) “Independent and combined influence of the components of physical fitness on academic performance in youth,” The Journal of Pediatrics, DOI: 10.1016/j.jpeds.2014.04.044, published by Elsevier. ↩
- Hillman CH et al (2008). Be smart, exercise your heart: exercise effects on brain and cognition. Nat Rev Neurosci. 2008;9(1):58-65 ↩
- Hillman CH et al (2011) A review of chronic and acute physical activity participation on neuroelectric measures of brain health and cognition during childhood. Prev Med. 2011 Jun;52 Suppl 1:S21-8. doi:10.1016/j.ypmed.2011.01.024. Epub 2011 Jan 31. ↩
- Davis, C.L. et al. (2007) Effects of aerobic exercise on overweight children’s cognitive functioning: a randomised controlled trial. Res. Q. Exerc. Sport. 78(5):510–519. ↩
- Tomporowski et al (2008) Exercise and Children’s Intelligence, Cognition, and Academic Achievement. Educ. Psychol. Rev. 20(2):111– 131. ↩ |
Other diodes: Diode types
The Schottky diode or Schottky Barrier diode is used in a variety of circuits.
Although it was one of the first types of diode ever made, the Schottky diode is widely used because it is able to provide a very low forward voltage drop.
As a result the Schottky barrier diode is used in a variety of applications from RF design to power rectification and many more.
Although the name used most widely for this type of diode is Schottky diode, it has also been given a number of other names that may be used from time to time. These names include surface barrier diode, Schottky barrier diode, hot carrier or even hot electron diode.
Schottky diode circuit symbol
The circuit symbol for the Schottky diode is based around the basic diode circuit symbol. The Schottky symbol is differentiated from other types of diode by the addition of the two extra legs on the bar on the symbol.
Schottky diode advantages
Schottky diodes are used in many places where other types of diode do not perform as well. They offer a number of advantages which can be utilised:
- Low turn on voltage: The turn on voltage for the diode is between 0.2 and 0.3 volts for a silicon Schottky diode, whereas a standard silicon diode has a turn on voltage of between 0.6 and 0.7 volts. This reduces resistive losses when used as a power rectifier (see the sketch after this list), and enables lower signals to be detected when used as an RF detector.
- Low junction capacitance: In view of the very small active area of the Schottky diode, the capacitance levels are very small.
- Fast recovery time: The small amount of stored charge gives a fast recovery time, which means the diode can be used for high-speed switching applications.
The advantages of the Schottky diode mean that its performance can far exceed that of other diodes in many areas; the quick calculation sketched below shows what the lower forward drop means in practice.
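To make the turn-on voltage advantage concrete, here is a minimal Python sketch of rectifier conduction loss (P ≈ Vf × I). The forward drops are the representative figures quoted above; the 10 A load current is an assumed example value, not taken from the original text:

```python
# Conduction loss of a conducting rectifier diode: P ~= Vf * I.

def conduction_loss_w(forward_drop_v: float, current_a: float) -> float:
    """Approximate power dissipated in a diode carrying a steady current."""
    return forward_drop_v * current_a

I_LOAD = 10.0  # amps (assumed example load)

print(conduction_loss_w(0.3, I_LOAD))  # Schottky diode: ~3 W
print(conduction_loss_w(0.7, I_LOAD))  # standard silicon diode: ~7 W
```

At this assumed load the Schottky dissipates roughly 4 W less, which is why smaller heat sinks suffice in the power rectifier application described below.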
Schottky diode applications
The Schottky barrier diode is widely used in the electronics industry, finding many uses as a diode rectifier. Its unique properties enable it to be used in a number of applications where other diodes would not be able to provide the same level of performance. In particular it is used in areas including:
- RF mixer and detector diode: The Schottky diode is a very useful component for radio frequency applications because of its high switching speed and high frequency capability. In view of this, Schottky barrier diodes are used in many high performance diode ring mixers. In addition, the low turn-on voltage combined with the low junction capacitance makes this type of diode ideal for use in RF detectors.
- Power rectifier: Schottky diodes are also used as high power rectifiers. Their high current density and low forward voltage drop mean that less power is wasted than if ordinary PN junction diodes were used. This increase in efficiency means that less heat has to be dissipated, and smaller heat sinks can be used, thereby saving weight and cost.
- Power OR circuits: Schottky diodes can be used in applications where a load is driven by two separate power supplies. One example may be a mains power supply and a battery supply. In these instances it is necessary that the power from one supply does not enter the other. This can be achieved using diodes. However it is important that any voltage drop across the diodes is minimised to ensure maximum efficiency. As in many other applications, this diode is ideal for this in view of its low forward voltage drop.
Schottky diodes tend to have a high reverse leakage current. This can lead to problems with any sensing circuits that may be in use. Leakage paths into high impedance circuits can give rise to false readings. This must therefore be accommodated in the circuit design.
- Solar cell applications: Solar cells are typically connected to rechargeable batteries, often lead acid batteries because power may be required 24 hours a day and the Sun is not always available. Solar cells do not like the reverse charge applied and therefore a diode is required in series with the solar cells. Any voltage drop will result in a reduction in efficiency and therefore a low voltage drop diode is needed. As in other applications, the low voltage drop of the Schottky diode is particularly useful, and as a result they are the favoured form of diode in this application.
- Clamp diode: Schottky barrier diodes may also be used as a clamp diode in a transistor circuit to speed the operation when used as a switch. Years ago they found widespread use in this application, forming a key element in the 74LS (low power Schottky) and 74S (Schottky) families of logic circuits. When used in this manner the Schottky diodes are inserted between the collector and base of the driver transistor to act as a clamp. To produce a low or logic "0" output the transistor is driven hard on, and in this situation the base collector junction in the transistor is forward biased. When the Schottky diode is present this takes most of the current and allows the turn off time of the transistor to be greatly reduced, thereby improving the speed of the circuit.
The Schottky diode or Schottky barrier diode is used in many applications. It is unusual in that it is used for both very low power signal detection and also for high power rectification. The properties of the Schottky diode make it ideal for use at both ends of the spectrum.
The Schottky diode is also used within a number of other devices from photodiodes to MESFETs. In this way, not only does this form of diode find uses in many circuits in its discrete format, but it is also an essential part of many other components and technologies as well. |
Also known as: Ebola virus infection and Viral hemorrhagic fever
- Backache (low-back pain)
- Sore throat
- Bleeding from eyes, ears, and nose
- Bleeding from the mouth and rectum (gastrointestinal bleeding)
- Eye swelling (conjunctivitis)
- Genital swelling (labia and scrotum)
- Increased feeling of pain in the skin
- Rash over the entire body that often contains blood (hemorrhagic)
- Roof of mouth looks red
Ebola hemorrhagic fever is a severe and often deadly illness that can occur in humans and primates (e.g. monkeys, gorillas).
Ebola hemorrhagic fever has made worldwide news because of its destructive potential.
Ebola hemorrhagic fever (Ebola fever) is caused by a virus belonging to the family called Filoviridae. Scientists have identified five types of Ebola virus. Four have been reported to cause disease in humans: Ebola-Zaire virus, Ebola-Sudan virus, Ebola-Ivory Coast virus, and Ebola-Bundibugyo. The human disease has so far been limited to parts of Africa.
The Reston type of Ebola virus has recently been found in the Philippines.
The disease can be passed to humans from infected animals and animal materials. Ebola can also be spread between humans by close contact with infected body fluids or through infected needles in the hospital.
Symptoms typically appear after an incubation period of about 1 week (rarely up to 2 weeks) after infection. Early symptoms include:
Late symptoms include:
There may be signs and symptoms of:
Exams and Tests
Tests used to diagnose Ebola fever include:
There is no known cure. Existing medicines that fight viruses (antivirals) do not work well against Ebola virus.
The patient is usually hospitalized and will most likely need intensive care. Supportive measures for shock include medications and fluids given through a vein.
Bleeding problems may require transfusions of platelets or fresh blood.
As many as 90% of patients die from the disease. Patients usually die from low blood pressure (shock) rather than from blood loss.
Survivors may have unusual problems, such as hair loss and sensory changes.
When to Contact a Medical Professional
Call your health care provider if you have traveled to Africa (or if you know you have been exposed to Ebola fever) and you develop symptoms of the disorder. Early diagnosis and treatment may improve the chances of survival.
Avoid areas in which there are epidemics. Wear a gown, gloves, and mask around sick patients. These precautions will greatly decrease the risk of transmission.
Aton Hymn, the most important surviving text relating to the singular worship of the Aton, a new religious ideology espoused by the ancient Egyptian king Akhenaton of the 18th dynasty. During his reign Akhenaton returned to the supremacy of the sun god, with the startling innovation that the Aton was to be the only god. To remove himself from the preeminent cult of Amon-Re at Thebes, Akhenaton built the city of Akhetaton (Tell el-Amarna) as the centre for the Aton’s worship.
The Aton Hymn, which was inscribed in several versions in the tombs of Akhetaton, describes the solar disk as the prime mover of life, whose daily rising rejuvenates all living things on earth and at whose setting all creatures go to sleep. Like some other hymns of its period, the text focuses on the world of nature and the god’s beneficent provision for it:
Men had slept like the dead; now they lift their arms in praise, birds fly, fish leap, plants bloom, and work begins. Aton creates the son in the mother’s womb, the seed in men, and has generated all life. He has distinguished the races, their natures, tongues, and skins, and fulfills the needs of all. Aton made the Nile in Egypt and rain, like a heavenly Nile, in foreign countries. He has a million forms according to the time of day and from where he is seen; yet he is always the same.
While the Aton is said to create the world for men, it seems that the ultimate goal of creation is really the king himself, whose intimate and privileged connection to his god is emphasized. Divine revelation and knowability are reserved for Akhenaton alone, and the hymn is ultimately neutral with regard to explicating the mysteries of divinity.
Certain passages of the Aton Hymn demonstrate themes shared by a wider literary tradition; portions have been compared in imagery to Psalm 104 (see Psalms). |
Melissa Savage, Granada Hills Charter High School
- Density of Gas
- Chemistry 4c. Students know how to apply the gas laws to relations between the pressure, temperature, and volume of any amount of an ideal gas or any mixture of ideal gases.
- Physics 3c. Students know the internal energy of an object includes the energy of random motion of the object's atoms and molecules, often referred to as thermal energy. The greater the temperature of the object, the greater the energy of motion of the atoms and molecules that make up the object.
Remove the staple, label and string from the teabag.
Pour out the tea.
Unfold the teabag and stretch it out.
Use your finger to turn the teabag into a cylinder.
Stand the cylinder on one of its ends on a plate on a flat surface.
Use a lighter or match to ignite the top of the tea bag cylinder.
Wait a few seconds.
Watch the rocket fly into the air!
Prior knowledge & experience:
Tea bags can't fly. Students have seen hot air balloons.
What will happen to the Tea bag when you light it on fire?
Is it possible to make a tea bag float?
The flame created by burning the teabag heated the air inside the teabag cylinder. When the air was heated energy was transferred to individual pieces of air called air molecules. The air molecules moved around more quickly and spread out to take up more space. This means that the air molecules were further apart from each other and therefore the air was less dense. The warmer, less dense air rose above the cooler, denser air.
When the teabag burned, the teabag turned into ash and smoke. The smoke lifted away and all that was left was the ash. Ash is light, so it doesn’t require much force to lift it. The rising of the less dense (heated) air inside the teabag had enough force to lift the ash of the teabag.
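The "warmer air is less dense" claim can be checked with the ideal gas law, which gives air density as ρ = PM/(RT). Here is a minimal Python sketch; sea-level pressure and the 200 °C cylinder temperature are assumed illustrative values, not measurements from the demonstration:

```python
# Density of dry air from the ideal gas law: rho = P * M / (R * T).
P = 101325.0  # Pa, sea-level atmospheric pressure (assumption)
M = 0.02897   # kg/mol, molar mass of dry air
R = 8.314     # J/(mol*K), universal gas constant

def air_density(temp_k: float) -> float:
    """Density of dry air treated as an ideal gas at pressure P."""
    return P * M / (R * temp_k)

print(air_density(293.15))  # room temperature (20 C): ~1.20 kg/m^3
print(air_density(473.15))  # assumed 200 C inside the cylinder: ~0.75 kg/m^3
```

Heating the air from room temperature to an assumed 200 °C cuts its density by roughly 40%, which is what produces the buoyant lift on the light ash.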
Common misconception: All gases have the same mass and density
Connections to Real World
Hot air balloons work in a similar way. Hot air balloons have a burner beneath a balloon envelope. The burner uses propane to heat the air inside the balloon envelope. As the air inside the balloon envelope is heated, the air inside the balloon envelope becomes less dense. As a result, the less dense (heated) air in the balloon envelope rises above the denser (cooler) air surrounding the balloon.
Floating Tea Bag |
Chimpanzees have 'at least 66 distinct gestures' they use to talk to each other
Wild chimpanzees use more than twice the number of gestures to communicate than previously thought, scientists have said.
The animals have at least 66 different mannerisms that they use to talk to each other, according to researchers from the University of St Andrews in Scotland.
It was previously thought that chimpanzees had just 30 distinct gestures, although this figure was arrived at following observations of animals in captivity.
Lead researcher Dr Catherine Hobaiter and her team spent two years analysing 120 hours of footage of chimpanzees interacting in Budongo Conservation Field Station, Uganda.
They closely studied the animals' mannerisms for repeat gestures and concluded that they have a 'large repertoire'.
'We think people previously were only seeing fractions of this,' Dr Hobaiter told the BBC. 'Because when you study the animals in captivity you don't see all their behaviour.
'You wouldn't see them hunting for monkeys, taking females away on "courtships", or encountering neighbouring groups of chimpanzees.'
The team spent so long in the chimps' company that they got to know each other and the animals 'got on with their daily lives'.
They found that the chimpanzees clearly beckoned to each other.
In one piece of footage, a mother gestures for her daughter to climb on her back; in another, a child holds another young chimp's hand to encourage it to play.
The study suggests that there is a common system of communication across the species, as opposed to there being individual gestures for each group.
Not only that, but there is a significant overlap in signals used by gorillas and orangutans.
The research is published in the journal Animal Cognition.
Scientists have previously shown that chimpanzees comfort the victims of bullies with a consoling hug and a reassuring peck on the cheek to help lower stress levels.
It was found that chimps comfort each other after fighting and are less stressed after a cuddle.
Your body is made of about 10 trillion cells. The largest human cells are about the diameter of a human hair, but most human cells are smaller -- perhaps one-tenth of the diameter of a human hair.
Run your fingers through your hair now and look at a single strand. It is not very thick -- maybe 100 microns in diameter (a micron is a millionth of a meter, so 100 microns is a tenth of a millimeter). A typical human cell might be one-tenth of the diameter of your hair (10 microns). Look down at your little toe -- it might represent 2 or 3 billion cells or so, depending on how big you are. Imagine a whole house filled with baby peas. If the house is your little toe, the peas are the cells. That's a lot of cells!
Bacteria are about the simplest cells that exist today. A bacterium is a single, self-contained, living cell. An Escherichia coli bacterium (or E. coli bacterium) is typical -- it is about one-hundredth the size of a human cell (maybe a micron long and one-tenth of a micron wide), so it is invisible without a microscope. When you get an infection, the bacteria are swimming around your big cells like little rowboats next to a large ship.
Bacteria are a lot simpler than human cells. A bacterium consists of an outer wrapper called the cell membrane, and inside the membrane is a watery fluid called the cytoplasm. Cytoplasm might be 70-percent water. The other 30 percent is filled with proteins called enzymes that the cell has manufactured, along with smaller molecules like amino acids, glucose molecules and ATP. At the center of the cell is a ball of DNA (similar to a wadded-up ball of string). If you were to stretch out this DNA into a single long strand, it would be incredibly long compared to the bacterium -- about 1000 times longer!
An E. coli bacterium has a distinctive, capsule shape. The outer portion of the cell is the cell membrane, shown here in orange. In E. coli, there are actually two closely-spaced membranes protecting the cell. Inside the membrane is the cytoplasm, made up of millions of enzymes, sugars, ATP and other molecules floating freely in water. At the center of the cell is its DNA. The DNA is like a wadded-up ball of string. There is no protection for the DNA in a bacterium -- the wadded-up ball floats in the cytoplasm roughly in the center of the cell. Attached to the outside of the cell are long strands called flagella, which propel the cell. Not all bacteria have flagella, and no human cells have them besides sperm cells.
Human cells are much more complex than bacteria. They contain a special nuclear membrane to protect the DNA, additional membranes and structures like mitochondria and Golgi bodies, and a variety of other advanced features. However, the fundamental processes are the same in bacteria and human cells, so we will start with bacteria. |
1.What are the four parts that make up the urinary system?
2.What are the two main functions of the kidneys?
3.How do the kidneys maintain fluid balance in the body?
4.How many kidneys are there in the body and where are they situated?
5.What is the name of the capsule covering the kidney?
6.Where are the adrenal glands attached?
7.Where does urine collect before passing down the ureters?
8.Give a brief outline to the kidney nephrons.
9.What are the three processes employed by the kidney in the production of urine?
10.What process controls the secretion of urine?
11.Where is the bladder situated in the body?
12.Give a brief outline to the process of urination.
13.What is the main function of the bladder?
14.What substances does urine consist of?
15. What is the condition Cystitis?
16. Explain how antidiuretic hormone (ADH) controls the amount of water removed from the body. In very hot, dry conditions (low humidity), a person can drink large amounts of fluid, but still excrete small quantities of hypertonic urine – why is this?
17. What is the term used for Kidney failure?
18. Explain what is meant by incontinence and its causes. |
Your task is to find out how many individual pieces of chocolate there are ALTOGETHER in these blocks. Use your knowledge of arrays and multiplication to find the answer. You may use a calculator to help you.
1. Work it out on paper and take a photo of your workings, OR if it is easier you may work it out using the tools on Seesaw.
2. Use the Seesaw tools to write the sum.
3. Use these and any other tools you need to explain your reasoning.
4. Remember to add your work to this activity by tapping Add in THIS activity first.
An analysis of the reflective abilities of effective and ineffective learners in computer programming
Breed, Elizabeth Alice
As a result of the interactive nature of modern programming languages, the perception has developed that proper planning of a solution, reasoned action during the process of problem solving and evaluation of the solution have become less important during computer programming. Learners often rely on the programming language to help them solve a problem, without themselves planning the solution beforehand and then using a computer language to implement the solution. This approach usually leads to using bad programming techniques, resulting in unstructured programmes or rendering the learner unable to solve the problem. The importance of continuous reflection by learners while doing a programming activity has been advocated for quite some time. The extent to which learners possess and apply the necessary knowledge, cognitive skills and meta-cognitive skills, such as reflection, contributes to effective learning in computer programming and subsequently determines their performance in computer programming. This research was done to analyse the reflective thinking activities of university students and secondary school learners, respectively, in computer programming while they are involved in programme development. The literature study investigates different views on effective learning, with special emphasis on the role of reflection in problem solving and effective learning in computer programming. It also investigates other factors that may influence effective learning in computer programming. The empirical study is aimed at determining the extent to which effective and ineffective learners are engaged in reflective thinking before beginning to code a computer programme, while they are busy working on it and after they have finished the programme. Analysis of the reflective abilities of the learners in each of these three phases of completion of a programming activity leads to inferences regarding the extent of continuous reflection necessary to enhance effective learning in computer programming. This knowledge can provide teachers with guidelines on how teaching approaches and strategies may be adapted in order to develop learners' reflective thinking skills that can enhance their effectiveness as computer programmers.
- Education |
12 Word Connotation
Suggested Course Level
Either lower level or upper level undergraduate
- Students will learn how connotations can be context or audience-dependent and sometimes lead to miscommunication.
- Students will get to know each other and see how their experiences/values impact how they interpret words.
- Print off the handout and cut the table into columns, then cut up each word. This will create five piles of 12 different words.
- Put each pile into an envelope.
- Break the class into five groups.
- Ask them to take the words out of the envelope then sort them from most to least casual.
- Do not give them any further instructions. You can decide whether students are allowed to use their phones to look up words they don’t know.
- When each group has agreed on the order of the words, get one student from each group to write the order of their words on the board.
- Ask students to identify trends: what words groups agreed on and what words have wildly different orders. Usually, every group will come up with a different ranking: often because they interpreted the word “casual” differently (a casual relationship versus a casual word).
Debrief Questions / Activities
- How did your group decide how to rank the words?
- Was there any disagreement in the group?
- Did the words mean the same to everyone?
- Why do you think different groups came up with different answers?
- If you didn’t understand a word, what did you do?
- How did you interpret the meaning of the word casual?
- What do you think this exercise tells us about miscommunication?
- Instead of using a word that relates to relationships, pick a job that has many different titles and ask students to sort them by which are the highest paying to lowest paying. For example, students often say that a secretary is lower-paid than an administrative assistant, which leads to some interesting discussions about the impact of gendered language.
Additional Resources / Supplementary Resources
Tags: negative news messages, writing mechanics, grammar, style, tone, concision, hands-on, small group, self-reflection, connotations, ice breaker, getting to know you, positive emphasis |
In these lessons, we will learn
The following table gives a summary of complementary and supplementary angles. Scroll down the page if you need more explanations about complementary and supplementary angles, videos and worksheets.
Two angles are called complementary angles if the sum of their degree measurements equals 90 degrees (right angle). One of the complementary angles is said to be the complement of the other.
The two angles do not need to be together or adjacent. They just need to add up to 90 degrees. If the two complementary angles are adjacent then they will form a right angle.
∠ABC is the complement of ∠CBD
In a right triangle, the two acute angles are complementary. This is because the sum of angles in a triangle is 180˚ and the right angle is 90˚. Therefore, the other two angles must add up to 90˚.
x and y are complementary angles. Given x = 35˚, find the value y.
x + y = 90˚
35˚ + y = 90˚
y = 90˚ – 35˚ = 55˚
Two angles are called supplementary angles if the sum of their degree measurements equals 180 degrees (straight line) . One of the supplementary angles is said to be the supplement of the other.
The two angles do not need to be together or adjacent. They just need to add up to 180 degrees. If the two supplementary angles are adjacent then they will form a straight line.
x and y are supplementary angles. Given x = 72˚, find the value y.
x + y = 180˚
72˚ + y = 180˚
y = 180˚ –72˚ = 108˚
A mnemonic to help you remember:
The C in Complementary stands for Corner, 90˚
The S in Supplementary stands for Straight, 180˚
Have a look at the following videos for further explanations of complementary angles and supplementary angles:
This video describes complementary and supplementary angles with a few example problems. It will also explain a neat trick to remember the difference between complementary and supplementary angles.
Step 1: Make sure that the angles are complementary.
Step 2: Setup a solvable equation.
Step 3: Solve the equation.
∠1 = 8x + 6
∠2 = 19x + 3
∠1 and ∠2 are complementary.
Solve for x.
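A worked solution, using the angle expressions above:

(8x + 6) + (19x + 3) = 90
27x + 9 = 90
27x = 81
x = 3

So ∠1 = 8(3) + 6 = 30˚ and ∠2 = 19(3) + 3 = 60˚, which sum to 90˚ as required.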
Complementary Word Problem
How to solve a word problem about an angle and its complement?
The measure of an angle is 43° more than its complement. Find the measure of each angle.
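One possible worked solution, letting x be the angle:

x = (90° – x) + 43°
2x = 133°
x = 66.5°

So the angle measures 66.5° and its complement measures 23.5°.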
Learn what it means for angles to be complementary and supplementary, and do a few problems to find complements and supplements for different angles.
Find the measure of the complementary angle for each of the following angles:
Find the measure of the supplementary angle for each of the following angles:
Create a system of linear equations to find the measure of an angle knowing information about its complement and supplement.
The supplement of angle y measures 12x + 4 and the complement of the angle measures 6x. What is the measure of the angle?
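A sketch of one solution path, using the fact that an angle's supplement is always 90˚ more than its complement:

(12x + 4) – 6x = 90
6x + 4 = 90
6x = 86

The complement 6x therefore measures 86˚, so y = 90˚ – 86˚ = 4˚. (Checking: the supplement is 180˚ – 4˚ = 176˚ = 12x + 4 with x = 86/6.)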
The term ‘Human Factors’ refers to the application of scientific knowledge, mostly from the human sciences of psychology, anthropology, physiology and medicine, to the design, construction, operation, management and maintenance of products and systems. The purpose of the application of this knowledge is to attempt to reduce the likelihood of human error and therefore the likelihood of negative outcomes while operating or using products or systems.
Most aircraft accidents and incidents are the result of errors (including slips and lapses) made by the people responsible for operating the aviation system. These people could be pilots, air traffic controllers, maintenance staff or executive managers of the various aviation organisations. Some of the errors committed by these people are the result of deliberate violations of rules and procedures. However, even the majority of errors resulting from violations do not come from any intent to harm anyone or commit a crime.
Some people believe that if a human is given a reasonable task to complete and they are adequately trained, then the individual should be able to repeatedly perform the task without error. However, applied research and accident investigation reports from around the world demonstrate that this view is incorrect. Competent humans conducting even simple tasks continually make errors, but in most cases, they recognise the errors they have made and correct them before any consequence of the errors is realised. In a small number of cases, they fail to either recognise the errors or fail to correct them before the consequences of the errors are realised.
Contemporary human factors application is now as much about understanding how groups of people, be they flight crew, cabin crew, maintenance staff, air traffic controllers or senior management teams operate, and why they make decisions and behave in particular ways, as it is about individuals. It is also about viewing accidents as part of the overall complex system which supported all the aspects of the operation. As such, it is about understanding how organisations manage risk and balance their safety obligations with their business imperatives.
Human factors incorporate a broad and complex body of applied knowledge aimed at more clearly understanding why errors occur.
ICAO (the International Civil Aviation Organisation) uses the SHELL model to represent the main components of human factors; SCHELL is an expanded version of this model. The SCHELL model gives an idea of the scope of human factors.
SCHELL stands for:
S = software: the procedures and other aspects of work design
C = culture: the organisational and national cultures influencing interactions
H = hardware: the equipment, tools and technology used in work
E = environment: the environmental conditions in which work occurs
L = liveware: the human aspects of the system of work
L = liveware: the interrelationships between humans at work
The SCHELL model emphasises that the whole system shapes how individuals behave. Any breakdown or mismatch between two or more components can lead to human performance problems. For example, an accident where communication breaks down between pilots in the cockpit, or engineers at shift handover, would be characterised by the SCHELL model as a liveware-liveware problem. Situations where pilots or engineers disregarded a rule would be characterised as liveware-software.
It is then not surprising that a modern definition of airmanship includes knowledge of the whole system:
“Airmanship is the consistent use of good judgment and well-developed skills to accomplish flight objectives. This consistency is founded on a cornerstone of uncompromising flight discipline and is developed through systematic skill acquisition and proficiency. A high state of situational awareness completes the airmanship picture and is obtained through knowledge of one’s self, aircraft, environment, team and risk.”
(from Redefining Airmanship. Tony Kern. 1996) |
“fair weight.” The literal Hebrew is an idiom, “a stone of peace,” but that would not make sense in English. A “stone of peace” was a just and true weight.
In the ancient world most goods were exchanged by using a balance and stone weights. A merchant would have a balance, which was usually a stick with a cord in the middle that he held on to, and on each end of the stick was a cord that went down to a pouch or pan. (The iconic image of “Lady Justice” that appears in many courthouses in the USA is a blindfolded woman holding out a balance).
Traveling merchants would carry the balance with them, and also carry their “weighing stones,” which they used in buying and selling, which were stones of different weight (1 shekel; 5 shekels; 20 shekels; etc.). The weights that were used by merchants in Old Testament times were usually made of stone; metal weights were not common.
When buying or selling, the merchant would place the item being bought or sold, for example wheat, in one pan and his weighing stones in the other pan, and adjust either the amount of wheat or the stones until the wheat and stones “balanced,” at which point the weight and thus value of the wheat was known.
Unscrupulous merchants often kept different stones in their bag that only they could easily tell apart, stones that were a little heavier for buying and stones that were a little lighter for selling, so that they bought a lot and sold a little. But that kind of dishonest dealing is an abomination to Yahweh. Yahweh commanded traders to use honest weights and measures, which gave people what they deserved in a business deal (Lev. 19:35). In ancient Israel, it was the job of the Levites to maintain the standard weights and measures that merchants could use to standardize their own weights and measures so people got a fair deal.
In modern times “balances” have been mostly replaced by “scales.” A balance is accurate, but it took considerable time and tweaking to get both sides of the balance to be the same weight so it would balance out and be level. Besides that, sometimes a person would have to buy or sell a little more or less than they really wanted because the stone weights were set amounts and the person had to add or subtract a little wheat to make the balance level out. Today, stores use scales for weighing that use different ways of producing known resistance to weight, for example, many scales use springs. Grocery stores use scales to weigh meat and vegetables. In scientific terms, a balance measures relative mass, comparing one object to another, while a scale measures the weight of an object using resistance to gravity. The subject of balances and scales can be somewhat confusing because often “balances” are called “scales,” but technically they are not.
There was enough dishonesty in ancient dealings that God spoke about being honest several different times (cp. Deut. 25:13-16; Prov. 11:1; 16:11; 20:10, 23; Ezek. 45:10; Hosea 12:7; Amos 8:5; Micah 6:11). |
Learn all about economic growth in just a few minutes! Professor Jadrian Wooten of Penn State University explains economic growth and how to calculate rates of growth.
Economic growth occurs when a nation's output is increasing over time.
When an economy grows, it increases its ability to produce goods and services. Not all goods and services are valued equally, so this ability must be measured by the value of products produced, not just the volume. Gross domestic product (GDP) measures the monetary value of final goods and services produced within a country in a specific period of time. Real gross domestic product (real GDP) is the value of GDP in constant dollars, meaning that the value of the dollar has already been corrected for inflation and therefore reflects the same ability to purchase goods and services. Economic growth is the rate of increase in real GDP from one year to the next. It occurs when a nation's GDP rises over time; it is a long-term trend as opposed to short-term fluctuations in economic output. It is generally measured as the growth rate of real GDP from one year (t) to the next (t + 1): growth rate = (real GDP in year t + 1 - real GDP in year t) ÷ (real GDP in year t) × 100%. For example, if the real GDP in the United States increased from $33 billion in 1998 to $34.5 billion in 1999, then it increased by $1.5 billion. The rate at which real GDP grew would be $1.5 billion ÷ $33 billion ≈ 4.5%.
It is important to use real GDP to measure economic growth because real GDP shows that the actual output, goods or services produced within a given time frame by a business or country, has risen. An increase in nominal GDP (the total gross domestic product expressed in current year prices) might mean that output has risen, but it could also mean that prices have gone up instead. Therefore, prices must be held constant when looking at output over time. A positive economic growth rate means that a country had an increase in its real GDP, while a negative growth rate means that output fell. Since 1930 the economic growth rate for the United States has averaged 3.3% per year, but the rate itself has varied dramatically, from a low of –12.9% in 1932 (during the Great Depression) to a maximum of 18.9% in 1942 (during the height of World War II). Since the postwar period, the economic growth rate for the United States has become more stable, relatively speaking. There have been no large spikes or dips in the growth rate because there have been no wars or economic crises of the same scale, and the development of new industries and increased worker productivity have led to consistent growth.

It is also possible to compare growth rates across countries and see the effects of different economic strategies. For example, in 2016 India had a growth rate of 7.1%, which has been attributed to a sharp increase in investment and increased worker productivity. During that same time period the United States, which has a more established economy and therefore does not see the same scale of fluctuations, experienced only 1.6% growth. Brazil, however, had a growth rate of –3.6% because of a series of scandals in the public and private sectors and high inflation that caused economic uncertainty.
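As a quick illustration of the calculation, here is a minimal Python sketch of the growth-rate formula using the example figures from the text (an illustrative helper, not code from any statistics agency):

```python
def growth_rate_pct(real_gdp_prev: float, real_gdp_next: float) -> float:
    """Percentage growth of real GDP from one year (t) to the next (t + 1)."""
    return (real_gdp_next - real_gdp_prev) / real_gdp_prev * 100.0

# The example from the text: $33 billion (1998) -> $34.5 billion (1999).
print(round(growth_rate_pct(33.0, 34.5), 1))  # ~4.5 (percent)
```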
Demand and Supply Equilibrium
Do some research online and find a newspaper article (in the past 6 months) that represents a situation where there is a change in demand or supply of a good or service. Summarize the article in your own words, then use the concepts you have learned to explain what will happen to equilibrium in the market.
You MUST ATTACH THE URL of the article.
Briefly explain the situation (in 1-2 lines).
Then spend most of your time relating the article's events to at least 1 determinant of demand (OR supply) learned in the chapter and state in what direction demand (OR supply) will shift.
Be very explicit in identifying the determinant responsible for the curve shift and in explaining why the curve will move as you predict.
As an economist, what impact do you predict this change will have on equilibrium price and quantity?
Put this together in a logical progression and be sure to use paragraphs. |
Although we cannot see black holes, we can detect or guess the presence of one by measuring its effects on objects around it. The following effects may be used:
- Mass estimates from objects orbiting a black hole or spiraling into the core
- Gravitational lens effects
- Emitted radiation
Many black holes have objects around them, and by looking at the behavior of the objects you can detect the presence of a black hole. You then use measurements of the movement of objects around a suspected black hole to calculate the black hole's mass.
What you look for is a star or a disk of gas that is behaving as though there were a large mass nearby. For example, if a visible star or disk of gas has a "wobbling" motion or spinning AND there is not a visible reason for this motion AND the invisible reason has an effect that appears to be caused by an object with a mass greater than three solar masses (too big to be a neutron star), then it is possible that a black hole is causing the motion. You then estimate the mass of the black hole by looking at the effect it has on the visible object.
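To get a feel for the arithmetic, note that an object on a circular orbit of radius r at speed v implies a central mass of M = v²r/G. Here is a minimal Python sketch with hypothetical orbit values (the numbers below are illustrative, not measurements of any real system):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the sun, kg
LIGHT_YEAR = 9.461e15  # meters

def central_mass_solar(orbit_radius_m: float, orbit_speed_m_s: float) -> float:
    """Mass implied by a circular orbit, M = v^2 * r / G, in solar masses."""
    return orbit_speed_m_s ** 2 * orbit_radius_m / G / M_SUN

# Hypothetical example: gas orbiting at 500 km/s, 0.01 light-years from the center.
print(f"{central_mass_solar(0.01 * LIGHT_YEAR, 5.0e5):.2e}")  # ~1.8e+05 solar masses
```

A mass that large confined to so small a region, with no visible star to account for it, is exactly the kind of result that makes a black hole the leading explanation.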
For example, in the core of galaxy NGC 4261, there is a brown, spiral-shaped disk that is rotating. The disk is about the size of our solar system, but weighs 1.2 billion times as much as the sun. Such a huge mass for a disk might indicate that a black hole is present within the disk.
Einstein's General Theory of Relativity predicted that gravity could bend space. This was later confirmed during a solar eclipse when a star's position was measured before, during and after the eclipse. The star's position shifted because the light from the star was bent by the sun's gravity. Therefore, an object with immense gravity (like a galaxy or black hole) between the Earth and a distant object could bend the light from the distant object into a focus, much like a lens can. This effect can be seen in the image below.
In the image, the brightening of MACHO-96-BL5 happened when a gravitational lens passed between it and the Earth. When the Hubble Space Telescope looked at the object, it saw two images of the object close together, which indicated a gravitational lens effect. The intervening object was unseen. Therefore, it was concluded that a black hole had passed between Earth and the object.
When material falls into a black hole from a companion star, it gets heated to millions of degrees Kelvin and accelerated. The superheated materials emit X-rays, which can be detected by X-ray telescopes such as the orbiting Chandra X-ray Observatory.
The star Cygnus X-1 is a strong X-ray source and is considered to be a good candidate for a black hole. As pictured above, stellar winds from the companion star, HDE 226868, blow material onto the accretion disk surrounding the black hole. As this material falls into the black hole, it emits X-rays, as seen in this image:
In addition to X-rays, black holes can also eject materials at high speeds to form jets. Many galaxies have been observed with such jets. Currently, it is thought that these galaxies have supermassive black holes (billions of solar masses) at their centers that produce the jets as well as strong radio emissions. One such example is the galaxy M87 as shown below:
It is important to remember that black holes are not cosmic vacuum cleaners -- they will not consume everything. So although we cannot see black holes, there is indirect evidence that they exist. They have been associated with time travel and worm holes and remain fascinating objects in the universe.
Originally Published: Nov 26, 2006 |
What is it?
Vibrio (all non-cholera species, including Vibrio parahaemolyticus and Vibrio vulnificus) is a curved-rod-shaped bacterium in the same family as cholera that is usually found in saltwater and linked to improperly cooked seafood. Non-cholera vibrio infections have increased in recent years, perhaps because of warming ocean water temperatures and increased salinity. The CDC estimates 45,000 cases per year in the United States, but under-reporting is suspected. Although many other types of foodborne illnesses have decreased in recent years, vibrio cases increased as much as 115% between 1998 and 2010. Coastal regions, especially the Gulf Coast, have the most reported cases.
How is it spread?
The majority of vibrio sufferers report eating seafood such as clams, oysters, crabs, and other shellfish. It can also be contracted by ingesting sea water, most likely in the hot summer or early fall months. Natural disasters like Hurricane Katrina contribute to spreading because of contaminated flood water.
Symptoms
Fever, abdominal cramps, headache, vomiting, diarrhea, bloody stool, and myalgia.
Most at risk
Young children, pregnant women, the elderly, and immunocompromised persons are most at risk (especially those with liver disease), but given the right conditions, anyone can suffer this illness.
For more information, see: |
If you’re just starting out in the artificial intelligence (AI) world, then Python is a great language to learn since most of the tools are built using it. Deep learning is a technique used to make predictions using data, and it heavily relies on neural networks. This course will show you how to build a neural network from scratch.
In a production setting, you would use a deep learning framework like TensorFlow or PyTorch instead of building your own neural network. That said, having some knowledge of how neural networks work is helpful because you can use it to better architect your deep learning models.
In this course, you’ll learn:
- What artificial intelligence is
- How both machine learning and deep learning play a role in AI
- How a neural network functions internally
- How to build a neural network from scratch using Python (see the single-neuron sketch below)
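To give a concrete taste of what "from scratch" means, here is a minimal sketch of a single artificial neuron: a weighted sum of inputs passed through a sigmoid activation. The weights and bias are made-up values for illustration (in a real network they are learned from data), and this sketch is not the course's own code:

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: weighted sum of inputs squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Made-up inputs, weights, and bias purely for illustration.
print(neuron([1.5, 0.2], [0.8, -0.4], bias=0.1))  # a value between 0 and 1
```

A full network is, at heart, many such neurons arranged in layers, with the weights adjusted during training.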
- New research details the first ever conversion of a non-magnetic material into a permanent magnet using electricity.
- The researchers seek cheaper, more plentiful magnetic materials to use in solar panels.
- Electricity and electrolytes effectively rearrange the surface chemistry of the iron sulfide.
Iron sulfide, better known as pyrite or fool's gold, could have a new lease on the high life after researchers turned it into a magnet using an electrical treatment. Physicists and chemical engineers from the University of Minnesota and elsewhere collaborated on the new research, which they say points the way toward a new kind of solar panel material made from abundant, low-cost sulfur.
The University of Minnesota explains in a statement:
“In the study, the researchers used a technique called electrolyte gating. They took the non-magnetic iron sulfide material and put it in a device in contact with an ionic solution, or electrolyte, comparable to Gatorade. They then applied as little as 1 volt (less voltage than a household battery), moved positively charged molecules to the interface between the electrolyte and the iron sulfide, and induced magnetism.”
What’s neat is how this reaction itself mimics magnetism. The iron sulfide is touched to the ionic solution and then gently electrified, and in the ensuing reaction, the positively charged (and magnetically viable) molecules gather along the electrified electrolyte surface.
Overall, it’s kind of like using an electromagnetic process, but in this case, the change is permanent and doesn’t require further current. The researchers say their findings are the first time electricity has induced a permanent change in magnetism. Lead researcher Chris Leighton explains:
"By applying the voltage, we essentially pour electrons into the material. It turns out that if you get high enough concentrations of electrons, the material wants to spontaneously become ferromagnetic, which we were able to understand with theory. This has lots of potential. Having done it with iron sulfide, we guess we can do it with other materials as well."
The experiment resulted from a research group whose two broad interests intersected at one critical point. They want to improve photovoltaic solar cell technology by broadening the number of low-cost candidate materials and technologies, and they also research the process of inducing longer-lasting magnetism in (until now) materials with at least some magnetism to begin with. That field is called magnetoionics—magneto for magnets and ionics for the way we must rearrange the ions in order to create magnetism.
Keen-eyed observers may wonder why an iron compound isn't magnetic to begin with, since iron is one of the most magnetic elements. The common word for everyday magnetism, ferromagnetism, even refers specifically to iron. But when you add other elements, in this case sulfur, the effect is diminished or extinguished completely. In fact, the University of Minnesota chemistry department explains it beautifully alongside instructions for turning iron into iron sulfide:
“Before the reaction the test tube will be strongly attracted to a magnetic field due to the ferromagnetism of the elemental iron. After the reaction, assuming you completely use up the iron, the only magnetic attraction can come from the paramagnetic attraction of the iron(II) as part of the compound. Iron(II) is a d6 ion, which, depending upon the spin state (high spin or low spin), will have five or one unpaired electrons. Regardless of the spin state, paramagnetism is a much weaker force than ferromagnetism and so there will be a much smaller attraction of the test tube to a magnetic field.”
In the past, the only way to reverse the reduction in magnetism from pure iron to iron sulfide would be to separate the elements. Now, you might dunk it in an electrified Gatorade bath instead. (Preferred flavor? Icy Charge.) |
This automation industry quiz question comes from the ISA Certified Automation Professional (CAP) certification program. ISA CAP certification provides a non-biased, third-party, objective assessment and confirmation of an automation professional’s skills. The CAP exam is focused on direction, definition, design, development/application, deployment, documentation, and support of systems, software, and equipment used in control systems, manufacturing information systems, systems integration, and operational consulting. Click this link for more information about the CAP program.
Which of the following devices measures temperature using the voltage produced at the junction of two dissimilar, joined metals?
a) resistance temperature detector (RTD)
b) bimetallic expansion thermometer
c) thermocouple
d) IR thermometer
e) none of the above
Answer A is not correct: An RTD measures temperature by correlating the resistance of the RTD element with temperature. As the temperature increases, the resistance increases.
Answer B is not correct: Bimetallic expansion thermometer converts a temperature change into mechanical displacement. The strip consists of two strips of different metals, which expand at different rates as they are heated.
Answer D is not correct: IR Thermometers infer temperature using a portion of the thermal radiation sometimes called blackbody radiation emitted by the object of measurement.
The correct answer is C, thermocouple. A thermocouple is a device consisting of two different conductors (usually metal alloys) that produce a voltage proportional to a temperature difference between the ends of the pair of conductors according to the equation:
V = a(Th-Tc)
The voltage difference, V, produced across the terminals of an open circuit made from a pair of dissimilar metals, A and B, whose two junctions are held at different temperatures, is directly proportional to the difference between the hot and cold junction temperatures, Th − Tc. A voltage or current is produced across the junctions of two different metals, caused by the diffusion of electrons from high electron density region to low electron density region. This diffusion of electrons occurs because the density of electrons is different in different metals.
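Inverting V = a(Th-Tc) for the hot-junction temperature is a one-line calculation. Here is a minimal Python sketch; the roughly constant Seebeck coefficient of 41 µV/°C is an assumed round figure (approximately right for a type K couple near room temperature; real instruments use calibrated reference tables instead):

```python
SEEBECK_UV_PER_C = 41.0  # uV/C, rough type K value near room temperature (assumption)

def hot_junction_temp_c(voltage_v: float, cold_junction_c: float) -> float:
    """Invert V = a * (Th - Tc) for Th, assuming a constant coefficient a."""
    return cold_junction_c + (voltage_v * 1e6) / SEEBECK_UV_PER_C

# Example: 4.1 mV measured with the cold junction held at 25 C.
print(hot_junction_temp_c(0.0041, 25.0))  # ~125 C
```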
Reference: Bela Liptak, Instrument Engineers’ Handbook, Fourth Edition
Source: ISA News |
Nathalia Lilas May 30, 2021 Worksheets
The addition, subtraction & number counting worksheets are meant for improving & developing the IQ skills of the kids, while English comprehension & grammar worksheets are provided to train students in constructing error-free sentences. The 1st grade worksheets can also be used by parents to bridge between kindergarten lessons & the 2nd grade program. It happens on many occasions that children forget or feel unable to recollect the lessons learnt at the previous grade. In such situations, 1st grade worksheets become indispensable documents for parents as well as students.
The Worksheet Deactivate event is similar to the Worksheet Activate event; it also works on many different versions of Excel. This event is designed to run a script of code when a user selects any other worksheet. This event has no required or optional parameters. If the first worksheet is selected and someone selects another worksheet, then the first worksheet will run its Deactivate event. This can be used to hide unused worksheets after they are done being used. The Worksheet Before Double Click event will run a script of code when a user double clicks on that specific worksheet. This event will work on all versions of Excel. This can be useful if you want to run a macro for a certain cell every time you double click on that cell. You can also use this event to load a macro any time you double click anywhere in the worksheet.
Teachers and parents basically are the primary users of worksheets. It is an effective tool in helping children learn how to write. There are many types of writing worksheets. There is the cursive writing worksheets and the kindergarten worksheets. The latter is more on letter writing and number writing. This is typically given to kids of aged four to seven to first teach them how to write. Through these worksheets, they learn muscle control in their fingers and wrist by repeatedly following the strokes of writing each letter. These writing worksheets have traceable patterns of the different strokes of writing letters. By tracing these patterns, kids slowly learn how a letter is structured.
Although preschool workbooks are popular, many parents like the convenience that is associated with printable preschool worksheets. The only problem with printable preschool worksheets is that they can use up a lot ink, especially if the selected preschool worksheets are in colored ink. To save yourself money and printer ink, you may want to search for preschool worksheets that are in black and white. If you are unable to find black and white preschool worksheets that you like, you can still go for the colored ones; however, you may want to consider adjusting your printer settings. Instead of having them print off in colored ink, you may want to adjust your settings to gray scale.
Microsoft Excel Worksheets have built-in events that can run visual basic code based on certain action taken by the user within that specific worksheet. These worksheet events allow the users of Microsoft Excel to run code after activating a worksheet or before deactivating a worksheet. These events also allow users the ability to run a code every time a user changes data within a cell or selects a new range of cells. Newer versions of Excel have even created events that allow code to be run when tables and pivot tables are updated or refreshed. The Worksheet Activate event is a Microsoft Excel event that works on many different versions of Excel. It is designed to run a script of code every time the specific worksheet is activated. This event has no required or optional parameters. This event can be used to show a hidden worksheet upon its activation, or it can pop up a login or data form.
1st grade worksheets are used for helping kids learning in the first grade in primary schools. These worksheets are offered by many charitable & commercial organizations through their internet portals. The worksheets provide study materials to kids in a funky & innovative way, to magnetize them towards learning. These worksheets are provided for all subjects present in a 1st grade school curriculum covering English, math, science & many others. Worksheets are also provided for developing & nurturing the thinking skills of a student too in the form of crossword puzzle & thinking skill worksheets. Moreover, many 1st grade worksheet providers as well provide time counting & calendar worksheets as well to test the IQ of the kids.
Epilepsy affects the brain, causing repeated and uncontrollable surges of electrical activity resulting in alteration in brain function and, ultimately, seizures. Having a seizure does not necessarily mean a child has epilepsy and sometimes these seizures can be one-offs. Epilepsy can start at any age, but it is more prevalent amongst children and the elderly.
There are many different types of seizures to consider:
- The most well-known is a subtype called the tonic-clonic seizure. This starts with loss of consciousness, followed by the body becoming stiff and then 'seizing' or jerking. Tongue biting and loss of bladder control are also typical.
- Absence seizures, most commonly starting during childhood, entail a loss of expression, staring, unresponsiveness and sudden stopping of any current activity. Sometimes eye blinking can be noticed. The recovery is immediate and the child often has no recollection of the seizures.
- Focal seizures begin in one part of the brain and affect the part of the body controlled by this area. This often results in unusual movements or sensations and behaviours.
GlobMed continues to work hard to find the very best in neuro-paediatrics ensuring you and your child get access to the highest quality care. We have specialists that can help in initial diagnosis, continued treatment or novel treatments in uncontrolled epilepsy.
Treatment usually requires a multi-faceted approach with anti-epileptic medication (AED) and other treatment, helping to identify any causative and exacerbating factors.
We can arrange consultations to discuss the effect epilepsy has on your life and the impact a diagnosis can have, helping you manage each day. In addition, our network has access to state-of-the-art MRI and CT scanners and world-class testing labs encompassing pathology and genetic labs. |
A fire engine (also known in some territories as a fire truck or fire appliance) is a vehicle designed primarily for firefighting operations. The terms “fire engine” and “fire truck” are often used interchangeably; however, some fire departments and fire services use them to refer to separate and specific types of vehicles.
The primary purposes of a fire engine include transporting firefighters to an incident scene, providing water to fight a fire, and carrying other equipment needed by firefighters. Specialty appliances are used to provide hazardous-materials mitigation and technical rescue. A typical modern fire engine will carry tools for a wide range of firefighting tasks, with common equipment including a pump, a water tank, hoses, ground ladders, hand tools, self-contained breathing apparatus, and first aid kits.
Many fire engines are based on standard vehicle models (although some parts may be upgraded to meet the demands of fire-service use). They are usually equipped with audible and visual warnings, as well as communication equipment such as two-way radios and mobile computer technology.
The standard fire engine is a vehicle designed primarily for firefighting operations. The main purpose of the engine is to transport firefighters to the scene, provide a limited supply of water to fight the fire, and carry the tools, equipment and hoses needed by the firefighters. The tools carried on the fire engine vary greatly depending on many factors, including the size of the department and the type of terrain the department covers. For example, departments located near large bodies of water or rivers will probably carry some kind of water rescue equipment. Standard tools found in almost all fire engines include ladders, hydraulic rescue tools (often referred to as the jaws of life), spotlights, fire hoses, fire extinguishers, self-contained breathing apparatus, and thermal imaging cameras.
Grade 7 | Mathematics
NOTE: The mathematics standards apply to students in grade 7.
In Grade 7, instructional time should focus on four critical areas: (1) developing understanding of and applying proportional relationships; (2) developing understanding of operations with rational numbers and working with expressions and linear equations; (3) solving problems involving scale drawings and informal geometric constructions, and working with two– and three–dimensional shapes to solve problems involving area, surface area, and volume; and (4) drawing inferences about populations based on samples.

(1) Students extend their understanding of ratios and develop understanding of proportionality to solve single– and multi–step problems. Students use their understanding of ratios and proportionality to solve a wide variety of percent problems, including those involving discounts, interest, taxes, tips, and percent increase or decrease. Students solve problems about scale drawings by relating corresponding lengths between the objects or by using the fact that relationships of lengths within an object are preserved in similar objects. Students graph proportional relationships and understand the unit rate informally as a measure of the steepness of the related line, called the slope. They distinguish proportional relationships from other relationships.

(2) Students develop a unified understanding of number, recognizing fractions, decimals (that have a finite or a repeating decimal representation), and percents as different representations of rational numbers. Students extend addition, subtraction, multiplication, and division to all rational numbers, maintaining the properties of operations and the relationships between addition and subtraction, and multiplication and division. By applying these properties, and by viewing negative numbers in terms of everyday contexts (e.g., amounts owed or temperatures below zero), students explain and interpret the rules for adding, subtracting, multiplying, and dividing with negative numbers. They use the arithmetic of rational numbers as they formulate expressions and equations in one variable and use these equations to solve problems.

(3) Students continue their work with area from Grade 6, solving problems involving the area and circumference of a circle and surface area of three–dimensional objects. In preparation for work on congruence and similarity in Grade 8 they reason about relationships among two–dimensional figures using scale drawings and informal geometric constructions, and they gain familiarity with the relationships between angles formed by intersecting lines. Students work with three–dimensional figures, relating them to two–dimensional figures by examining cross–sections. They solve real–world and mathematical problems involving area, surface area, and volume of two– and three–dimensional objects composed of triangles, quadrilaterals, polygons, cubes and right prisms.

(4) Students build on their previous work with single data distributions to compare two data distributions and address questions about differences between populations. They begin informal work with random sampling to generate data sets and learn about the importance of representative samples for drawing inferences.
When Frank Drake was a boy, growing up in 1930s Chicago, his parents, observant Baptists, enrolled him in Sunday School. By the time he was 8 years old, he suspected his religion, and others around the world, were, to some extent, environmentally determined—local chance events helped shape them. He began to think the same might be true of civilization, for humans and, perhaps, aliens as well—but he thought it better to keep these thoughts to himself.
But not for long: He would go on to found S.E.T.I., the Search for Extraterrestrial Intelligence, and to lay out a simple way to estimate the number of civilizations within our galaxy that we could hope to listen in on. It’s an equation that looks like this:
N (the number of communicable civilizations in the Milky Way)
= R (the rate at which stars form)
× NEarth (the fraction of stars with Earth-sized planets on Earth-like orbits)
× FLife (the fraction of those planets that develop life)
× FIntelligence (the fraction with intelligent life)
× FCommunication (fraction that can communicate)
× L (the average civilization’s lifetime)
In short, N = R × NEarth × FLife × FIntelligence × FCommunication × L. To determine the value of N, we just need to know the other numbers.
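Written with explicit subscripts, and with purely illustrative numbers plugged in (every value below except the star-formation rate is an assumption for the sake of the arithmetic, not a measurement), the estimate works like this:

```latex
N = R_{*} \times N_{\oplus} \times f_{\mathrm{life}} \times f_{\mathrm{int}}
      \times f_{\mathrm{comm}} \times L

% e.g., R* = 2 stars/yr, N_Earth = 0.2, each f = 0.1, L = 10^4 yr:
N = 2 \times 0.2 \times 0.1 \times 0.1 \times 0.1 \times 10^{4} = 4
```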
We know that the Milky Way makes a couple of new stars per year, so R is taken care of—but that’s it. We have no idea how common life, intelligence, or the ability to communicate are. And while we may all be rooting for the average civilization’s lifetime to be very long, we have no data.
But we are making progress on NEarth (also called “Eta-Earth”). The first Earth-sized planet orbiting another sun was discovered in 2010. Thanks in large part to NASA’s Kepler Space Telescope, we now know of hundreds of Earth-sized worlds, and a handful as small as Mars and Mercury.
Kepler’s primary mission was to determine the abundance of Earth-sized planets orbiting at Earth-like distances around Sun-like stars. This is just NEarth for stars like our sun. But NEarth might be different for different types of stars. Unfortunately, by 2013, Kepler lost two reaction wheels—essential for pointing the telescope—and had to abandon its primary mission after acquiring about four years’ worth of data. Kepler has good statistics on planets orbiting suns on Mercury-like orbits but not Earth-like. Bummer. (A few years ago Kepler was reincarnated in a new mode called K2, still finding planets but without hope of measuring NEarth.)
The habitable zone is the ring around a star where the conditions are right for liquid water on a planet’s surface to exist. Those are the planets counted in NEarth. But different stars have different habitable zones: Those of red dwarf stars, cooler and fainter than our sun, are much closer; and those of brighter, hotter stars are farther out. Kepler succeeded in estimating NEarth for red dwarf stars since, for these stars, measuring Mercury-sized orbits works well enough: at least a sixth of red dwarf stars—and up to half—have an Earth-sized planet in the habitable zone. Not too shabby, red dwarfs!
A few months ago, astronomers announced the discovery of the spectacular TRAPPIST-1 7-planet system. Its central star is puny, just 8 percent the mass of our sun, 2,000 times fainter, and about the size of Jupiter. All seven planets are roughly Earth-sized, and they orbit extremely close to their star. The most exciting thing is, at least three (and perhaps up to four or five) live in the star’s habitable zone—if all stars had planetary systems like TRAPPIST-1, NEarth would be 3. Another twist is that, in the TRAPPIST-1 system, life—if there is any—may naturally spread between planets: The compact orbital setup is well-suited for panspermia. If an asteroid or comet hit any of the potentially life-bearing TRAPPIST-1 planets, some of the debris would scatter to the other six, raining down space-borne seeds.
It is remarkable to imagine NEarth being larger than one. It might make you wonder: Could there be super-TRAPPIST-1 systems out there with not three or four, but 10 or 20 planets in the habitable zone? What is the most planets a star can have there? We can answer this question precisely. Since we know how gravity works, and how orbits evolve, we have the tools we need to figure out the sardine-iest configuration of planets that can stably fit in a star’s habitable zone.
We need to choose what type of star we want (doesn’t matter much), and what size planets we’re interested in. Then we can break the problem down into two questions: First, how wide is the habitable zone? Second, how tightly can we pack planetary orbits?
The habitable zone is much more complicated than usually discussed. It depends on what a planet is made of, and the thickness and makeup of its atmosphere. According to models, Earth is near the inner edge of the sun’s habitable zone, which extends from 95 percent of Earth’s orbit out past Mars’ orbit (meaning that Earth, on Mars’ orbit, could retain its liquid water)! With a strongly heat-retaining atmosphere the outer edge of the habitable zone might be much farther away, and in some situations even free-floating planets in interstellar space could retain liquid water. However, in those cases life would be hidden beneath such a thick layer of gas (or ice) that we likely couldn’t detect it.
Planets’ orbits can be spaced in two different ways. The orbits of adjacent planets can be resonant, as is the case for the TRAPPIST-1 planets, a handful of other known systems, and Jupiter’s closest large moons. Or the planets can be out of resonance, as is the case for most of the known systems of super-Earths and our own solar system planets. Resonance simply means that the orbits of adjacent planets re-align periodically. Resonances are described by a ratio of integers. For example, a 2:1 resonance means that every time the outer planet completes one orbit the inner one has completed two.
Planets spaced by resonances render their masses irrelevant: the spacing is determined simply by which resonances the planets are in. Resonances like 2:1 and 3:2 imply more widely spaced orbits than resonances like 7:6 or 9:8. Of course, not all resonances are stable. With TRAPPIST-1-like orbital spacing (3:2 resonances), four orbits fit comfortably within the habitable zone.
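The spacing itself follows from Kepler’s third law; as a minimal worked example, for an outer-to-inner orbital period ratio of p:q, the ratio of the two semi-major axes is:

```latex
\frac{a_{\mathrm{outer}}}{a_{\mathrm{inner}}}
  = \left(\frac{T_{\mathrm{outer}}}{T_{\mathrm{inner}}}\right)^{2/3}
  = \left(\frac{p}{q}\right)^{2/3},
\qquad
\left(\tfrac{3}{2}\right)^{2/3} \approx 1.31,
\quad
\left(\tfrac{9}{8}\right)^{2/3} \approx 1.08
```

This is why tighter resonances like 9:8 pack orbits more closely than 3:2, independent of the planets’ masses.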
On the other hand, if planets are not spaced by resonances, then their masses do matter. As an example of maximal orbit-packing into the habitable zone for different-mass planets: for Mars-mass planets (10 percent of Earth’s mass), 14 orbits fit within the habitable zone, but for Neptune-mass planets (around 10 times Earth’s mass) only three orbits fit.
Fourteen Mars-mass planets can fit in the habitable zone, but Mars (at least today) is a lifeless rock. To hold on to an atmosphere and to maintain plate tectonics for billions of years, a planet must be a little larger, arguably at least about 30 percent Earth’s mass. So planets about half of Earth’s mass are a good compromise between orbital spacing and life potential.
Here come two more twists. First, two planets can share the same orbit around the star! These are called Trojan pairs (and are not to be confused with the condoms). This almost doubles the number of planets that can fit on a given orbit.
The second twist is binary planets. Our moon is almost half the size of Earth, and Charon is almost as big as Pluto. It is entirely plausible to imagine two Earths orbiting each other.
There are six stable orbits within the habitable zone. Each contains four planets: two binary Earths in Trojan configurations. This setup is stable and packs 24 planets within the habitable zone. Imagine the panspermia in this system! If life developed on any of the planets, the inevitable impact debris would certainly spread life across the whole system. This would be a pretty extreme system to form in nature, but all the pieces are completely plausible—and do happen. The trick is that they all need to happen in the same system.
What about Ultimate Solar System 2 or 3? It turns out that there are a bunch of variations on this theme. You can use planet formation theory to build planetary systems of all shapes and sizes. And this rabbit hole is deep.
I’ll jump ahead to the grand finale. Using a couple of orbital dynamics tricks, I built a planetary system with 416 planets in the habitable zone.
This system is completely stable—I double-checked with computer simulations. But nature would have a tough time forming this system. If it exists, it could only have been built by a super-advanced civilization. That’s why I call it the Ultimate Engineered Solar System.
Imagine the stories you could tell in these Ultimate Solar Systems! Each binary planet has a close neighbor hovering larger than the moon in the sky. The night sky has an amazing wealth of wandering stars, the other planets tracing paths as they orbit the star.
Back to NEarth. We astronomers are happy to have measured that up to half of the stars in the Milky Way appear to host Earth-sized planets in the habitable zone (NEarth is up to 50 percent for red dwarfs, the dominant stars in the galaxy by number). TRAPPIST-1 is a great example that goes even further and packs three planets in the habitable zone. But I’m hoping for some super-habitable systems with 10, 20, or hundreds of potentially life-bearing planets. They are sure to be low probability systems, but with five hundred billion stars in our galaxy (and sci-fi fans crossing their fingers) it’s definitely worth looking!
An interdisciplinary group of researchers has shown for the first time that it is feasible to determine the rate of change of gene transcript levels at a global level in animal embryos. They did this by directly measuring absolute numbers of mRNA molecules per embryo at closely spaced time points during development.
Transcription is the first step in gene expression, where the genes on our DNA are copied into molecules called messenger RNAs (mRNAs). mRNA molecules contain the instructions for making proteins – the number of mRNAs from a given gene is a measure of the level of expression of that gene.
The work is significant because it lays the groundwork for the development of quantitative models of animal development, enabling the use of mathematical tools more commonly associated with the physical sciences to be applied in biological studies.
The study, by a team from the Francis Crick Institute, the University of California, Irvine and the Yale University School of Medicine, was carried out in Xenopus frogs, a commonly used model animal in biology.
Mike Gilchrist of the Crick (currently based at Mill Hill) explained: “Development is a complex process, generating a functional and correctly scaled organism from a single cell – the fertilised egg. A quantitative model of development would have the potential to predict, for example, the consequences of having copies of gene variants which may be associated with disease, and would help us identify new genes that are critical for development.”
“Methods currently used for measuring gene expression generally rely on something called ‘relative normalisation’, which means that gene expression levels in a sample can only be estimated relative to other genes in the same sample. This can be misleading when comparing different samples, and in particular when making measurements over time. Real rates of change can only be determined from actual transcript numbers, and this gives us the kinetics of gene expression which we are interested in.”
Nick Owens, a post doc in Mike Gilchrist’s lab, who developed the computational analysis, said: “This study improves our ability to understand the way gene expression changes with time, and from this we gain insight into the logic of how gene expression is regulated. We find that gene expression during development is both remarkably dynamic and tightly controlled.”
“One outcome of our approach is that we find a characteristic timescale of changes in gene expression for each gene. Knowing whether a gene’s expression changes over minutes, hours or days has important implications for our understanding of the function of the gene. Short timescale genes shape development and long timescale genes manage the cellular machinery.”
Dr Gilchrist said: “This study suggests that we may improve significantly on the widely used analysis methods for determining gene expression levels from high throughput sequence data: absolute quantitation offers a much sounder basis for determining changes in gene expression level, a measure widely used to determine the consequence of genetic, chemical or physical disturbances in living systems.”
“A better understanding of development will have important implications for human health: failures in developmental processes that lead to congenital defects are currently the most common cause of infant mortality in the US and Europe. To understand the genetic causes of disease, we need to know which genes are involved in development, as well as when and where they act and how this changes with time. This work helps us to do this by providing the ‘when’ and by giving good estimates of actual transcript numbers and consequent transcription rates, measured over the whole embryo.”
The paper, Measuring Absolute RNA Copy Numbers at High Temporal Resolution Reveals Transcriptome Kinetics in Development, is published in Cell Reports.
WHAT IS PrEP?
PrEP stands for “Pre-Exposure Prophylaxis”, meaning it is a tool to stop infection (“prophylaxis”) that is taken before you are exposed to HIV. Current clinical guidelines for PrEP recommend daily use of the medication Truvada for maximum HIV prevention benefits.
PrEP is endorsed by the World Health Organization for people who are HIV-negative and at risk of becoming positive. It has been shown to be as high as 99% effective at reducing risk for HIV when taken every day (without missing a dose).
PrEP works by disrupting HIV and preventing it from establishing a foothold in the body and causing a systemic infection.
PrEP does not protect against other sexually transmitted infections, and it is not a cure for HIV.
WHAT IS UNDETECTABLE?
Viral load is the amount of HIV present in a milliliter (mL) of blood. Viral load testing is a regular part of health check-ups for people who are HIV-positive.
Undetectable refers to a viral load that is below the level that can be detected by current HIV viral load tests. This does not mean the virus is gone. In Ontario, it means that there are fewer than 40 copies of the HIV virus per mL of blood.
Music in Western Civ Courses
Mark B. Tauger, November 2006
Harvey G. Cohen's stimulating article, "Music in the History Classroom" (Perspectives, December 2005), focused on song lyrics and the social and political importance of the composers and performers in U.S. history. The music itself, however, deserves attention, and can effectively be used in the classroom, especially in teaching Western Civilization courses. Though music is a highly abstract artistic medium, it still reflects its time and context. The development of Western classical music illustrates changes in society and intellectual life in unusual and illuminating ways that can usefully be explored in the history classroom.
Many historians (and students) do not have the background to understand and explain different musical styles. By reading one of the many introductory music texts, and then listening to a piece while following the musical score, ideally with a colleague from the music department, any historian can learn enough to communicate the crucial points. It is best to begin with Western music because it has all of the components necessary to understand other musical systems. The most important components are tonality, harmony and counterpoint, and form.
Tonality refers to the characteristic of music being in a particular "key," based on the 12-note scale that one finds on the piano. Western music expresses tonality through harmony and counterpoint.
Harmony is the system of consonances and dissonances in Western music, which derive from the acoustic structure of the overtone series. Counterpoint, or polyphony, is the use of multiple voices in Western music, ranging from independent melodic lines in orchestral music to melody and chords in popular songs.
Form is the relationship between the sections of a composition, and represents the peak of compositional achievement in Western music. The most important form in Western art music is the sonata, which was developed during the 18th century and is the basis for virtually all of the symphonies, chamber music, and other genres in the literature.
Western music did not begin with these components: they evolved over a thousand years in a process that corresponded to major shifts in Western intellectual history. In a survey course I use music to illustrate three of these shifts: the creation of tonality in the Renaissance and of the sonata in the Enlightenment, and the turn from tonality around 1900.
The Renaissance, as an intellectual movement, involved a return to the past. In music, however, it brought something very new. To show this, I play and explain a series of excerpts. I begin with the original Western music, which was monophonic, or consisting only of a single melody: Gregorian chant and secular music such as a troubadour song.
Then we hear the beginning of the new approach: Organum. In this music, composers—and we have some of the first identified composers in history with this development—used Gregorian chant as the basis and wrote new melodies to be sung above and below the chant at the same time, thereby creating the first counterpoint and harmony. This music (several recordings of which are available) is quite unusual and interesting, especially because this is where Western polyphony began, and virtually all Western music—classical, jazz, or rock—is polyphonic, with at least a melody and accompaniment.
Then the class hears a few examples of medieval music. Later composers began writing independent pieces—called "motets"—as settings of sacred and secular texts. Those of the 14th-century composer Guillaume de Machaut are good examples. One should listen to this music not only for its lyrical melodies but especially for its unusual harmonies. These composers did not have our sense of dissonance, but composers in Northern Europe in the late 14th–early 15th centuries tried to develop a different, smoother style, which culminated in the works of the first genuinely tonal composers, such as Josquin Despres. Just playing passages of 30 seconds from each of these composers can get this across. The smooth, consonant quality of Renaissance music derived from rules that these composers devised in writing counterpoint that student composers still study today.
The parallels between music and broader trends in Western history are often extremely suggestive. The breakthrough to consonant tonality was simultaneous with the development of one-point perspective in Renaissance art, and both tonality and perspective became "common practice" in their respective arts and remained so into the early 20th century. In a larger sense, the development of tonality corresponds to the humanists' rejection of medieval thought patterns.
The second turning point was the development of the sonata form in the 18th century. Here one can begin with a typical baroque composition such as an aria from Handel's Messiah. These pieces follow highly flexible forms, but always have a theme that recurs, and the composers use that recurrence for dramatic or emotional effect. The breakthrough to the sonata form was made by the early classical composers, especially Haydn and Mozart.
I explain to the class that the sonata was not a rigid form but rather a principle or template, a way in which composers would present and "develop" a series of musical themes in a manner that involved a movement away from and back to the original key. A typical sonata has three parts: (1) an exposition that presents themes (melodies and accompaniments); (2) a development in which the composer elaborates, combines, and alters these themes; and (3) a recapitulation that returns to the themes as in the exposition. The crucial aspect of this structure is the harmony. The exposition ordinarily introduces two main themes: the first is in the key of the sonata, but the second is in a different but related key; when these themes are brought back in the recapitulation, the second theme usually returns in the key of the sonata. Composers used this return to great effect: some of the most dramatic points in a Mozart opera ensemble (which are often in sonata form) are articulated by the music's return to the original key. Composers varied their articulation of every component of this form, making it extremely flexible and widely used: pieces ranging from a brief slow movement to an hour-long Strauss tone-poem are all sonatas.
To illustrate this principle it is sufficient to play one brief sonata piece all the way through; any short sonata by Haydn or Mozart will work, but the instructor must understand the piece and point out the sections while students listen.
Finally I contrast the classical style with the romantic, by playing something very emotional; a historian would of course prefer to use Chopin's "Revolutionary Étude" (op. 10 no. 12). The contrast between the simple, clear, highly structured, "rational" Haydn or Mozart, and the lush, dramatic, and formally vague Chopin piece (although it is also a sonata!) expresses vividly the contrast between the rationalism of the Enlightenment and the romantic social movements of the early 19th century. Chopin's étude was an expression of the romantic because it commemorated the 1831 Polish revolution.
The third shift, which instructors and students will find especially interesting, is the (apparently temporary) dissolution of tonality in the early 20th century. I begin by contrasting the work of a more conservative composer, Brahms's Handel variations, which exemplifies his adherence to classical forms, with Wagner's prelude to the opera Tristan und Isolde, which exemplifies the work of composers who experimented with tonality and form. The prelude's opening chord, the so-called Tristan chord, influenced many different composers for decades, despite Wagner's own German nationalist orientation.
Once students have a sense of late-romantic styles, one can illustrate some of the directions that music took. One is "Impressionist" music; for this the instructor could play Debussy's piano prelude "Voiles" to demonstrate the musical vocabulary of Impressionism. The resonances between this music and the paintings of such artists as Monet resemble those between Renaissance music and art in the 15th century.
Atonal music is another new development, and to illustrate this shift, I use excerpts from Schoenberg's Five Pieces for Orchestra or his string trio. Students can hear very clearly how different this music is from everything that came before. Atonal music was the standard for composers in American music schools for decades in the 20th century. This extreme musical modernism parallels the non-representational art of such expressionist painters as Kandinsky, who was influenced by Schoenberg.
Yet another direction was neoclassicism, a general term for composers who reacted against romanticism by reviving musical genres and styles from the 18th century or earlier, but with modernist harmonies and melodies. A famous example among this very diverse group would be Stravinsky, who began as an extreme modernist in his great ballets (Firebird, Petrushka, Rite of Spring), but then turned to baroque and classical forms. I usually play the beginning of The Rite of Spring to end these musical examples.
The parallels between music and other cultural and historical trends are less discernible in the 20th century than in the earlier cases. Still, I believe it is at least possible to hear in this music a reflection of the disintegration of Europe's secure bourgeois world under the impact of the Great War, revolution, fascism, and depression. If watching Ginger Rogers and Fred Astaire dance to Gershwin in 1935 distracted the audience from the dreary world outside, listening to Schoenberg or Stravinsky at the same time in a sense reflected that world.
This is not to minimize the value of popular music as a reflection of historical developments. Harvey Cohen's article suggested examples of this in the American context, and much can be done with European popular music as well. Similarly, non-Western music can also be used, not only to compare and contrast it with Western music, but also to explore connections between music and social history. That topic, however, is the theme of another opus.
—Mark Tauger is associate professor of history at West Virginia University. He would like to thank John Crotty, Mary Ferer, Gordon Nunn, and Chris Wilkinson of the College of Creative Arts at West Virginia University for helpful consultations.
This lab assignment consisted of collecting rolly pollys, or pillbugs, from each student's yard. We then experimented on the creatures by testing different hypotheses about their habitats and environments.
My group was with Ariel and Dara. We wanted to test the moisture of the rolly pollys' habitat and what drew them to the environments where we discovered them. Our hypothesis was that, since the bugs were found underneath rocks and logs, they would prefer a moist habitat rather than a dry one.
To test this hypothesis we used habitat trays with two sections to compare with each other. We used two pieces of paper, one dry and one wet with tap water. We then placed 14 rolly pollys in the tray and waited approximately 10 minutes before beginning the test, in order to let the bugs adapt somewhat to the new environment. This would allow them to try out both habitats and then make a decision. Our group then started recording data every minute, writing down how many bugs were in each habitat.
The first minute of data we recorded showed that all of the bugs preferred the dry habitat. However, the next couple of minutes showed a split decision, with about 8 or 9 bugs on the dry side and 5 or 6 on the wet. This data confused our group, but as we watched longer, the bugs preferred the dry side for the rest of the time.
During this experiment, we noticed that the rolly pollys grouped together, almost as if they were huddling to stay warm and sleeping. Because of this observation, our group believes that it is not the moisture of the environment that the bugs prefer, but rather the temperature or darkness. However, we ran out of time, so we could not test either of these new hypotheses.
PSHE education is a planned programme of learning through which children and young people acquire the knowledge, understanding and skills they need to manage their lives.
Personal, Social, Health and Economic (PSHE) education is a school subject through which pupils develop the knowledge, skills and attributes they need to manage their lives, now and in the future.
These skills and attributes help pupils to stay healthy, safe and prepare them for life and work in modern Britain. When taught well, PSHE education helps pupils to achieve their academic potential, and leave school equipped with skills they will need throughout later life.
PSHE is a key way that schools can ensure that pupils are receiving a wide and varied curriculum that is relevant to the lives they live today and prepares them for the future.
The National Curriculum (September, 2014) states that all schools:
- must provide a curriculum that is broadly based and balanced and which meets the needs of all pupils
- promote the spiritual, moral, social, cultural, mental and physical development of pupils at the school and of society, and prepare pupils at the school for the opportunities, responsibilities and experiences of later life
- should make provision for personal, social, health and economic education (PSHE) drawing on good practice.
The school follows the ‘You, Me, PSHE’ scheme of work. It provides a clear and progressive PSHE curriculum which can be used as given.
We have divided PSHE into 7 different strands:
- Sex and relationship education (SRE)
- Drug, alcohol and tobacco education (DATE)
- Keeping safe and managing risk
- Mental health and emotional wellbeing
- Physical health and wellbeing
- Careers, financial capability & economic wellbeing
- Identity, society and equality
Within each strand, there are age appropriate topics for the different year groups. One topic per half term is taught to each year group (SRE is taught over a whole term in Y2, 4 and 6). Each topic consists of three lessons per half term and teachers are expected to teach a minimum of three lessons in each half term. Each PSHE lesson includes an overall learning intention and specific learning outcomes (based on knowledge and understanding, skills and the development of attitudes).
Pupil progress is assessed at the end of each unit of work, using pupil self-assessment sheets which have been developed to match each topic. These are based on a simple draw-or-write method and aim to assess the knowledge and skills acquired throughout the topic.
Please click the document below to view our ‘You, Me, PSHE’ Scheme of Work Curriculum Overview and Year Group Overview
What is PSHE education?
Personal, social, health and economic (PSHE) education and citizenship education are both planned parts of the school curriculum that will also be reflected through the whole school experience. They equip pupils with knowledge, understanding and skills and help them to explore and develop attitudes and values.
PSHE education supports the development of personal, social and life skills: the identification of and dealing with emotions and feelings, exploring health-related issues, understanding about oneself, relationships with others and one’s place in the world, learning about managing finances, the world of work and planning for the future. It supports pupils to lead healthy, safe, fulfilled and responsible lives.
Citizenship education prepares pupils for the social and moral responsibilities of community involvement: the understanding of democracy and justice, rights and responsibilities and exploring identities and diversity. It helps them develop political literacy and to become informed, critical, active citizens who have the skills, confidence and conviction to advocate, take action and try to make a difference in their local, national and global communities.
Message from our PSHE Leader:
As a member of our school community, you will know how much we rely on you for your regular contributions to our cake sales. Funds are shared equally between a class’ chosen charity, resources for your child’s classroom as well as contributing to the variety of educational visits your child enjoys each and every term.
Here at St Mary’s we recognise that a child’s health and wellbeing is the result of a team effort between parents, teachers, and the community.
“A Healthy School is a school which actively seeks to promote and improve the health and wellbeing of the whole school community through all aspects of school life, so that pupils are enabled to maximise and enrich their aspirations, levels of attainment and personal development.”
Please help us to help children to make healthy choices, by ensuring that we have a selection of healthy treats on offer at your child’s class bake sale. Yummy cakes, cookies, fruit salads and breads with a healthy twist are most welcome!
Each class will be in charge of the bake sale for 4 weeks. Parents are very welcome to come and sign up to help run the bake sale, set up and tidy away. The success of a bake sale depends of course on you, our valuable parents, and your contributions. Each class will in turn get an opportunity to see who can raise the most funds. See your classteacher if you would like to help.
We appreciate your support in making St. Mary’s a healthy and successful school.
For further information please contact our Curriculum Leader Ashan Venn on 020 7359 1870 or email at [email protected] |
Literature Circles – a structured method of delivering Reading for Enjoyment, whilst teaching learners how to think, be accountable for their learning, in addition to embracing the embedded CfE principle of ‘Enjoyment and Choice’. Click on the blue links throughout to find out more.
What are Literature Circles?
Literature circles are formed within your classroom, allowing pupils to choose the book they read from a selection of 3-4 books. Groups are then established based on their choice of book. Each week (it is suggested you have one session a week) pupils decide as a group how much they will read before the next session. Because the session has a structure, with pupils given roles and responsibilities, learners are able to have an in-depth discussion for which they are all accountable.
Strathclyde University, on behalf of the Scottish Government, conducted research (2005) on the effectiveness of literature circles within the classroom. They found that:
1. Literature circles encouraged learners to take responsibility for their own learning
2. Learners, using the roles given, were able to analyse the text
3. Boys were increasingly engaged in reading for enjoyment
4. Pupils wanted to create their own literature circles
Literature Circles, Gender and Reading for Enjoyment
Within my classroom, in conjunction with the use of Bloom’s taxonomy, I have found literature circles an active way of promoting leadership of learners within the classroom, putting an emphasis on pupil talk over teacher talk.
Have you used literature circles in your classroom? If so, we’d love to hear your advice on how to perfect literature circles.
The planets Mars and Earth have a few things in common. Both planets have roughly the same amount of land surface area, sustained polar caps, and both have a similar tilt in their rotational axes, affording each of them strong seasonal variability. Additionally, both planets present strong evidence of having undergone climate change in the past. In Mars’ case, this evidence points towards it once having a viable atmosphere and liquid water on its surface.
At the same time, our two planets are really quite different, and in a number of very important ways. One of these is the fact that gravity on Mars is just a fraction of what it is here on Earth. Understanding the effect this will likely have on human beings is of extreme importance when it comes time to send crewed missions to Mars, not to mention potential colonists.
Mars Compared to Earth:
The differences between Mars and Earth are all crucial for the existence of life as we know it. For instance, atmospheric pressure on Mars is a tiny fraction of what it is here on Earth – averaging 7.5 millibars on Mars to just over 1000 here on Earth. The average surface temperature is also lower on Mars, ranking in at a frigid -63 °C compared to Earth’s balmy 14 °C.
And while the length of a Martian day is roughly the same as it is here on Earth (24 hours 37 minutes), the length of a Martian year is significantly longer (687 days). On top of that, the gravity on Mars’ surface is much lower than it is here on Earth – 62% lower to be precise. At just 0.376 of the Earth standard (or 0.376 g), a person who weighs 100 kg on Earth would weigh only 38 kg on Mars.
This difference in surface gravity is due to a number of factors – mass, density, and radius being the foremost. Even though Mars has almost the same land surface area as Earth, it has only half the diameter and less density than Earth – possessing roughly 15% of Earth’s volume and 11% of its mass.
Calculating Martian Gravity:
Scientists have calculated Mars’ gravity based on Newton’s Theory of Universal Gravitation, which states that the gravitational force exerted by an object is proportional to its mass. When applied to a spherical body like a planet with a given mass, the surface gravity will be approximately inversely proportional to the square of its radius. When applied to a spherical body with a given average density, it will be approximately proportional to its radius.
These proportionalities can be expressed by the formula g = m/r², where g is the surface gravity of Mars (expressed as a multiple of the Earth’s, which is 9.8 m/s²), m is its mass – expressed as a multiple of the Earth’s mass (5.976 × 10²⁴ kg) – and r its radius, expressed as a multiple of the Earth’s (mean) radius (6,371 km).
For instance, Mars has a mass of 6.4171 × 10²³ kg, which is 0.107 times the mass of Earth. It also has a mean radius of 3,389.5 km, which works out to 0.532 Earth radii. The surface gravity of Mars can therefore be expressed mathematically as 0.107/0.532², from which we get the value of 0.376. Based on the Earth’s own surface gravity, this works out to an acceleration of 3.711 meters per second squared.
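As a compact restatement of that arithmetic (the small difference from the 0.376 figure comes from rounding the mass and radius ratios to three decimal places):

```latex
g_{\mathrm{Mars}} = \frac{m}{r^{2}} = \frac{0.107}{0.532^{2}}
  \approx 0.378\ g_{\oplus}
  \approx 0.378 \times 9.8\ \mathrm{m/s^{2}}
  \approx 3.7\ \mathrm{m/s^{2}}
```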
At present, it is unknown what effects long-term exposure to this amount of gravity will have on the human body. However, ongoing research into the effects of microgravity on astronauts has shown that it has a detrimental effect on health – which includes loss of muscle mass, bone density, organ function, and even eyesight.
Understanding Mars’ gravity and its effect on terrestrial beings is an important first step if we want to send astronauts, explorers, and even settlers there someday. Basically, the effects of long-term exposure to gravity that is just over one-third the Earth normal will be a key aspect of any plans for upcoming manned missions or colonization efforts.
For example, crowd-sourced projects like Mars One make allowances for the likelihood of muscle deterioration and osteoporosis for their participants. Citing a recent study of International Space Station (ISS) astronauts, they acknowledge that mission durations ranging from 4-6 months show a maximum loss of 30% muscle performance and maximum loss of 15% muscle mass.
Their proposed mission calls for many months in space to get to Mars, and for those volunteering to spend the rest of their lives living on the Martian surface. Naturally, they also claim that their astronauts will be “well prepared with a scientifically valid countermeasures program that will keep them healthy, not only for the mission to Mars, but also as they become adjusted to life under gravity on the Mars surface.” What these measures are remains to be seen.
Learning more about Martian gravity and how terrestrial organisms fare under it could be a boon for space exploration and missions to other planets as well. And as more information is produced by the many robotic lander and orbiter missions on Mars, as well as planned manned missions, we can expect to get a clearer picture of what Martian gravity is like up close.
As we get closer to NASA’s proposed manned mission to Mars, which is currently scheduled to take place in 2030, we can certainly expect that more research efforts will be attempted.
NASA’s Mars Exploration Program has accomplished some truly spectacular things in the past few decades. Officially launched in 1992, this program has been focused on three major goals: characterizing the climate and geology of Mars, looking for signs of past life, and preparing the way for human crews to explore the planet.
And in the coming years, the Mars 2020 rover will be deployed to the Red Planet and become the latest in a long line of robotic rovers sent to the surface. In a recent press release, NASA announced that it has awarded the launch services contract for the mission to United Launch Alliance (ULA) – the makers of the Atlas V rocket.
The mission is scheduled to launch in July of 2020 aboard an Atlas V 541 rocket from Cape Canaveral in Florida, at a point when Earth and Mars are at opposition. At this time, the planets will be on the same side of the Sun and making their closest approach to each other in four years, being just 62.1 million km (38.6 million miles) apart.
Following in the footsteps of the Curiosity, Opportunity and Spirit rovers, the goal of the Mars 2020 mission is to determine the habitability of the Martian environment and search for signs of ancient Martian life. This will include taking samples of soil and rock to learn more about Mars’ “watery past”.
But whereas these and other members of the Mars Exploration Program were searching for evidence that Mars once had liquid water on its surface and a denser atmosphere (i.e. signs that life could have existed), the Mars 2020 mission will attempt to find actual evidence of ancient microbial life.
The design of the rover also incorporates several successful features of Curiosity. For instance, the entire landing system (which incorporates a sky crane and heat shield) and the rover’s chassis have been recreated using leftover parts that were originally intended for Curiosity.
There’s also the rover’s radioisotope thermoelectric generator – i.e. its nuclear power source – which was also originally intended as a backup part for Curiosity. But it will also have several upgraded instruments on board that allow for a new guidance and control technique. Known as “Terrain Relative Navigation”, this new landing method allows for greater maneuverability during descent.
Another new feature is the rover’s drill system, which will collect core samples and store them in sealed tubes. These tubes will then be left in a “cache” on the surface, where they will be retrieved by future missions and brought back to Earth – which will constitute the first sample-return mission from the Red Planet.
In this respect, Mars 2020 will help pave the way for a crewed mission to the Red Planet, which NASA hopes to mount sometime in the 2030s. The probe will also conduct numerous studies designed to improve landing techniques and assess the planet’s natural resources and hazards, as well as coming up with methods to allow astronauts to live off the environment.
In terms of hazards, the probe will be looking at Martian weather patterns, dust storms, and other potential environmental conditions that will affect human astronauts living and working on the surface. It will also test out a method for producing oxygen from the Martian atmosphere and identifying sources of subsurface water (as a source of drinking water, oxygen, and hydrogen fuel).
As NASA stated in their press release, the Mars 2020 mission will “offer opportunities to deploy new capabilities developed through investments by NASA’s Space Technology Program and Human Exploration and Operations Mission Directorate, as well as contributions from international partners.”
They also emphasized the opportunities to learn how future human explorers could rely on in-situ resource utilization as a way of reducing the amount of material that needs to be shipped – which will not only cut down on launch costs but ensure that future missions to the planet are more self-reliant.
The total cost for NASA to launch Mars 2020 is approximately $243 million. This assessment includes the cost of launch services, processing costs for the spacecraft and its power source, launch vehicle integration and tracking, data and telemetry support.
The use of spare parts has also meant reduced expenditure on the overall mission. In total, the Mars 2020 rover and its launch will cost an estimated $2.1 billion USD, which represents a significant savings over previous missions like the Mars Science Laboratory – which cost a total of $2.5 billion USD.
The International Space Station has provided astronauts and space agencies with immense opportunities for research during the decade and a half that it has been in operation. In addition to studies involving meteorology, space weather, materials science, and medicine, missions aboard the ISS has also provided us with valuable insight into human biology.
For example, studies conducted aboard the ISS have provided us with information about the effects of long-term exposure to microgravity. And all the time, astronauts are pushing the limits of how long someone can healthily remain living under such conditions. One such astronaut is Jeff Williams, the Expedition 48 commander, who recently established a new record for the most time spent in space.
This record-breaking feat began back in 2000, when Williams spent 10 days aboard the Space Shuttle Atlantis for mission STS-101. At the time, the International Space Station was still under construction, and as the mission’s flight engineer and spacewalker, Williams helped prepare the station for its first crew.
This was followed up in 2006, when Williams served as part of Expedition 13 to the ISS. The station had grown significantly at this point with the addition of the Russian Zvezda service module, the U.S. Destiny laboratory, and the Quest airlock. Numerous science experiments were also being conducted at this time, which included studies into capillary flow and the effects of microgravity on astronauts’ central nervous systems.
During the six months he was aboard the station, Williams was able to get in two more spacewalks, set up additional experiments on the station’s exterior, and replace equipment. Three years later, he would return to the station as part of Expedition 21, then served as the commander of Expedition 22, staying aboard the station for almost ten months (May 27th, 2009 to March 18th, 2010).
By the time Expedition 48’s Soyuz capsule launched to rendezvous with the ISS on July 7th, 2016, Williams had already spent more than 362 days in space. By the time he returns to Earth on Sept. 6th, he will have spent a cumulative total of 534 days in space. He will have also surpassed the previous record set by Scott Kelly, who spent 520 days in space over the course of four missions.
On Wednesday, August 24th, the International Space Station raised its orbit ahead of Williams’ departure. Once he and two of his mission colleagues – Oleg Skripochka and Alexey Ovchinin – undock in their Soyuz TMA-20M spacecraft, they will begin their descent towards Kazakhstan, arriving on Earth roughly three and a half hours later.
Former astronaut Scott Kelly was a good sport about the passing of this record, congratulating Williams in a video created by the Johnson Space Center (see below). Luckily, Kelly still holds the record for the longest single spaceflight by a NASA astronaut – which lasted a stunning 340 days.
And Williams may not hold the record for long, as astronaut Peggy Whitson is scheduled to surpass him in 2017 during her next mission (which launches this coming November). And as we push farther out into space in the coming years, mounting missions to NEOs and Mars, this record is likely to be broken again and again.
In the meantime, Williams and his crew will continue to dedicate their time to a number of crucial experiments. In the course of this mission, they have conducted research into human heart function, plant growth in microgravity, and executed a variety of student-designed experiments.
Like all research conducted aboard the ISS, the results of this research will be used to improve health treatments, will have numerous industrial applications here on Earth, and will help NASA plan missions farther into space. Not the least of these will be NASA’s proposed (and rapidly approaching) crewed mission to Mars.
In addition to spending several months in zero-g for the sake of the voyage, NASA will need to know how their astronauts will fare when conducting research on the surface of Mars, where the gravity is roughly 37% that of Earth (0.376 g to be exact).
And be sure to enjoy this video of Scott Kelly congratulating Williams on his accomplishment, courtesy of the Johnson Space Center:
It’s an Epic Rocket Battle! Or a Clash of the Titans, if you will. Except that in this case, the titans are two of the heaviest rockets the world has ever seen. And the contenders couldn’t be better matched. On one side, we have the heaviest rocket to come out of the US during the Space Race, and the one that delivered the Apollo astronauts to the Moon. On the other, we have the heaviest rocket created by the NewSpace industry, which promises to deliver astronauts to Mars.
And in many respects, the Falcon Heavy is considered to be the successor of the Saturn V. Ever since the latter was retired in 1973, the United States has effectively been without a super-heavy lifter. And with the Space Launch System still in development, the Falcon Heavy is likely to become the workhorse of both private space corporations and space agencies in the coming years.
So let’s compare these two rockets, taking into account their capabilities, specifications, and the history of their development and see who comes out on top. BEGIN!
The development of the Saturn V began in 1946 with Operation Paperclip, a US government program which led to the recruitment of Wernher von Braun and several other World War II-era German rocket scientists and technicians. The purpose of this program was to leverage the expertise of these scientists to give the US an edge in the Cold War through the development of intercontinental ballistic missiles (ICBMs).
Between 1945 and the mid-to-late 50s von Braun acted as an advisor to US armed forces for the sake of developing military rockets only. It was not until 1957, with the Soviet launch of Sputnik-1 using an R-7 rocket – a Soviet ICBM also capable of delivering thermonuclear warheads – that the US government began to consider the use of rockets for space exploration.
Thereafter, von Braun and his team began developing the Jupiter series of rockets – a modified Redstone ballistic missile with two solid-propellant upper stages. These proved to be a major step towards the Saturn V, hence why the Jupiter series was later nicknamed “an infant Saturn”. Between 1960 and 1962, the Marshall Space Flight Center began designing the rockets that would eventually be used by the Apollo Program.
After several iterations, the Saturn C-5 design (later named the Saturn V) was created. By 1964, it was selected for NASA’s Apollo Program as the rocket that would conduct a Lunar Orbit Rendezvous (LOR). This plan called for a large rocket to launch a single spacecraft to the Moon, but only a small part of that spacecraft (the Lunar Module) would actually land on the surface. That smaller module would then rendezvous with the main spacecraft – the Command/Service Module (CSM) – in lunar orbit, and the crew would return home.
Development of the Falcon Heavy was first announced in 2011 at the National Press Club in Washington D.C. In a statement, Musk drew direct comparisons to the Saturn V, claiming that the Falcon Heavy would deliver “more payload to orbit or escape velocity than any vehicle in history, apart from the Saturn V moon rocket, which was decommissioned after the Apollo program.”
Consistent with this promise of a “super heavy-lift” vehicle, SpaceX’s original specifications indicated a projected payload of 53,000 kg (117,000 lbs) to Low-Earth Orbit (LEO), and 12,000 kg (26,000 lbs) to Geosynchronous Transfer Orbit (GTO). In 2013, these estimates were revised to 54,400 kg (119,900 lb) to LEO and 22,200 kg (48,900 lb) to GTO, as well as 16,000 kilograms (35,000 lb) to a translunar trajectory, 13,600 kilograms (31,000 lb) on a trans-Martian orbit to Mars, and 2,900 kg (6,400 lb) to Pluto.
In 2015, the design was changed – alongside changes to the Falcon 9 v.1.1 – to take advantage of the new Merlin 1D engine and changes to the propellant tanks. The original timetable, proposed in 2011, put the rocket’s arrival at SpaceX’s west-coast launch location – Vandenberg Air Force Base in California – at before the end of 2012.
The first launch from Vandenberg was to take place in 2013, while the first launch from Cape Canaveral was to take place in late 2013 or 2014. But by mid-2015, delays resulting from failures of Falcon 9 test flights pushed the first launch to late 2016. The launch has also been relocated to the Kennedy Space Center Launch Complex in Florida.
SpaceX also announced in July of 2016 that it planned to expand its landing facility near Cape Canaveral to take advantage of its reusable technology. With three landing pads now planned (instead of one on land and a drone barge at sea), the company hopes to be able to recover all of the spent boosters used in the launch of a Falcon Heavy.
Both the Saturn V and Falcon Heavy were created to do some serious heavy lifting. Little wonder, since both were created for the sole purpose of “slipping the surly bonds” of Earth and putting human beings and cargo onto other celestial bodies. For its part, the Saturn V‘s size and payload surpassed all other previous rockets, reflecting its purpose of sending astronauts to the Moon.
With the Apollo spacecraft on top, it stood 111 meters (363 feet) tall and was 10 meters (33 feet) in diameter, without fins. Fully fueled, the Saturn V weighed 2,950 metric tons (6.5 million pounds), and had a payload capacity estimated at 118,000 kg (261,000 lbs) to LEO, but was designed for the purpose of sending 41,000 kg (90,000 lbs) to Trans-Lunar Injection (TLI).
Later upgrades on the final three missions boosted that capacity to 140,000 kg (310,000 lbs) to LEO and 48,600 kg (107,100 lbs) to the Moon. The Saturn V was principally designed by NASA’s Marshall Space Flight Center in Huntsville, Alabama, while numerous subsystems were developed by subcontractors. This included the engines, which were designed by Rocketdyne, a Los Angeles-based rocket company.
The Saturn V consisted of three stages – the S-IC first stage, the S-II second stage and the S-IVB third stage – plus the instrument unit. The first stage used Rocket Propellant-1 (RP-1), a form of kerosene similar to jet fuel, while the second and third stages relied on liquid hydrogen for fuel. The second and third stages also used solid-propellant rockets to separate during launch.
The first stage (aka. the S-IC) measured 42 m (138 feet) tall and 10 m (33 feet) in diameter, and had a dry weight of 131 metric tons (289,000 lbs) and a total weight of over 2,300 metric tons (5.1 million lbs) when fully fueled. It was powered by five Rocketdyne F-1 engines arrayed in a quincunx (four units arranged in a square, and the fifth in the center), which together provided 34,000 kN (7.6 million pounds-force) of thrust.
The Falcon Heavy is based around a core that is a single Falcon 9 with two additional Falcon 9 first stages acting as boosters. While similar in concept to the Delta IV Heavy launcher and proposals for the Atlas V HLV and Russian Angara A5V, the Falcon Heavy was specifically designed to exceed all current designs in terms of operational flexibility and payload. As with other SpaceX rockets, it was also designed to incorporate reusability.
The rocket consists of two stages – with the possibility of more to come – and measures 70 m (229.6 ft) in height and 12.2 m (39.9 ft) in width. The first stage is powered by three Falcon 9 cores, each of which is equipped with nine Merlin 1D engines. These are arranged in a circular fashion, with eight around the outside and one in the middle (what SpaceX refers to as the Octaweb), in order to streamline the manufacturing process. Each core also includes four extensible landing legs and grid fins to control descent and conduct landings.
The first stage of the Falcon Heavy relies on subcooled LOX (liquid oxygen) and chilled RP-1 fuel; the upper stage uses the same propellants, but at standard temperatures. The Falcon Heavy has a total sea-level thrust at liftoff of 22,819 kN (5,130,000 lbf), which rises to 24,681 kN (5,549,000 lbf) as the craft climbs out of the atmosphere. The upper stage is powered by a single Merlin 1D engine that has been modified for use in a vacuum and delivers a thrust of 934 kN (210,000 lbf).
Although not a part of the initial Falcon Heavy design, SpaceX has been extending its work with reusable rocket systems to ensure that the boosters and core stage can be recovered. Currently, no work has been announced on making the upper stages recoverable as well, but recent successes recovering the first stages of the Falcon 9 may indicate a possible change down the road.
The consequence of adding reusable technology will mean that the Falcon Heavy will have a reduced payload to GTO. However, it will also mean that it will be able to fly at a much lower cost per launch. With full reusability on all three booster cores, the GTO payload will be approximately 7,000 kg (15,000 lb). If only the two outside cores are reusable while the center is expendable, the GTO payload would be approximately 14,000 kg (31,000 lb).
The Saturn V rocket was by no means a small investment. In fact, one of the main reasons for the cancellation of the last three Apollo flights was the sheer cost of producing the rockets and financing the launches. Between 1964 and 1973, a grand total of $6.417 billion USD was appropriated for the sake of research, development, and flights.
Adjusted to 2016 dollars, that works out to $41.4 billion USD. In terms of individual launches, the Saturn V would cost between $185 and $189 million USD, of which $110 million was spent on production alone. Adjusted for inflation, this works out to approximately $1.23 billion per launch, of which $710 million went towards production.
By contrast, when Musk appeared before the US Senate Committee on Commerce, Science and Transportation in May 2004, he stated that his ultimate goal with the development of SpaceX was to bring the total cost per launch down to $1,100 per kg ($500/pound). As of April 2016, SpaceX has indicated that a Falcon Heavy could lift 8,000 kg (17,600 lbs) to GTO for a cost of $90 million a launch – which works out to $11,250 per kg (about $5,100 per pound).
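As a sanity check on those figures, here is a minimal sketch in Python of the cost-per-mass arithmetic, assuming the figures quoted above (~$90 million for ~8,000 kg to GTO); the function and constant names are ours, not SpaceX's, and real pricing varies by mission profile:

```python
# Back-of-the-envelope launch-cost calculator (illustrative only).
LB_PER_KG = 2.20462

def cost_per_mass(price_usd, payload_kg):
    """Return (USD per kg, USD per lb) for a given launch price and payload."""
    per_kg = price_usd / payload_kg
    per_lb = per_kg / LB_PER_KG
    return per_kg, per_lb

per_kg, per_lb = cost_per_mass(90_000_000, 8_000)  # ~$90M for ~8,000 kg to GTO
print(f"${per_kg:,.0f}/kg (${per_lb:,.0f}/lb)")    # -> $11,250/kg ($5,103/lb)
```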
No estimates are available yet on how a fully-reusable Falcon Heavy will further reduce the cost of individual launches. And again, it will vary depending on whether or not the boosters and the core, or just the external boosters are recoverable. Making the upper stage recoverable as well will lead to a further drop in costs, but will also likely impact performance.
So having covered their backgrounds, designs and overall cost, let’s move on to a side-by-side comparison of these two bad boys. Let’s see how they stack up, pound for pound, when all things are considered – including height, weight, lift payload, and thrust.
- Height: Saturn V – 110.6 m (363 ft); Falcon Heavy – 70 m (230 ft)
- Diameter: Saturn V – 10.1 m (33 ft); Falcon Heavy – 12.2 m (40 ft)
- First-stage engines: Saturn V – 5 Rocketdyne F-1; Falcon Heavy – 3 x 9 Merlin 1D
- Second-stage engines: Saturn V – 5 Rocketdyne J-2; Falcon Heavy – 1 Merlin 1D (vacuum)
- Third-stage engine: Saturn V – 1 Rocketdyne J-2; Falcon Heavy – none
- Liftoff thrust: Saturn V – 34,000 kN (sea level); Falcon Heavy – 22,819 kN (sea level), rising to 24,681 kN in vacuum
- Payload to LEO: Saturn V – 140,000 kg (310,000 lbs); Falcon Heavy – 54,400 kg (119,900 lb)
When put next to each other, you can see that the Saturn V has the advantage when it comes to muscle. It’s bigger, heavier, and can deliver a bigger payload to space. On the other hand, the Falcon Heavy is smaller, lighter, and a lot cheaper. Whereas the Saturn V can put a heavier payload into orbit, or send it on to another celestial body, the Falcon Heavy could perform several missions for every one mounted by its competitor.
But whereas the contributions of the venerable Saturn V cannot be denied, the Falcon Heavy has yet to demonstrate its true worth to space exploration. In many ways, it's like comparing a retired champion to an up-and-comer who, despite showing lots of promise and getting all the headlines, has yet to win a single bout.
But should the Falcon Heavy prove successful, it will likely be recognized as the natural successor to the Saturn V. Ever since the latter was retired in 1973, NASA has been without a rocket with which to mount long-range crewed missions. And while heavy-lift options have been available – such as the Delta IV Heavy and Atlas V – none have had the performance, payload capacity, or the affordability that the new era of space exploration needs.
In truth, this battle will take several years to unfold. Only after the Falcon Heavy is rigorously tested and SpaceX manages to deliver on their promises of cheaper space launches, a return to the Moon and a mission to Mars (or fail to, for that matter) will we be able to say for sure which rocket was the true champion of human space exploration! But in the meantime, I’m sure there’s plenty of smack talk to be had by fans of both! Preferably in a format that rhymes!
When your stated purpose is to send settlers to Mars by 2026, you're sure to encounter a lot of skepticism. And that is exactly what Dutch entrepreneur Bas Lansdorp has been dealing with ever since he first went public with MarsOne in 2012. In fact, in the past four years, everything from the project's schedule to its technical and financial feasibility to its ethics has been criticized by scientists, engineers and people in the aerospace industry.
However, Lansdorp and his organization have persevered, stating that they intend to overcome all the challenges in sending people on a one-way trip to the Red Planet. And in their most recent statement, MarsOne has announced that they have addressed the all-important issue of what their settlers will eat. In an experiment that feels like it was ripped from The Martian, MarsOne has completed testing different types of crops in simulated Martian soil, to see which ones could grow on Mars.
Located in the Dutch town of Nergena, MarsOne maintains a glasshouse complex where they have been conducting experiments. These experiments took place in 2013 and 2015, and involved Martian and Lunar soil simulants provided by NASA, along with Earth soil as a control group.
Using these, a team of ecologists and crop scientists from the Wageningen University & Research Center have been testing different kinds of seeds to see which ones will grow in a Lunar and Martian environment. These have included rye, radishes, garden cress and pea seed. And earlier this year, they added a crop of tomatoes and potatoes to the mix.
As Dr. Wieger Wamelink, the ecologist who led the experiments, told Universe Today via email:
“We started our first experiment in 2013 (published in Plos One in 2014) to investigate if it was possible to grow plants in Mars and moon soil simulants. We assume that plants will be grown indoors, because of the very harsh circumstances on both Mars and moon, very cold, no or almost no atmosphere and way to much cosmic radiation. That first experiment only had a few crops and mostly wild plants and clovers (for nitrogen binding from the atmosphere to manure the soil).”
After confirming that the seeds would germinate in the simulated soil after the first year, they then tested to see if the seeds from that harvest would germinate in the same soil to create another harvest. What they found was quite encouraging. In all four cases, the seeds managed to germinate nicely in both Martian and Lunar soil.
“Our expectation were very low,” said Wamelink, “so we were very surprised that on the Mars soil simulant plants grew rather well and even better than on our nutrient poor control earth soil. There were also problems, the biggest that it was very difficult to keep the soil moist and that though on Mars soil simulant there was growth it was not very good, i.e. the amount of biomass formed was low.”
And while they didn’t grow as well as the control group, which was grown in Earth soil, they did managed to produce time and again. This was intrinsic to the entire process, in order to make sure that any crops grown on Mars would have a full life-cycle. Being able to grow crops, replant seeds, and grow more would eliminate the need to bring new seeds for every crop cycle, thus ensuring that Martian colonists could be self-sufficient when it came to food.
In 2015, they conducted their second experiment. This time around, after planting the seeds in the simulated soil, they added organic matter to simulate the addition of organic waste from a previous crop cycle. And every Friday while the experiments were running, they added a nutrient solution to mimic the nutrients derived from fecal matter and urine (definite echoes of The Martian there!).
Once again, the results were encouraging. The crops grew, and the addition of organic matter improved the soil's water-holding capacity. Wamelink and his team were able to harvest from many of the ten crops they had used in the experiment, procuring another batch of radishes, tomatoes and peas. The only crop that did poorly was the batch of spinach they had added.
This year, the team’s experiments were focused on the issue of food safety. As any ecologist knows, plants naturally absorb minerals from their surrounding environment. And tests have shown that soils obtained from the Moon and Mars show concentrations of heavy metals and toxins – such as arsenic, cadmium, copper, lead, and iron (which is what gives Mars its reddish appearance). As Wamelink described the process:
“Again we have ten crops, but slightly different crops from last year; we included green beans and potatoes (best food still and Mark Watney also seems to love potatoes). Also repeated was the addition of organic matter, to mimic the addition of the plant parts that are not eaten from a previous growth cycle. Also new is the addition of liquid manure, to mimic the addition of human faeces… We know that both Mars and moon soil simulants contain heavy metals, like led, copper, mercury and chrome. The plants do not care about this, however when they end up in the eaten parts then they could poison the humans that eat them. There we have to test if it is safe to eat them.”
And again, the results were encouraging. In all cases, the crops showed that the concentrations of metals they contained were within human tolerances, and were therefore safe to eat. In some cases, the metal concentrations were even lower than those found in crops grown in potting soil.
“We now tested four species we harvested last year as a preliminary investigation and it shows that luckily there are no harmful quantities present in the fruits, so it is safe to eat them,” said Wamelink. “We will continue these analyses, because for the FDA they have to be analysed in fresh fruits and vegetables, where we did the analyses on dried material. Moreover we will also look at the content of large molecules, like vitamins, flavonoids (for the taste) and alkaloids (for toxic components).”
However, the Wageningen UR team hopes to test all ten of the crops they have grown in order to make sure that everything grown in Martian soil will be safe to eat. Towards this end, Wageningen UR has set up a crowdfunding campaign to finance their ongoing experiments. With public backing, they hope to show that future generations will be able to be self-sufficient on Mars, and not have to worry about things like arsenic and lead poisoning.
As an incentive, donors will receive a variety of potential gifts, which include samples of the soil simulant used for the experiment. But the top prize, a dinner based on the harvest, is being offered to people contributing €500 ($555.90 USD) or more. In what is being called the first "Martian meal", this dinner will take place once the experiment is complete and will of course include Martian potatoes!
Looking ahead, Wamelink and his associates also hope to experiment with crops that do not rely on an annual seed-to-harvest cycle. These include fruit trees, so that they might be able to grow apples, cherries, and strawberries in Martian soil. In addition, Wamelink has expressed interest in cultivating lupin seeds as a means of replacing meat in the Martian diet.
And when it comes right down to it, neither MarsOne nor the Wageningen UR team is alone in wanting to see what can be grown on Mars or other planets. For years, NASA has also been engaged in tests of its own to see which crops can be cultivated on Mars. And with the help of the Lima-based International Potato Center, their latest experiment involves cultivating potatoes in samples of Peruvian soil.
For hundreds of years, the Andean people have been cultivating potatoes in the region. And given the arid conditions, NASA believes the soil will serve as a good facsimile for Mars. But perhaps the greatest draw is the fact that cultivating potatoes in a simulated Martian environment immediately calls to mind Matt Damon in The Martian. In short, it's a spectacular PR move that NASA, looking to drum up support for its "Journey to Mars", cannot resist!
Naturally, experiments such as these are not just for the sake of meeting the challenges posed by MarsOne’s plan for one-way crewed missions to Mars. Alongside the efforts of NASA and others, they are part of a much larger effort to address the challenges posed by the renewed era of space exploration we find ourselves embarking on.
With multiple space agencies and private corporations (like SpaceX) hoping to put boots back on the Moon and Mars, and to establish permanent bases on these worlds and even in the outer Solar System, knowing what it will take for future generations of colonists and explorers to sustain themselves is just good planning.
Since the Authorization Act of 2010, NASA has been pushing ahead with the goal of sending astronauts to Mars by the 2030s. This goal has been the subject of much attention in recent years, and for good reason. Sending crewed missions to the Red Planet would be the single greatest initiative undertaken since the Apollo era, and the rewards would be equally great.
However, with the scheduled date for a mission approaching, and the upcoming presidential election, NASA is finding itself under pressure to show that they are making headway. Despite progress being made with both the Space Launch System (SLS) and the Orion Multi-Purpose Crew Vehicle, there are lingering issues which need to be worked out before NASA can mount its historic mission to Mars.
One of the biggest issues is that of assigned launch missions that will ensure the SLS is tested many times before a crewed mission to Mars is mounted. So far, NASA has produced some general plans as part of its "Journey to Mars", an important part of which is the use of the SLS and Orion spacecraft to send a crew beyond low-Earth orbit and explore a near-Earth asteroid by 2025.
This plan is not only intended to provide their astronauts with experience working beyond LEO, but to test the SLS and Orion's capabilities, not to mention some vital systems – such as Solar Electric Propulsion (SEP), which will be used to send cargo missions to Mars. Another major step is Exploration Mission 1 (EM-1), the first planned flight of the SLS and the second uncrewed test flight of the Orion spacecraft (scheduled for September 30th, 2018).
However, beyond this, NASA has only one other mission on the books, which is Exploration Mission 2 (EM-2). This mission will involve the crew performing a practice flyby of a captured asteroid in lunar orbit, and is scheduled for launch in 2023. It will be the first crewed test of the Orion spacecraft, and also the first time American astronauts have left low-Earth orbit since the Apollo 17 mission in 1972.
While significant, these missions remain the only two assigned flights for the SLS and Orion. Beyond them, dozens more have been proposed as part of NASA's three-phase plan to reach Mars. For instance, between 2018 and the 2030s, NASA would be responsible for launching a total of 32 missions in order to send the necessary hardware to near-Mars space before making crewed landings on Phobos and then on Mars.
This would be followed by two SLS flights in 2029, bringing the Trans-Earth Injection (TEI) stage to cis-lunar space, followed by a crew to perform the final checks on the Phobos Hab. By 2030, Phase Two (known as the “Proving Ground” phase) would begin with the last elements – the Earth Orbit Insertion (EOI) stage and taxi elements – being launched to cis-lunar orbit, and then all the equipment being sent to near-Mars space for pre-deployment.
By 2031, two more SLS missions would take place, where a Martian Hab would be launched, followed in 2032 by the launches of the Mars Orbit Insertion (MOI) and Trans-Mars Injection (TMI) stages. By 2033, Phase Three (the “Earth Independent” phase) would begin, where the Phobos crew would be transported to the Transit Hab, followed by the final crewed mission to the Martian surface.
Accomplishing all of this would require that NASA commit to making regular launches over the next few years. Such was the feeling of Bill Gerstenmaier – NASA’s Associate Administrator for Human Exploration and Operations – who recently indicated that NASA will need to mount launches at least once a year to establish a “launch cadence” with the SLS.
Mission proposals of this kind were also discussed at the recent Aerospace Safety Advisory Panel (ASAP) meeting – which meets annually to discuss matters relating to NASA’s safety performance. During the course of the meeting, Bill Hill – the Deputy Associate Administrator for Exploration Systems Development (ESD) in NASA’s Human Exploration and Operations Mission Directorate (HEOMD) – provided an overview of the latest developments in NASA’s planned mission.
By and large, the meeting focused on possible concepts for the Mars mission, which included using SEP and chemical propellants for sending hardware to cis-lunar space and near-Mars space, in advance of a mission to Phobos and the Martian surface. Two scenarios were proposed that would rely on these methods to varying extents, both of which called for a total of 32 SLS launches.
However, the outcome of this meeting seemed to indicate that NASA is still thinking over its long-term options and has not yet committed to anything beyond the mission to a near-Earth asteroid. For instance, NASA has indicated that it is laying the groundwork for Phase One of the Mars mission, which calls for flight testing to cis-lunar space.
However, according to Hill, NASA is currently engaged in "Phase 0" of the three-phase plan, which involves the use of the ISS to test crew health via long-duration space flight. In addition, there are currently no plans for developing Phases Two and Three of the mission. Other problems, such as the Orion spacecraft's heatshield – which is currently incapable of withstanding the speed of reentry coming all the way from Mars – have yet to be resolved.
Another major issue is that of funding. Thanks to the Obama administration and the passage of the Authorization Act of 2010, NASA has been able to take several crucial steps towards developing their plan for a mission to Mars. However, in order to take things to the next level, the US government will need to show a serious commitment to ensuring that all aspects of the plan get the funding they need.
And given that it is an election year, the budget environment may be changing in the near future. As such, now is the time for the agency to demonstrate that it is fully committed to every phase of its plan to put boots on the ground of Mars.
On the other hand, NASA has taken some very positive strides in the past six years, and one cannot deny that they are serious about making the mission happen in the time frame it has provided. They are also on track when it comes to proving key concepts and technology.
In the coming years, with flight tests of the SLS and crewed tests of the Orion, they will be even further along. And given the support of both the federal government and the private sector, nothing should stand in the way of human boots touching red soil by the 2030s.
Establishing a human settlement on Mars has been the fevered dream of space agencies for some time. Long before NASA announced its "Journey to Mars" – a plan that outlined the steps that need to be taken to mount a manned mission by the 2030s – the agency was planning how a crewed mission could lead to the establishment of stations on the planet's surface. And it seems that in the coming decades, this could finally become a reality.
But when it comes to establishing a permanent colony – another point of interest when it comes to Mars missions – the coming decades might be a bit too soon. Such was the message during a recent colloquium hosted by NASA’s Future In-Space Operations (FISO) working group. Titled “Selecting a Landing Site for Humans on Mars”, this presentation set out the goals for NASA’s manned mission in the coming decades.
Welcome back to our series on Colonizing the Solar System! Today, we take a look at that cold and dry world known as “Earth’s Twin”. I’m talking about Mars. Enjoy!
Mars. It's a pretty unforgiving place. On this dry, desiccated world, the average surface temperature is -55 °C (-67 °F). And at the poles, temperatures can reach as low as -153 °C (-243 °F). Much of that has to do with its atmosphere, which is too thin to retain heat (not to mention breathe). So why then is the idea of colonizing Mars so intriguing to us?
Well, there are a number of reasons, which include the similarities between our two planets, the availability of water, the prospects for generating food, oxygen, and building materials on-site. And there are even long-term benefits to using Mars as a source of raw materials and terraforming it into a liveable environment. Let’s go over them one by one…
Examples in Fiction:
The idea of exploring and settling Mars has been explored in fiction for over a century. Most of the earliest depictions of Mars in fiction involved a planet with canals, vegetation, and indigenous life – owing to the observations of astronomers like Giovanni Schiaparelli and Percival Lowell.
However, by the latter half of the 20th century (thanks in large part to the Mariner 4 missions and scientists learning of the true conditions on Mars) fictional accounts moved away from the idea of a Martian civilization and began to deal with humans eventually colonizing and transforming the environment to suit their needs.
This shift is perhaps best illustrated by Ray Bradbury's The Martian Chronicles (published in 1950). A series of short stories that take place predominantly on Mars, the collection begins with stories about a Martian civilization that encounters human explorers. The stories then transition to ones that deal with human settlements on the planet, the genocide of the Martians, and Earth eventually experiencing nuclear war.
During the 1950s, many classic science fiction authors wrote about colonizing Mars. These included Arthur C. Clarke and his 1951 story The Sands of Mars, which is told from the point of view of a human reporter who travels to Mars to write about human colonists. While attempting to make a life for themselves on a desert planet, they discover that Mars has native life forms.
In 1952, Isaac Asimov released The Martian Way, a story that deals with the conflict between Earth and Mars colonists. The latter manage to survive by salvaging space junk and are forced to travel to Saturn to harvest ice when Earth enforces an embargo on their planet.
Robert A. Heinlein’s seminal novel Stranger in a Strange Land (1961) tells the story of a human who was raised on Mars by the native Martians and then travels to Earth as a young adult. His contact with humans proves to have a profound effect on Earth’s culture, and calls into questions many of the social mores and accepted norms of Heinlein’s time.
Philip K. Dick’s fiction also features Mars often, in every case being a dry, empty land with no native inhabitants. In his works Martian Time Slip (1964), and The Three Stigmata of Palmer Eldritch (1965), life on Mars is presented as difficult, consisting of isolated communities who do not want to live there.
In Do Androids Dream of Electric Sheep? (1968), most of humanity has left Earth after a nuclear war and now live in “the colonies” on Mars. Androids (Replicants) escaping illegally to come back to Earth claim that they have left because “nobody should have to live there. It wasn’t conceived for habitation, at least not within the last billion years. It’s so old. You feel it in the stones, the terrible old age”.
In Kim Stanley Robinson's Mars trilogy (published between 1992 and 1996), Mars is colonized and then terraformed over the course of many centuries. Ben Bova's Grand Tour series – which deals with the colonization of the Solar System – also includes a novel titled Mars (1992). In this novel, explorers travel to Mars – visiting locations including Mt. Olympus and Valles Marineris – to determine if Mars is worth colonizing.
Alastair Reynolds’ short story “The Great Wall of Mars” (2000) takes place in a future where the most technologically advanced humans are based on Mars and embroiled in an interplanetary war with a faction that takes issue with their experiments in human neurology.
In Hannu Rajaniemi's The Quantum Thief (2010), we get a glimpse of Mars in the far future. The story centers on the city of Oubliette, which moves across the face of the planet. Andy Weir's The Martian (2011) takes place in the near future, where an astronaut is stranded on Mars and forced to survive until a rescue party arrives.
Kim Stanley Robinson's 2312 (2012) takes place in a future where humanity has colonized much of the Solar System. Mars is mentioned in the course of the story as a world that has been settled and terraformed (a process which involved lasers cutting canals similar to those Schiaparelli described) and now has oceans covering much of its surface.
NASA’s proposed manned mission to Mars – which is slated to take place during the 2030s using the Orion Multi-Purpose Crew Vehicle (MPCV) and the Space Launch System (SLS) – is not the only proposal to send humans to the Red Planet. In addition to other federal space agencies, there are also plans by private corporations and non-profits, some of which are far more ambitious than mere exploration.
The European Space Agency (ESA) has long-term plans to send humans, though they have yet to build a manned spacecraft. Roscosmos, the Russian Federal Space Agency, is also planning a manned Mars mission, with simulations (called Mars-500) having been completed in Russia back in 2011. The ESA is currently participating in these simulations as well.
In 2012, a group of Dutch entrepreneurs revealed plans for a crowdfunded campaign to establish a human Mars base, beginning in 2023. Known as Mars One, the plan calls for a series of one-way missions to establish a permanent and expanding colony on Mars, which would be financed with the help of media participation.
Other details of the MarsOne plan include sending a telecom orbiter by 2018, a rover in 2020, and the base components and its settlers by 2023. The base would be powered by 3,000 square meters of solar panels, and the SpaceX Falcon Heavy rocket would be used to launch the hardware. The first crew of 4 astronauts would land on Mars in 2025; then, every two years, a new crew of 4 astronauts would arrive.
On December 2nd, 2014, NASA’s Advanced Human Exploration Systems and Operations Mission Director Jason Crusan and Deputy Associate Administrator for Programs James Reuther announced tentative support for the Boeing “Affordable Mars Mission Design.” Currently planned for the 2030s, the mission profile includes plans for radiation shielding, centrifugal artificial gravity, in-transit consumable resupply, and a return-lander.
SpaceX and Tesla CEO Elon Musk has also announced plans to establish a colony on Mars with a population of 80,000 people. Intrinsic to this plan is the development of the Mars Colonial Transporter (MCT), a spaceflight system that would rely on reusable rocket engines, launch vehicles and space capsules to transport humans to Mars and return to Earth.
As of 2014, SpaceX has begun developing the large Raptor rocket engine for the Mars Colonial Transporter, and a successful test was announced in September of 2016. In January 2015, Musk said that he hoped to release details of the “completely new architecture” for the Mars transport system in late 2015.
In June 2016, Musk stated that the first unmanned flight of the Mars transport spacecraft would take place in 2022, followed by the first manned MCT Mars flight departing in 2024. In September 2016, during the 2016 International Astronautical Congress, Musk revealed further details of his plan, which included the design for an Interplanetary Transport System (ITS) and estimated costs.
There may come a day when, after generations of terraforming and numerous waves of colonists, that Mars will begin to have a viable economy as well. This could take the form of mineral deposits being discovered and then sent back to Earth for sale. Launching precious metals, like platinum, off the surface of Mars would be relatively inexpensive thanks to its lower gravity.
But according to Musk, the most likely scenario (at least for the foreseeable future) would involve an economy based on real estate. With human populations exploding all over Earth, a new destination that offers plenty of room to expand is going to look like a good investment.
And once transportation issues are worked out, savvy investors are likely to start buying up land. Plus, there is likely to be a market for scientific research on Mars for centuries to come. Who knows what we might find once planetary surveys really start to open up!
Over time, many or all of the difficulties in living on Mars could be overcome through the application of geoengineering (aka. terraforming). Using organisms like cyanobacteria and phytoplankton, colonists could gradually convert much of the CO2 in the atmosphere into breathable oxygen.
In addition, it is estimated that there is a significant amount of carbon dioxide (CO2) in the form of dry ice at the Martian south pole, not to mention absorbed in the planet's regolith (soil). If the temperature of the planet were raised, this ice would sublimate into gas and increase atmospheric pressure. Although the atmosphere would still not be breathable by humans, it would be sufficient to eliminate the need for pressure suits.
A possible way of doing this is by deliberately triggering a greenhouse effect on the planet. This could be done by importing ammonia ice from the atmospheres of other planets in our Solar System. Because ammonia (NH3) is mostly nitrogen by weight, it could also supply the buffer gas needed for a breathable atmosphere – much as it does here on Earth.
Similarly, it would be possible to trigger a greenhouse effect by importing hydrocarbons like methane – which is common in Titan’s atmosphere and on its surface. This methane could be vented into the atmosphere where it would act to compound the greenhouse effect.
Robert Zubrin and Chris McKay, an astrobiologist with NASA's Ames Research Center, have also suggested creating facilities on the surface that could pump greenhouse gases into the atmosphere, thus triggering global warming (much as they do here on Earth).
Other possibilities exist as well, ranging from orbital mirrors that would heat the surface to deliberately impacting the surface with comets. But regardless of the method, possibilities exist for transforming Mars’ environment that could make it more suitable for humans in the long run – many of which we are currently doing right here on Earth (with less positive results).
Another proposed solution is building habitats underground. By building a series of tunnels that connect between subterranean habitats, settlers could forgo the need for oxygen tanks and pressure suits when they are away from home.
Additionally, it would provide protection against radiation exposure. Based on data obtained by the Mars Reconnaissance Orbiter, it is also speculated that habitable environments exist underground, making it an even more attractive option.
As already mentioned, there are many interesting similarities between Earth and Mars that make it a viable option for colonization. For starters, Mars and Earth have very similar lengths of days. A Martian day is 24 hours and 39 minutes, which means that plants and animals – not to mention human colonists – would find that familiar.
Mars also has an axial tilt that is very similar to Earth’s, which means it has the same basic seasonal patterns as our planet (albeit for longer periods of time). Basically, when one hemisphere is pointed towards the Sun, it experiences summer while the other experiences winter – complete with warmer temperatures and longer days.
This too would work well when it comes to growing seasons and would provide colonists with a comforting sense of familiarity and a way of measuring out the year. Much like farmers here on Earth, native Martians would experience a “growing season”, a “harvest”, and would be able to hold annual festivities to mark the changing of the seasons.
Also, much like Earth, Mars exists within our Sun’s habitable zone (aka. “Goldilocks zone“), though it is slightly towards its outer edge. Venus is similarly located within this zone, but its location on the inner edge (combined with its thick atmosphere) has led to it becoming the hottest planet in the Solar System. That, combined with its sulfuric acid rains makes Mars a much more attractive option.
Additionally, Mars is closer to Earth than the other Solar planets – except for Venus, but we already covered why it's not a very good option! This makes the process of colonizing it easier. In fact, Earth and Mars periodically reach opposition – the point at which they are closest to each other – and since that distance varies from one opposition to the next, certain launch windows are more ideal than others for sending colonists.
For example, on April 8th, 2014, Earth and Mars were 92.4 million km (57.4 million miles) apart at opposition. On May 22nd, 2016, they will be 75.3 million km (46.8 million miles) apart, and by July 27th of 2018, a meager 57.6 million km (35.8 million miles) will separate our two worlds. During these windows, getting to Mars would be a matter of months rather than years.
Also, Mars has vast reserves of water in the form of ice. Most of this water ice is located in the polar regions, but surveys of Martian meteorites have suggested that much of it may also be locked away beneath the surface. This water could be extracted and purified for human consumption easily enough.
In his book, The Case for Mars, Robert Zubrin also explains how future human colonists might be able to live off the land when traveling to Mars, and eventually colonize it. Instead of bringing all their supplies from Earth – like the inhabitants of the International Space Station – future colonists would be able to make their own air, water, and even fuel by splitting Martian water into oxygen and hydrogen.
Preliminary experiments have shown that Mars soil could be baked into bricks to create protective structures, which would reduce the amount of material that needs to be shipped to the surface. Earth plants could eventually be grown in Martian soil too, assuming they get enough sunlight and carbon dioxide. Over time, planting on the native soil could also help to create a breathable atmosphere.
Despite the aforementioned benefits, there are also some rather monumental challenges to colonizing the Red Planet. For starters, there is the matter of the average surface temperature, which is anything but hospitable. While temperatures around the equator at midday can reach a balmy 20 °C, at the Curiosity site – the Gale Crater, which is close to the equator – typical nighttime temperatures are as low as -70 °C.
The gravity on Mars is also only about 40% of what we experience on Earth, which would make adjusting to it quite difficult. According to a NASA report, the effects of zero-gravity on the human body are quite profound, with a loss of up to 5% muscle mass a week and 1% of bone density a month.
Naturally, these losses would be lower on the surface of Mars, where there is at least some gravity. But permanent settlers would still have to contend with the problems of muscle degeneration and osteoporosis in the long run.
And then there’s the atmosphere, which is unbreathable. About 95% of the planet’s atmosphere is carbon dioxide, which means that in addition to producing breathable air for their habitats, settlers would also not be able to go outside without a pressure suit and bottled oxygen.
Mars also has no global magnetic field comparable to Earth’s geomagnetic field. Combined with a thin atmosphere, this means that a significant amount of ionizing radiation is able to reach the Martian surface.
Thanks to measurements taken by the Mars Odyssey spacecraft’s Mars Radiation Environment Experiment (MARIE), scientists learned that radiation levels in orbit above Mars are 2.5 times higher than at the International Space Station. Levels on the surface would be lower, but would still be higher than human beings are accustomed to.
In fact, a recent paper submitted by a group of MIT researchers – which analyzed the Mars One plan to colonize the planet beginning in 2020 – concluded that the first astronaut would suffocate after 68 days, while the others would die from a combination of starvation, dehydration, or incineration in an oxygen-rich atmosphere.
In short, the challenges to creating a permanent settlement on Mars are numerous, but not necessarily insurmountable. And if we do decide, as individuals and as a species, that Mars is to become a second home for humanity, we will no doubt find creative ways to address them all.
Who knows? Someday, perhaps even within our own lifetimes, there could be real Martians. And they would be us!
Universe Today has many interesting articles about the possibility of humans living on Mars. Here's a great article by Nancy Atkinson about the possibility of a one-way, one-person trip to Mars.
In the past four decades, NASA and other space agencies from around the world have accomplished some amazing feats. Together, they have sent manned missions to the Moon, explored Mars, mapped Venus and Mercury, and conducted surveys and captured breathtaking images of the Outer Solar System. However, looking ahead to the next generation of exploration and the more-distant frontiers that remain to be explored, it is clear that new ideas need to be put forward for how to quickly and efficiently reach those destinations.
Basically, this means finding ways to power rockets that are more fuel and cost-effective while still providing the necessary power to get crews, rovers and orbiters to their far-flung destinations. In this respect, NASA has been taking a good look at nuclear fission as a possible means of propulsion.
In fact, according to a presentation made by Dr. Michael G. Houts of the NASA Marshall Space Flight Center back in October of 2014, nuclear power and propulsion have the potential to be "game changing technologies for space exploration."
As the Marshall Space Flight Center's manager of nuclear thermal research, Dr. Houts is well versed in the benefits nuclear power has to offer space exploration. According to the presentation he and fellow staffers made, a fission reactor can be used in a rocket design to create Nuclear Thermal Propulsion (NTP). In an NTP rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.
A second possible method, known as Nuclear Electric Propulsion (NEP), involves the same basic reactor converting its heat into electrical energy, which then powers an electrical engine. In both cases, the rocket relies on nuclear fission to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.
Compared to this traditional form of propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density nuclear fuel offers compared to chemical rocket fuel. At a steady state, a fission reactor produces an average of 2.5 neutrons per reaction, yet it takes only a single neutron to cause a subsequent fission, produce a chain reaction, and provide constant power.
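To see why that single-neutron threshold matters, here is a toy sketch (not from Houts' presentation; the multiplication factor k and all values are illustrative) of how a neutron population evolves generation by generation:

```python
# Toy model of a fission chain reaction: each generation multiplies the neutron
# population by k, the average number of neutrons per fission that go on to
# trigger another fission.
def neutron_population(k, n0=1.0, generations=6):
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k)
    return pops

# k < 1 dies out; k = 1 sustains steady power (how reactors are operated);
# k > 1 grows exponentially. Since fission yields ~2.5 neutrons on average,
# control systems absorb the excess to hold k at 1.
for k in (0.9, 1.0, 1.1):
    print(k, [round(n, 2) for n in neutron_population(k)])
```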
In fact, according to the report, an NTP rocket could generate 200 kWt of power using a single kilogram of uranium for a period of 13 years – which works out to a fuel efficiency rating of about 45 grams per 1000 MW-hr.
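That figure checks out arithmetically. A quick sketch of the math, using only the numbers quoted above:

```python
# Check the quoted fuel-efficiency figure: 1 kg of uranium producing
# 200 kWt continuously for 13 years.
HOURS_PER_YEAR = 365.25 * 24           # ~8,766 hours

total_mwh = 0.2 * 13 * HOURS_PER_YEAR  # 200 kWt = 0.2 MWt
grams_per_1000_mwh = 1000.0 * (1000.0 / total_mwh)  # 1,000 g over that output
print(f"{total_mwh:,.0f} MW-hr total -> {grams_per_1000_mwh:.0f} g per 1000 MW-hr")
# -> ~22,792 MW-hr total -> ~44 g per 1000 MW-hr, in line with the ~45 g quoted.
```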
In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This is what is known as specific impulse, which is measured either in kilonewton-seconds per kilogram (kN·s/kg) or in the number of seconds the rocket can continually fire. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions. And a more powerful nuclear engine would mean reduced trip times, another cost-cutting measure.
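The two units are related by standard gravity: a specific impulse in kN·s/kg is numerically the effective exhaust velocity in km/s, and dividing by g0 (~9.81 m/s²) converts it to seconds. A minimal sketch of the conversion (the sample values are ballparks for the engine classes described below):

```python
# Convert specific impulse between the two units used in this article.
# 1 kN*s/kg is an effective exhaust velocity of 1,000 m/s; dividing by
# standard gravity gives the equivalent figure in seconds.
G0 = 9.80665  # standard gravity, m/s^2

def isp_seconds(kns_per_kg):
    return kns_per_kg * 1000.0 / G0

for v_e in (8.5, 13.0, 30.0):  # illustrative values only
    print(f"{v_e:5.1f} kN*s/kg -> {isp_seconds(v_e):5.0f} s")
# -> ~867 s (solid core), ~1,326 s (liquid core), ~3,059 s (gas core)
```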
Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design to more advanced and efficient concepts that rely on either a liquid or a gas core.
In the case of a solid-core design, the only type that has ever been built, a reactor made from materials with a very high melting point houses a collection of solid uranium rods which undergo controlled fission. The hydrogen fuel is contained in a separate tank; it passes through tubes around the reactor, gaining heat and being converted into plasma before being channeled through the nozzles to achieve thrust.
Using hydrogen propellant, a solid-core design typically delivers specific impulses on the order of 850 to 1000 seconds, which is about twice that of liquid hydrogen-oxygen designs – i.e. the Space Shuttle’s main engine.
However, a significant drawback arises from the fact that nuclear reactions in a solid-core model can create much higher temperatures than the conventional materials can withstand. The cracking of fuel coatings can also result from large temperature variations along the length of the rods, which taken together, sacrifices much of the engine’s potential for performance.
Many of these problems were addressed with the liquid-core design, where nuclear fuel is mixed into the liquid hydrogen, allowing the fission reaction to take place in the liquid mixture itself. This design can operate at temperatures above the melting point of the nuclear fuel thanks to the fact that the container wall is actively cooled by the liquid hydrogen. It is also expected to deliver a specific impulse of 1,300 to 1,500 seconds (13 to 15 kN·s/kg).
However, compared to the solid-core design, engines of this type are much more complicated, and therefore more expensive and difficult to build. Part of the problem has to do with the time it takes to achieve a fission reaction, which is significantly longer than the time it takes to heat the hydrogen fuel. Therefore, engines of this kind require methods that both trap the fuel inside the engine and simultaneously allow heated plasma to exit through the nozzle.
The final classification is the gas-core engine, a modification of the liquid-core design that uses rapid circulation to create a ring-shaped pocket of gaseous uranium fuel in the middle of the reactor that is surrounded by liquid hydrogen. In this case, the hydrogen fuel does not touch the reactor wall, so temperatures can be kept below the melting point of the materials used.
An engine of this kind could allow for specific impulses of 3,000 to 5,000 seconds (30 to 50 kN·s/kg). But in an "open-cycle" design of this kind, the losses of nuclear fuel would be difficult to control. An attempt to remedy this was drafted with the "closed-cycle" design – aka. the "nuclear lightbulb" engine – where the gaseous nuclear fuel is contained in a series of super-high-temperature quartz containers.
Although this design is less efficient than the open-cycle design, and has more in common with the solid-core concept, the limiting factor here is the critical temperature of quartz rather than that of the fuel stack. What's more, the closed-cycle design is still expected to deliver a respectable specific impulse of about 1,500–2,000 seconds (15–20 kN·s/kg).
However, as Houts indicated, one of the greatest assets nuclear fission has going for it is the long history of service it has enjoyed here on Earth. In addition to commercial reactors providing electricity all over the world, naval vessels (such as aircraft carriers and submarines) have made good use of slow-fission reactors for decades.
Also, NASA has been relying on nuclear power sources to run unmanned craft and rovers for over four decades, mainly in the form of Radioisotope Thermoelectric Generators (RTGs) and Radioisotope Heater Units (RHUs). In the case of the former, heat generated by the slow decay of plutonium-238 (Pu-238) is converted into electricity. In the case of the latter, the heat itself is used to keep components and ships' systems warm and running.
These types of generators have been used to power and maintain everything from the Apollo rockets to the Curiosity rover, as well as countless satellites, orbiters and robots in between. Since its inception, a total of 44 missions have been launched by NASA that have used either RTGs or RHUs, while the former Soviet space program launched a comparatively solid 33.
Nuclear engines were also considered for a time as a replacement for the J-2, a liquid-fuel cryogenic rocket engine used on the S-II and S-IVB stages of the Saturn V and Saturn I rockets. But despite numerous versions of solid-core reactors having been produced and tested in the past, none were ever put into service for an actual space flight.
Between 1959 and 1972, the United States tested twenty different sizes and designs during Project Rover and NASA’s Nuclear Engine for Rocket Vehicle Application (NERVA) program. The most powerful engine ever tested was the Phoebus 2a, which during a high-power test operated for a total of 32 minutes – 12 minutes of which were at power levels of more than 4.0 million kilowatts.
But looking to the future, Houts and the Marshall Space Flight Center see great potential and many possible applications. Examples cited in the report include long-range satellites that could explore the Outer Solar System and Kuiper Belt; fast, efficient transportation for manned missions throughout the Solar System; and even the provision of power for settlements on the Moon and Mars someday.
One possibility is to equip NASA's latest flagship – the Space Launch System (SLS) – with chemically-powered lower-stage engines and a nuclear-thermal engine on its upper stage. The nuclear engine would remain "cold" until the rocket had achieved orbit, at which point the upper stage would be deployed and the reactor would be activated to generate thrust.
This concept for a “bimodal” rocket – one which relies on chemical propellants to achieve orbit and a nuclear-thermal engine for propulsion in space – could become the mainstay of NASA and other space agencies in the coming years. According to Houts and others at Marshall, the dramatic increase in efficiency offered by such rockets could also facilitate NASA’s plans to explore Mars by allowing for the reliable delivery of high-mass automated payloads in advance of manned missions.
These same rockets could then be retooled for speed (instead of mass) and used to transport the astronauts themselves to Mars in roughly half the time it would take for a conventional rocket to make the trip. This would not only save on time and cut mission costs, it would also ensure that the astronauts were exposed to less harmful solar radiation during the course of their flight.
To see this vision become reality, Dr. Houts and other researchers from the Marshall Space Flight Center's Propulsion Research and Development Laboratory are currently conducting NTP-related tests at the Nuclear Thermal Rocket Element Environmental Simulator (or "NTREES") in Huntsville, Alabama.
Here, they have spent the past few years analyzing the properties of various nuclear fuels in a simulated thermal environment, hoping to learn more about how they might affect engine performance and longevity when it comes to a nuclear-thermal rocket engine.
These tests are slated to run until June of 2015, and are expected to lay the groundwork for large-scale ground tests and eventual full-scale testing in flight. The ultimate goal of all of this is to ensure that a manned mission to Mars takes place by the 2030s, and to provide NASA flight engineers and mission planners with all the information they need to see it through.
But of course, it is also likely to have its share of applications when it comes to future Lunar missions, sending crews to study Near-Earth Objects (NEOs), and sending craft to the Jovian moons and other locations in the outer Solar System. As the report shows, NTP craft can be easily modified using modular components to perform everything from Lunar cargo landings to crewed missions, to surveying Near-Earth Asteroids (NEAs).
The universe is a big place, and space exploration is still very much in its infancy. But if we intend to keep exploring it and reaping the rewards that such endeavors have to offer, our methods will have to mature. NTP is merely one proposed possibility. But unlike Nuclear Pulse Propulsion, the Daedalus concept, anti-matter engines, or the Alcubierre Warp Drive, a rocket that runs on nuclear fission is feasible, practical, and possible within the near-future.
Nuclear thermal research at the Marshall Center is part of NASA’s Advanced Exploration Systems (AES) Division, managed by the Human Exploration and Operations Mission Directorate and including participation by the U.S. Department of Energy.
With robotic spacecraft, we have explored, discovered and expanded our understanding of the Solar System and the Universe at large. Our five senses have long since reached their limits and cannot reveal the presence of new objects or properties without the assistance of extraordinary sensors and optics. Data is returned and is transformed into a format that humans can interpret.
Humans remain confined to low-Earth orbit, and forty-three years have passed since humans last escaped the bonds of Earth's gravity. NASA's budget is divided between human endeavors and robotic ones, and each year there is a struggle to find balance between developing the software and hardware to launch humans or to carry robotic surrogates. Year after year, humans continue to advance robotic capabilities and artificial intelligence (A.I.), and with each passing year, it becomes less clear how we will fit ourselves into the future exploration of the Solar System and beyond.
Is it a race in which we are unwittingly partaking that places us against our inventions? And like the aftermath of the Kasparov versus Deep Blue chess match, are we destined to accept a segregation as necessary? Allow robotics, with or without A.I., to do what they do best – explore space and other worlds?
Should we continue to find new ways and better ways to plug ourselves into our surrogates and appreciate with greater detail what they sense and touch? Consider how naturally our children engross themselves in games and virtual reality and how difficult it is to separate them from the technology. Or is this just a prelude and are we all antecedents of future Captain Kirks and Jean Luc Picards?
Approximately 55% of the NASA budget is in the realm of human spaceflight (HSF). This includes specific funds for Orion and SLS and half measures of supporting segments of the NASA agency, such as Cross-Agency Support, Construction and Maintenance. In contrast, appropriations for robotic missions – project development, operations, R&D – represent 39% of the budget.
The appropriation of funds has always favored human spaceflight, primarily because HSF requires costlier, heavier and more complex systems to maintain humans in the hostile environment of space. And while NASA budgets are not nearly weighted 2-to-1 in favor of human spaceflight, few would contest that the return on investment (ROI) is over 2-to-1 in favor of robotic driven exploration of space. And many would scoff at this ratio and counter that 3-to-1 or 4-to-1 is closer to the advantage robots have over humans.
Politics plays a significantly bigger role in appropriations for HSF than for robotic missions. The latter are distributed among smaller projects and operations, while HSF has always involved large, expensive programs lasting decades. Big programs attract the interest of public officials who want to bring capital and jobs to their districts or states.
NASA appropriations are complicated further by a rift between the White House and Capitol Hill along party lines. The Democratic White House has favored robotics and the use of private enterprise to advance NASA, while Republicans on the Hill have supported the big human spaceflight projects; political divisions over climate change complicate matters further. How the two parties treat NASA is, at least, the opposite of how the public perceives their platforms – smaller government, less spending, and support for private enterprise on one side, more social programs on the other. This tug of war is clearly visible in the NASA budget figures.
The House reduced the White House request for NASA Space Technology by 15% while increasing the funds for Orion and SLS by 16%. Space Technology represents funds that NASA would use to develop the Asteroid Redirect Mission (ARM), which the Obama administration favors as a foundation for the first use of SLS as part of a human mission to an asteroid. In contrast, the House appropriated $100 million to the Europa mission concept. Due to the delays of Orion and SLS development and anemic funding of ARM, the first use of SLS could be to send a probe to Europa.
Appropriations for Space Ops & Exploration (effectively HSF) increased ~6% ($300 million), while NASA Science gained ~2% ($100 million) over the 2014 levels ultimately set by Capitol Hill legislators. The Planetary Society, the Science Mission Directorate's (SMD) staunchest supporter, has expressed satisfaction that the Planetary Science budget has nearly reached its recommended $1.5 billion. However, the increase comes with the requirement that $100 million be used for Europa concept development, and it stands in contrast to cutbacks in other segments of the SMD budget.
Note also that NASA Education and Public Outreach (EPO) received a significant boost from the Republican-controlled Capitol Hill. In addition to the specific funding – a 2% increase over 2014 and 34% over the White House request – $42 million is given specifically to the Science Mission Directorate (SMD) for EPO. The Obama Administration has attempted to reduce NASA EPO in favor of a consolidated, government-wide approach intended to improve effectiveness and shrink government.
The drive to explore beyond Earth's orbit and set foot on new worlds is not just a question of finances. In retrospect, it was not a question of finances at all; remaining shackled to Earth was a choice of vision. Today, politicians and administrators cannot proclaim, 'Let's do it again! Let's make a better Shuttle or a better Space Station.' There is no choice but to go beyond Earth orbit – but where?
While the International Space Station program, led by NASA, now maintains a continuous human presence in outer space, more people ask, 'Why aren't we there yet?' Why haven't we stepped onto Mars, or onto the Moon again, or onto anything other than Earth, instead of floating in the void of low-Earth orbit? The answer now resides in museums and in the habitat orbiting the Earth every 90 minutes.
The retired Space Shuttle program and the International Space Station represent the funds expended on human spaceflight over the last 40 years – funds, and time, arguably equivalent to what would have been needed to send humans to Mars. Some would argue that the money and time expended could have meant multiple human missions to Mars, and maybe even a permanent presence. But the American human spaceflight program chose a less costly, more achievable path – staying close to home.
Ultimately, the goal is Mars. Administrators at NASA and others have become comfortable with this proclamation, though some would say it is treated more as a resignation. Presidents have defined the objectives of human spaceflight and then redefined them: the Moon, Lagrangian points, or asteroids as waypoints on the way to landing humans on Mars. Partial plans and roadmaps have been constructed by NASA, and now politicians have mandated a roadmap. Politicians also forced the continued development of a big rocket – one which needs a clear path to justify its cost to taxpayers. One does need a big rocket to get anywhere beyond low-Earth orbit. However, the cancellation of the Constellation program – meant to build the Shuttle's replacement and a new human-rated spacecraft – has brought delays and even more cost overruns.
During the ten years spent so far on replacing the Space Shuttle, with at least five more remaining, events beyond the control of NASA and the federal government have taken place. Private enterprise is developing several new approaches to lofting payloads to Earth orbit and beyond, and more countries have taken on the challenge. Spearheading this activity, independent of NASA or Washington plans, has been Space Exploration Technologies Corporation (SpaceX).
SpaceX's Falcon 9 and the upcoming Falcon Heavy represent alternatives to what was originally envisioned in the Constellation program with Ares I and Ares V. Falcon Heavy will not have the capability of an Ares V, but at roughly $100 million per flight versus $600 million per flight for what Ares V has become – the Space Launch System (SLS) – there are those who would argue that 'time is up.' NASA has taken too long, and the cost of SLS is not justifiable now that private enterprise has developed something cheaper, and done so faster. Are the Falcon 9 and Falcon Heavy "better," in the sense of NASA administrator Dan Goldin's proclamation 'Faster, Better, Cheaper'? Better than SLS technology? Better simply because they are cheaper per pound of payload lifted? Better because they are arriving ready to use sooner than SLS?
Humans will always depend on robotic launch vehicles, capsules, and habitats laden with technological wonders to make our spaceflight possible. But once we step out beyond Earth orbit and onto other worlds, what shall we do? From Carl Sagan to Steve Squyres, NASA scientists have stated that a trained astronaut could do in just weeks what the Mars rovers have required years to accomplish. How long will that claim hold up – and is it really still true?
Since chess champion Garry Kasparov was defeated by IBM's Deep Blue, there have been eight two-year transistor-doubling periods in integrated circuits – a factor of 256. Arguably, computers have grown 100 times more powerful in those 17 years. However, robotics is not just electronics. It is the confluence of several technologies that developed steadily over the 40 years that Shuttle technology stood still and the 20-plus years that Space Station designs were locked into their technological choices. Advances in materials science, nanotechnology, electro-optics, and software development are equally important.
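A minimal sketch of that doubling arithmetic, using only the figures cited above (the 17-year span and the two-year doubling period); the gap between raw transistor count and usable performance is why the practical gain is hedged at roughly 100 times:

```python
# Rough Moore's-law arithmetic from the text: transistor counts doubling
# every two years since the 1997 Kasparov vs. Deep Blue match.
years_elapsed = 17          # span discussed in the text
doubling_period = 2         # years per doubling

doublings = years_elapsed // doubling_period    # 8 complete doubling periods
transistor_factor = 2 ** doublings              # 2^8 = 256

print(f"{doublings} doublings -> transistor counts up ~{transistor_factor}x")
# Transistor count is not the same as usable performance, which is why
# the practical gain is estimated at only ~100x.
```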
While human decision making has been prone to spinning its wheels, making poor choices, and committing logistical errors, the development of robotics is a juggernaut. Although appropriations for human spaceflight have always surpassed those for robotics, advances in robotics have been driven by government investments across numerous agencies and by private enterprise. The noted futurist and inventor Ray Kurzweil, who predicts the arrival of the Singularity around 2045 (he does not claim the date is exact), argues that machines surpassing human intellect is inevitable under "The Law of Accelerating Returns." Technological development, in short, is a juggernaut.
In the same year that NASA was founded, 1958, the term "singularity" was first applied to machine intelligence, in mathematician Stanislaw Ulam's published recollection of a conversation with John von Neumann about ever-accelerating technological progress surpassing human affairs.
Unknowingly, this is the foot race NASA has been in since its creation. The mechanisms and electronics that made landing men on the surface of the Moon possible never stopped advancing. In that same span, human decisions and plans for NASA never stopped vacillating, nor stopped locking existing technology into designs that suffered delays and cost overruns before launching humans to space.
So are we destined to arrive on Mars and roam its surface like retired geologists and biologists wandering in the desert with a poking stick or rock hammer? Have we wasted too much time and has the window passed in which human exploration can make discoveries that robotics cannot accomplish faster, better and cheaper? Will Mars just become an art colony where humans can experience new sunrises and setting moons? Or will we segregate ourselves from our robotic surrogates and appreciate our limited skills and go forth into the Universe? Or will we mind meld with robotics and master our own biology just moments after taking our first feeble steps beyond the Earth? |
Climatic Regions of the United States
Because of its midlatitude location and vast size, the United States experiences a wide variety of climates. At one extreme are the tropical islands of Hawaii; at the other, the arctic conditions of northern Alaska. The majority of Americans live between these two extremes in a group of climatic regions with unique moisture and temperature patterns.
Geographers have traditionally divided the 48 contiguous United States into two broad patterns of continental climate: the humid East and the arid West. The dividing line most often used is 100 degrees west longitude, an imaginary north-south line extending through the Great Plains from Texas to North Dakota.
The humid East receives abundant precipitation throughout the year. Winters in the northern part are very cold, with much snowfall. In the southern part, rainfall is plentiful; summers are very hot, but winters are mild. Because of its bountiful moisture, the humid East has traditionally been a very important agricultural area. Once a land of vast forests, the region was cleared by early settlers as they moved westward. In some areas, the cleared land was cultivated, abused, exhausted, and eroded away. In others, vast forests have been replanted, as in the South, the Appalachians, and parts of the Midwest.
A climatic transition zone occurs on either side of the 100 degrees west longitude line. The eastern woodlands gradually give way to tall grass prairies, which in turn give way to steppes, where short grasses flourish. Few natural tall grass prairies exist today on the Plains. Over the past few centuries, farmers cultivated and planted most of the region with corn or wheat.
In the arid West, precipitation diminishes from east to west and eventually reaches the point where it becomes impossible to raise crops without irrigation. Some desert areas of Arizona, Nevada, and southern California receive less than 125 mm (5 in) of precipitation annually. The grazing of livestock is an important agricultural activity in these areas of mesquite bushes and cacti.
Not all of the West is dry. In fact, one of the wettest areas of the United States is located in the Pacific Northwest. On the west-facing slopes of the Cascades and the Coast Ranges, moisture-laden winds blow from the Pacific Ocean and drop their rain on the mountain slopes. This type of mountain-induced rainfall is known as orographic precipitation. It occurs when wet air rises along the slope of a mountain. As the air moves upward into cooler temperature zones, it expands and cools, releasing the moisture as precipitation. Because of this effect, the climate of the Northwest is cool and moist, and the land is covered with vast, coniferous forests.
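The cooling of rising air can be roughed out with the standard dry adiabatic lapse rate of about 9.8 degrees Celsius per kilometre. The sketch below is illustrative only; the surface temperature and ridge height are assumptions, not values from this article.

```python
# Why rising air cools: a parcel forced up a mountain slope cools at
# roughly the dry adiabatic lapse rate until it saturates and rain forms.
DRY_LAPSE_RATE_C_PER_KM = 9.8   # standard dry adiabatic lapse rate

def parcel_temperature(surface_temp_c: float, altitude_km: float) -> float:
    """Temperature of a dry air parcel lifted from sea level."""
    return surface_temp_c - DRY_LAPSE_RATE_C_PER_KM * altitude_km

# Illustrative numbers only: a 15 C parcel lifted up a 2 km coastal ridge.
for km in (0.0, 1.0, 2.0):
    print(f"{km:.0f} km: {parcel_temperature(15.0, km):.1f} C")
# Cooler air holds less water vapor, so the excess moisture falls as
# orographic precipitation on the windward slopes.
```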
These are the main teaching and learning points in this topic. KS2 National Curriculum links: Science, Design & Technology, History.
Raw Materials
The raw materials from which we make things come from plants, animals or the Earth. The Earth gives us rocks, which are made of different elements. The Earth also gives us coal and oil, which formed from plants and other living things that died millions of years ago.
Coal and minerals have to be mined.
Properties of Materials
Materials have different properties which make them useful for different jobs. Products may need materials that are, for example, hard, strong, flexible, ductile, conductive, absorbent, transparent, or magnetic. Metals are often mixed with other elements in alloys to improve their properties. Steel is iron with a small amount of carbon added – usually well under 2% – which makes it much stronger than pure iron.
Different material properties are needed for different products.
Metals can be extracted from rocks. For example, iron ores are made of iron oxide minerals such as ferric oxide, and many copper ores are copper sulphides – chemical compounds containing copper, sulphur and other elements. To get the metal out of the ore, the ore has to be heated to a high temperature with coal or charcoal.
A piece of iron ore.
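For teachers who want the chemistry behind that heating step, the standard blast-furnace reaction (beyond the KS2 text itself) shows carbon monoxide from the burning coal pulling the oxygen away from the iron oxide:

```latex
\mathrm{Fe_2O_3 + 3\,CO \longrightarrow 2\,Fe + 3\,CO_2}
```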
Plant and Animal Materials
The Victorians used many naturally occurring materials in their factories. Leather was used for flexible drive belts. The friction of cork made it useful for pulleys. Early machines were often made of wood until cast iron became easily available. Iron is harder than wood and does not bend as much.
Machines driven by leather belts.
Natural raw materials can be changed into man-made substances with new properties. For example, we make paper goods from wood, and plastics are made from oil. Tyres are made from rubber combined with sulphur, a process called vulcanisation. Glass is made from sand, and pottery is made from clay.
Pottery made from clay mined in Calderdale.
States of Matter
Some materials can be changed into a different state of matter by heating, and back again by cooling. The three states of matter are solid, liquid and gas. Pure iron melts at 1538 degrees Celsius; cast iron, with its added carbon, melts at a lower temperature of around 1150 to 1200 degrees Celsius. Cast iron is a good metal for heating into liquid form and pouring into moulds, to cast it into new shapes. Metals like this, and some plastics, can be fully recycled and reused indefinitely! Other materials are permanently changed when heated.
Steel waiting to be recycled.
A photoconductive cell is based on the principle that the resistance of certain semiconductor materials decreases when they are exposed to radiation. In other words, such materials have a high dark resistance and a low irradiated resistance. When radiation of sufficient energy falls on these photosensitive materials, it causes electrons to break away from their covalent bonds, generating electron-hole pairs. These charge carriers are created within the material and reduce its resistance. The four materials normally used in photoconductive cells are cadmium sulphide (CdS), thallium sulphide (TlS), cadmium selenide (CdSe) and lead sulphide (PbS).
A simplified two-dimensional model of the CdS cell is commonly used; it is shown in Figure 1, and the circuit symbol is shown in Figure 2.
The two electrodes are extended in an interdigital pattern in order to increase the contact area with the sensitive material. In this way it is possible to obtain a large ratio of dark resistance to light resistance. An external power supply is necessary to drive a current through the cell and provide a path for it to flow. The applied voltage varies from a few volts to several hundred volts, depending on the photocell application.
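A common way to read such a cell is to place it in series with a fixed resistor as a voltage divider. The sketch below is a minimal illustration of that idea; the supply voltage, fixed resistor value, and dark/light resistances are assumed order-of-magnitude values, not figures from the text.

```python
# Reading a photoconductive cell with a voltage divider:
# Vout = Vcc * R_fixed / (R_cell + R_fixed)
# As light increases, the cell's resistance drops and Vout rises.
VCC = 5.0           # supply voltage in volts (illustrative)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (illustrative)

def divider_out(r_cell_ohms: float) -> float:
    """Output voltage across the fixed resistor for a given cell resistance."""
    return VCC * R_FIXED / (r_cell_ohms + R_FIXED)

# Typical-order-of-magnitude CdS values: high resistance in the dark,
# low resistance when illuminated.
print(f"dark  (1 Mohm): {divider_out(1_000_000):.2f} V")   # ~0.05 V
print(f"light (1 kohm): {divider_out(1_000):.2f} V")       # ~4.55 V
```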
The simplest and most abundant element in the universe, hydrogen can be produced from fossil fuels and biomass, and even by electrolyzing water. Producing hydrogen with renewable energy and using it in fuel cell vehicles holds the promise of virtually pollution-free transportation and independence from imported petroleum.
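The two halves of that cycle can be written as a pair of textbook reactions (standard chemistry, not drawn from this article): electrolysis splits water into hydrogen and oxygen, and the fuel cell recombines them, giving back water and electricity.

```latex
\begin{align*}
\text{Electrolysis:} &\quad 2\,\mathrm{H_2O} \;\xrightarrow{\text{electricity}}\; 2\,\mathrm{H_2} + \mathrm{O_2} \\
\text{Fuel cell:}    &\quad 2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O} + \text{electricity}
\end{align*}
```

Water is the only by-product at the vehicle, which is why the transportation is described as virtually pollution-free.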
Hydrogen is used in fuel cell electric vehicles (FCEVs), where hydrogen passes through a fuel cell to create electricity, and that electricity powers an electric motor. An FCEV shares many benefits with a plug-in electric vehicle (PEV), but FCEVs have a longer range and can refuel much faster than a PEV can recharge.
The interest in hydrogen as an alternative transportation fuel stems from its clean-burning qualities, its potential for domestic production, and the fuel cell vehicle’s potential for high efficiency (two to three times more efficient than gasoline vehicles). Hydrogen is considered an alternative fuel under the Energy Policy Act of 1992.
We worked with the Colorado Energy Office to develop a great website, Refuel Colorado, that covers the details of each of these alternative fuels.
Food loss is becoming an increasingly important issue as the world’s population grows and as pressures on agricultural land and other resources increase. ERS calculates that at the retail and consumer levels, an estimated 133 billion pounds, or 31 percent of the 430 billion pounds of food available for human consumption in the United States in 2010, was not eaten due to cooking and moisture shrinkage; loss from mold, pests, or inadequate climate control; plate waste; and other causes. On a weight basis, the estimated losses were split fairly evenly across six of the nine major food groups, with the added fats and oils group and the eggs, tree nuts, and peanuts group having smaller loss shares. The size of the losses reflects both the relative amounts of food sold and prepared, as well as loss-related characteristics, such as shrinkage during cooking, perishability, consumers’ tastes and preferences, and misjudgments about the amount of food to buy or prepare. This chart appears in "ERS's Food Loss Data Help Inform the Food Waste Discussion" in the June 2013 Amber Waves.
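As a quick sanity check of the 31-percent figure, using only the numbers quoted above:

```python
# ERS figures quoted above: 133 billion of 430 billion pounds of the
# 2010 U.S. food supply went uneaten at the retail and consumer levels.
available_blb = 430.0   # billion pounds available for human consumption
lost_blb = 133.0        # billion pounds not eaten

loss_share = lost_blb / available_blb
print(f"Loss share: {loss_share:.1%}")   # ~30.9%, i.e. the cited 31 percent
```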