In Measurement of the Circle, the great Archimedes (c. 287-212 BC) found an approximation for the circumference of a circle of a given radius. Since we know that the circumference C and diameter d of any circle are related by the formula C = πd, this means that if we start with a circle of diameter 1, then Archimedes' approximation for the circumference actually provides an approximation for π. Archimedes' idea was to approximate the circle using both inscribed and circumscribed (regular) polygons. Below are pictured inscribed and circumscribed octagons. More generally, we would consider inscribed and circumscribed n-gons. The inscribed n-gon has n sides, each of the same length b_n, and the circumscribed n-gon has n sides, each of the same length a_n. (In truth, we should consider 6·2^M-gons, where M is a positive integer. But, for simplicity, we forego this extra generality.) The perimeter of the inscribed n-gon, which we denote by p_n = n·b_n, and the perimeter of the circumscribed n-gon, which we denote by P_n = n·a_n, are approximations for the circumference and so, in this case, are also approximations for π: p_n < π < P_n. By means of geometric (and what we would now call trigonometric) arguments, Archimedes was able to derive iterative formulas for p_n and P_n, which are reminiscent of the Babylonian algorithm for computing square roots. Neal Carothers - [email protected]
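The iterative formulas themselves do not survive in this excerpt. The standard modern reconstruction of Archimedes' side-doubling step is P_2n = 2·p_n·P_n/(p_n + P_n) followed by p_2n = sqrt(p_n·P_2n): a harmonic mean and then a geometric mean, much as the Babylonian square-root algorithm alternates means and quotients. A minimal sketch, starting from hexagons on a circle of diameter 1:

```python
import math

def archimedes_pi(doublings):
    """Bound pi between the perimeters of inscribed (p) and circumscribed (P)
    regular polygons on a circle of diameter 1, doubling the side count
    each iteration (6-gon, 12-gon, 24-gon, ...)."""
    p = 3.0                 # inscribed hexagon: 6 sides of length 1/2
    P = 2.0 * math.sqrt(3)  # circumscribed hexagon: 6 sides of length 1/sqrt(3)
    for _ in range(doublings):
        P = 2.0 * p * P / (p + P)  # harmonic mean  -> circumscribed 2n-gon
        p = math.sqrt(p * P)       # geometric mean -> inscribed 2n-gon
    return p, P

low, high = archimedes_pi(4)  # 4 doublings reach the 96-gons Archimedes used
print(low, high)              # about 3.14103 < pi < 3.14271
```

Four doublings reach the 96-gons Archimedes actually used, giving bounds consistent with his famous estimate 3 10/71 < π < 3 1/7; further doublings converge rapidly to π from both sides.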
About The Buzz: Poor Nutrition in Infancy Leads to Poor Nutrition Later in Life?

WHAT THEY'RE SAYING

A recent study set out to gain a better understanding of early childhood eating patterns, in addition to raising awareness of the increasing need for more robust efforts to support parents in helping their babies "build an authentic, joyful relationship with real food." Findings from the study reveal that babies are consuming too much sugar and sodium and not enough fruits, vegetables and whole grains as they transition from infancy into toddlerhood.

WHAT THIS MEANS

It's critical that infants receive the nutrients they need to grow into strong toddlers, children, adolescents and, ultimately, adults. The dietary habits established in infancy pave the way for similar dietary preferences and tendencies in childhood and adulthood. Many American infants are not adequately nourished, according to this recent study conducted by one of the country's leading baby food makers.1 Data for the study were derived from the National Health and Nutrition Examination Survey (NHANES), conducted by the Centers for Disease Control and Prevention (CDC). NHANES studies are designed to assess the health and nutritional status of children and adults in the United States.2 NHANES survey data from 2001 through 2012 were analyzed for food and beverage consumption in babies 0-24 months of age. The study showed that from 0-6 months, infants were not introduced to any solid foods and consumed formula or breast milk. During the 6-8 month timeframe, infants' intake of fruits, vegetables and whole grains increased as they were gradually introduced to baby foods. It was during this 6-8 month window that infants had the highest consumption of fruits, vegetables and whole grains. This is not surprising, since baby foods largely consist of pureed fruits and vegetables. The study showed that infants transition from baby food to whole food (or from pureed foods to solid food) around nine months.
It is in this transition period that an infant's dietary quality decreases. As infants were introduced to whole, solid foods, there was a significant increase in the amount and frequency of sweets, salty snacks and sugary beverages that they consumed. The main findings from the study are below:
- More than 60% of babies are getting fruit; half comes from 100% juice, followed by bananas and apples.
- Less than 30% of babies are getting vegetables, and the primary source is potatoes (whole/mashed); by 23 months, the primary source is potatoes in the form of french fries and potato chips (by comparison, leafy greens make up 1% of consumption).
- Close to 30% of babies are drinking sugar-sweetened beverages (fruit drinks and soft drinks); by 23 months that increases to almost 45%.
- Almost 40% of babies are eating brownies and cookies.
- Nearly 40% of babies are eating crackers and salty snacks.

WHY THIS MATTERS

Dietary patterns established in infancy influence the likelihood that an infant will become overweight later in life. While this is not destiny, humans are creatures of habit, and change comes only with great determination and diligence. This rings especially true for eating habits, as eating is so integral to social gatherings, holidays and other celebrations. One study showed that children who became obese as early as age two were more likely to be obese as adults.3 Children who eat large quantities of sweets and processed foods will carry those habits into adulthood. This study, in line with the growing body of research on this topic, adds to the evidence that long-term good health starts in childhood. Parents of infants and young children should strive to expose their children to as many fruits and vegetables as possible. It is through repeated exposure to a food that a child begins to develop a palate to enjoy the food, especially if the food is initially disliked. For more information, see the recommendations and resources below.
Help establish a love for fruits and veggies in children. Here's how…
- Kids in the Kitchen. Making meals as a team will help your child feel involved and invested during cooking and mealtime. Cooking together also gives parents the opportunity to model healthy eating behaviors by preparing nutritious meals at home and teaching children to cook. Cooking with Your Kids
- Let Kids Help. Allow your child to be your helper if they're not old enough to participate in the cooking process. Young children can wash vegetables or gather cooking supplies (spoons, bowls, etc.) until they're old enough to peel produce, stir ingredients or mash potatoes. Top 10 Ways Kids Can Help in the Kitchen
- Gardening. Planting a garden with your child will allow them to witness firsthand the amazing transformation of a tiny seed into a plant that produces edible fruits or veggies for them to enjoy! Start the process together by choosing seeds based on recipes your child enjoys. Also, check out these 5 DIY garden projects for children.
- How Much? While there are no formal dietary guidelines for children under 2 years, do your best to incorporate fruits and vegetables into every meal and snack your baby eats. Skip foods like crackers and cookies and feed your baby whole grains, fruits and veggies instead. The guideline for every other age group is to fill half of your plate with fruits and veggies at every meal and snack, and the same approach works for your infant.
- Variety. Go for new colors, consistencies and textures! Your child might initially be disinterested and unwilling to try new fruits and veggies, but repeated exposure increases the likelihood that they will come to enjoy the food with time.
- Make it Fun! Check out these kid-friendly recipes your child will love.
OHCs are cylindrical sensorimotor cells located in the organ of Corti, the auditory organ inside the mammalian inner ear. The name "hair cells" derives from their characteristic apical bundle of stereocilia, a critical element for detection and transduction of sound energy 1. OHCs are able to change shape (elongate, shorten and bend) in response to electrical, mechanical and chemical stimulation, a motor response considered crucial for cochlear amplification of acoustic signals 2. OHC stimulation induces two different motile responses: i) electromotility, a.k.a. fast motility, changes in length in the microsecond range derived from electrically driven conformational changes in motor proteins densely packed in the OHC plasma membrane, and ii) slow motility, shape changes in the millisecond-to-seconds range involving cytoskeletal reorganization 2, 3. OHC bending is associated with electromotility, and results either from an asymmetric distribution of motor proteins in the lateral plasma membrane, or from asymmetric electrical stimulation of those motor proteins (e.g., with an electrical field perpendicular to the long axis of the cells) 4. Mechanical and chemical stimuli induce essentially slow motile responses, even though changes in the ionic conditions of the cells and/or their environment can also stimulate the plasma membrane-embedded motor proteins 5, 6. Since OHC motile responses are an essential component of the cochlear amplifier, the qualitative and quantitative analysis of these motile responses at acoustic frequencies (roughly 20 Hz to 20 kHz in humans) is a very important matter in the field of hearing research 7. The development of new imaging technology combining high-speed video cameras, LED-based illumination systems, and sophisticated image analysis software now provides the ability to perform reliable qualitative and quantitative studies of the motile response of isolated OHCs to an external alternating electrical field (EAEF) 8.
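The image-analysis side of such measurements can be illustrated in miniature. The snippet below is not the software from the cited work; it is a generic sketch of how a sub-pixel displacement can be recovered from successive intensity line scans (e.g., taken across the edge of an OHC in consecutive high-speed video frames) by cross-correlation with parabolic peak interpolation. The Gaussian test profile and every numerical value are invented for illustration.

```python
import numpy as np

def subpixel_shift(ref, cur):
    """Estimate how far 1-D profile `cur` is shifted relative to `ref`
    by locating the cross-correlation peak, then refining it with a
    parabolic fit through the peak and its two neighbors."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    corr = np.correlate(cur, ref, mode="full")
    k = int(np.argmax(corr))
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    else:
        delta = 0.0
    return (k - (len(ref) - 1)) + delta  # shift in pixels

# synthetic example: a Gaussian edge profile shifted by 0.3 pixels
x = np.arange(200, dtype=float)
ref = np.exp(-((x - 100.0) ** 2) / 50.0)
cur = np.exp(-((x - 100.3) ** 2) / 50.0)
print(subpixel_shift(ref, cur))  # close to 0.3
```

With real frames one would average over many stimulus cycles, and the pixel-to-nanometer calibration of the optics converts the recovered shift into a length change.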
This is a simple and non-invasive technique that circumvents most of the limitations of previous approaches 9-11. Moreover, the LED-based illumination system provides extreme brightness with insignificant thermal effects on the samples and, because of the use of video microscopy, optical resolution is at least 10-fold higher than with conventional light microscopy techniques 12. For instance, with the experimental setup described here, changes in cell length of about 20 nm can be routinely and reliably detected at frequencies of 10 kHz, and this resolution can be further improved at lower frequencies. We are confident that this experimental approach will help to extend our understanding of the cellular and molecular mechanisms underlying OHC motility. Analysis of Pulmonary Dendritic Cell Maturation and Migration during Allergic Airway Inflammation Institutions: McMaster University, Hamilton, University of Toronto. Dendritic cells (DCs) are the key players involved in initiation of the adaptive immune response by activating antigen-specific T cells. DCs are present in peripheral tissues in the steady state; however, in response to antigen stimulation, DCs take up the antigen and rapidly migrate to the draining lymph nodes, where they initiate a T cell response against the antigen1,2. Additionally, DCs play a key role in initiating autoimmune as well as allergic immune responses3. DCs play an essential role in both initiation of the immune response and induction of tolerance in the lung environment4. The lung environment is largely tolerogenic, owing to the exposure to a vast array of environmental antigens5. However, in some individuals there is a break in tolerance, which leads to induction of allergy and asthma. In this study, we describe a strategy which can be used to monitor airway DC maturation and migration in response to the antigen used for sensitization.
The measurement of airway DC maturation and migration allows for assessment of the kinetics of the immune response during airway allergic inflammation and also assists in understanding the magnitude of the subsequent immune response along with the underlying mechanisms. Our strategy is based on the use of ovalbumin as a sensitizing agent. Ovalbumin-induced allergic asthma is a widely used model to reproduce the airway eosinophilia, pulmonary inflammation and elevated IgE levels found during asthma6,7. After sensitization, mice are challenged by intranasal delivery of FITC-labeled ovalbumin, which allows for specific labeling of the airway DCs that take up ovalbumin. Next, using several DC-specific markers, we can assess the maturation of these DCs and can also assess their migration to the draining lymph nodes by employing flow cytometry. Immunology, Issue 65, Medicine, Physiology, Dendritic Cells, allergic airway inflammation, ovalbumin, lymph nodes, lungs, dendritic cell maturation, dendritic cell migration, mediastinal lymph nodes Optimized Protocol for Efficient Transfection of Dendritic Cells without Cell Maturation Institutions: Mount Sinai School of Medicine. Dendritic cells (DCs) can be considered sentinels of the immune system, which play a critical role in its initiation and response to infection1. Detection of pathogenic antigen by naïve DCs occurs through pattern recognition receptors (PRRs), which are able to recognize specific conserved structures referred to as pathogen-associated molecular patterns (PAMPs). Detection of PAMPs by DCs triggers an intracellular signaling cascade resulting in their activation and transformation into mature DCs. This process is typically characterized by production of type 1 interferon along with other proinflammatory cytokines, upregulation of cell surface markers such as MHCII and CD86, and migration of the mature DC to draining lymph nodes, where interaction with T cells initiates the adaptive immune response2,3.
Thus, DCs link the innate and adaptive immune systems. The ability to dissect the molecular networks underlying the DC response to various pathogens is crucial to a better understanding of the regulation of these signaling pathways and their induced genes. It should also help facilitate the development of DC-based vaccines against infectious diseases and tumors. However, this line of research has been severely impeded by the difficulty of transfecting primary DCs4. Virus transduction methods, such as the lentiviral system, are typically used, but carry many limitations such as complexity and biohazardous risk (with the associated costs)5,6,7,8. Additionally, the delivery of viral gene products increases the immunogenicity of those transduced DCs9,10,11,12. Electroporation has been used with mixed results13,14,15, but we are the first to report the use of a high-throughput transfection protocol and conclusively demonstrate its utility. In this report we summarize an optimized commercial protocol for high-throughput transfection of human primary DCs, with limited cell toxicity and an absence of DC maturation16. Transfection efficiency (of a GFP plasmid) and cell viability were more than 50% and 70%, respectively. FACS analysis established the absence of any increase in expression of the maturation markers CD86 and MHCII in transfected cells, while qRT-PCR demonstrated no upregulation of IFNβ. Using this electroporation protocol, we provide evidence for successful transfection of DCs with siRNA and effective knockdown of the targeted gene RIG-I, a key viral recognition receptor16,17, at both the mRNA and protein levels. Immunology, Issue 53, Dendritic cells, nucleofection, high-throughput, siRNA, interferon signaling Generation and Labeling of Murine Bone Marrow-derived Dendritic Cells with Qdot Nanocrystals for Tracking Studies Institutions: Ohio University, College of Osteopathic Medicine, Ohio University, Russ College of Engineering and Technology, Ohio University.
Dendritic cells (DCs) are professional antigen-presenting cells (APCs) found in peripheral tissues and in immunological organs such as the thymus, bone marrow, spleen, lymph nodes and Peyer's patches 1-3. DCs present in peripheral tissues sample the organism for the presence of antigens, which they take up, process and present on their surface in the context of major histocompatibility complex (MHC) molecules. Then, antigen-loaded DCs migrate to immunological organs where they present the processed antigen to T lymphocytes, triggering specific immune responses. One way to evaluate the migratory capabilities of DCs is to label them with fluorescent dyes 4. Herewith we demonstrate the use of Qdot fluorescent nanocrystals to label murine bone marrow-derived DCs. The advantage of this labeling is that Qdot nanocrystals possess stable and long-lasting fluorescence, which makes them ideal for detecting labeled cells in recovered tissues. To accomplish this, cells are first recovered from murine bone marrow and cultured for 8 days in the presence of granulocyte macrophage-colony stimulating factor in order to induce DC differentiation. These cells are then labeled with fluorescent Qdots by a short in vitro incubation. Stained cells can be visualized by fluorescence microscopy. Cells can be injected into experimental animals at this point or can be matured by in vitro incubation with inflammatory stimuli. In our hands, DC maturation did not cause loss of fluorescent signal, nor did Qdot staining affect the biological properties of DCs. Upon injection, these cells can be identified in immune organs by fluorescence microscopy following typical dissection and fixation procedures. Immunology, Issue 52, Dendritic cells, Qdot nanocrystals, labeling, cell tracking, mouse Generation of a Novel Dendritic-cell Vaccine Using Melanoma and Squamous Cancer Stem Cells Institutions: University of Michigan, University of Michigan, University of Michigan.
We identified cancer stem cell (CSC)-enriched populations from the murine melanoma D5, syngeneic to C57BL/6 mice, and the squamous cancer SCC7, syngeneic to C3H mice, using ALDEFLUOR/ALDH as a marker, and tested their immunogenicity using the cell lysate as a source of antigens to pulse dendritic cells (DCs). DCs pulsed with ALDHhigh CSC lysates induced significantly higher protective antitumor immunity than DCs pulsed with unsorted whole tumor cell lysates in both models, in a lung metastasis setting and a s.c. tumor growth setting, respectively. This phenomenon was due to CSC vaccine-induced humoral as well as cellular anti-CSC responses. In particular, splenocytes isolated from hosts subjected to the CSC-DC vaccine produced significantly higher amounts of IFNγ and GM-CSF than splenocytes isolated from hosts subjected to the unsorted tumor cell lysate-pulsed DC vaccine. These results support the efforts to develop an autologous CSC-based therapeutic vaccine for clinical use in an adjuvant setting. Cancer Biology, Issue 83, Cancer stem cell (CSC), Dendritic cells (DC), Vaccine, Cancer immunotherapy, antitumor immunity, aldehyde dehydrogenase Generation of Multivirus-specific T Cells to Prevent/treat Viral Infections after Allogeneic Hematopoietic Stem Cell Transplant Institutions: Baylor College of Medicine. Viral infections cause morbidity and mortality in allogeneic hematopoietic stem cell transplant (HSCT) recipients. We and others have successfully generated and infused T cells specific for Epstein-Barr virus (EBV), cytomegalovirus (CMV) and adenovirus (Adv) using monocytes and EBV-transformed lymphoblastoid cells (EBV-LCLs) gene-modified with an adenovirus vector as antigen-presenting cells (APCs). As few as 2 x 10^5/kg trivirus-specific cytotoxic T lymphocytes (CTLs) proliferated by several logs after infusion and appeared to prevent and treat even severe viral disease resistant to other available therapies.
The broader implementation of this encouraging approach is limited by high production costs, complexity of manufacture and the prolonged time for preparation (4-6 weeks for EBV-LCL generation and 4-8 weeks for CTL manufacture; 10-14 weeks in total). To overcome these limitations we have developed a new, GMP-compliant CTL production protocol. First, in place of adenovectors to stimulate T cells, we use dendritic cells (DCs) nucleofected with DNA plasmids encoding LMP2, EBNA1 and BZLF1 (EBV), Hexon and Penton (Adv), and pp65 and IE1 (CMV) as antigen-presenting cells. These APCs reactivate T cells specific for all the stimulating antigens. Second, culture of activated T cells in the presence of IL-4 (1,000 U/ml) and IL-7 (10 ng/ml) increases and sustains the repertoire and frequency of specific T cells in our lines. Third, we have used a new, gas-permeable culture device (G-Rex) that promotes the expansion and survival of large cell numbers after a single stimulation, thus removing the requirement for EBV-LCLs and reducing technician intervention. By implementing these changes we can now produce multispecific CTLs targeting EBV, CMV, and Adv at a cost per 10^6 cells that is reduced by >90%, and in just 10 days rather than 10 weeks, using an approach that may be extended to additional protective viral antigens. Our FDA-approved approach should be of value for prophylactic and treatment applications for high-risk allogeneic HSCT recipients. Immunology, Issue 51, T cells, immunotherapy, viral infections, nucleofection, plasmids, G-Rex culture device Vibratome Sectioning for Enhanced Preservation of the Cytoarchitecture of the Mammalian Organ of Corti Institutions: Medical College of Wisconsin. The mammalian organ of Corti is a highly ordered cellular mosaic of mechanosensory hair and nonsensory supporting cells (reviewed in 1,2). Visualization of this cellular mosaic often requires that the organ of Corti be cross-sectioned.
In particular, the nonsensory pillar and Deiters' cells, whose nuclei are located basally with respect to the hair cells, cannot be visualized without cross-sectioning the organ of Corti. However, the delicate cytoarchitecture of the mammalian organ of Corti, including the fine cytoplasmic processes of the pillar and Deiters' cells, is difficult to preserve by routine histological procedures such as paraffin and cryo-sectioning, which are compatible with standard immunohistochemical staining techniques. Here I describe a simple and robust procedure consisting of vibratome sectioning of the cochlea and immunohistochemical staining of these vibratome sections in whole mount, followed by confocal microscopy. This procedure has been used widely for immunohistochemical analysis of multiple organs, including the mouse limb bud and the zebrafish gut, liver, pancreas, and heart (see 3-6 for selected examples). In addition, this procedure was successful for both imaging and quantification of pillar cell number in mutant and control organs of Corti in both embryos and adult mice 7. This method, however, is currently not widely used to examine the mammalian organ of Corti. The potential for this procedure to both provide enhanced preservation of the fine cytoarchitecture of the adult organ of Corti and allow for quantification of various cell types is described. Neuroscience, Issue 52, vibratome, confocal microscopy, immunofluorescence, organ of Corti, pillar cells Dissection of Adult Mouse Utricle and Adenovirus-mediated Supporting-cell Infection Institutions: Medical University of South Carolina, Medical University of South Carolina, National Institutes of Health. Hearing loss and balance disturbances are often caused by death of mechanosensory hair cells, which are the receptor cells of the inner ear. Since there is no cell line that satisfactorily represents mammalian hair cells, research on hair cells relies on primary organ cultures.
The best-characterized in vitro model system of mature mammalian hair cells utilizes organ cultures of utricles from adult mice (Figure 1). The utricle is a vestibular organ, and the hair cells of the utricle are similar in both structure and function to the hair cells in the auditory organ, the organ of Corti. The adult mouse utricle preparation represents a mature sensory epithelium for studies of the molecular signals that regulate the survival, homeostasis, and death of these cells. Mammalian cochlear hair cells are terminally differentiated and are not regenerated when they are lost. In non-mammalian vertebrates, auditory or vestibular hair cell death is followed by robust regeneration, which restores hearing and balance functions 7, 8. Hair cell regeneration is mediated by glia-like supporting cells, which contact the basolateral surfaces of hair cells in the sensory epithelium 9, 10. Supporting cells are also important mediators of hair cell survival and death 11. We have recently developed a technique for infection of supporting cells in cultured utricles using adenovirus. Using adenovirus type 5 (dE1/E3) to deliver a transgene containing GFP under the control of the CMV promoter, we find that adenovirus specifically and efficiently infects supporting cells. Supporting cell infection efficiency is approximately 25-50%, and hair cells are not infected (Figure 2). Importantly, we find that adenoviral infection of supporting cells does not result in toxicity to hair cells or supporting cells, as cell counts in Ad-GFP infected utricles are equivalent to those in non-infected utricles (Figure 3). Thus adenovirus-mediated gene expression in supporting cells of cultured utricles provides a powerful tool to study the roles of supporting cells as mediators of hair cell survival, death, and regeneration.
Neuroscience, Issue 61, Hair cell, ototoxicity, hearing loss, organ culture Postsynaptic Recordings at Afferent Dendrites Contacting Cochlear Inner Hair Cells: Monitoring Multivesicular Release at a Ribbon Synapse Institutions: The Johns Hopkins School of Medicine, Consejo Nacional de Investigaciones Científicas y Técnicas. The afferent synapse between the inner hair cell (IHC) and the auditory nerve fiber provides an electrophysiologically accessible site for recording the postsynaptic activity of a single ribbon synapse 1-4. Ribbon synapses of sensory cells release neurotransmitter continuously, the rate of which is modulated in response to graded changes in IHC membrane potential 5. Ribbon synapses have been shown to operate by multivesicular release, where multiple vesicles can be released simultaneously to evoke excitatory postsynaptic currents (EPSCs) of varying amplitudes 1, 4, 6-11. Neither the role of the presynaptic ribbon nor the mechanism underlying multivesicular release is currently well understood. The IHC is innervated by 10-20 auditory nerve fibers, and every fiber contacts the IHC with a single unmyelinated ending to form a single ribbon synapse. The small size of the afferent boutons contacting IHCs (approximately 1 μm in diameter) enables recordings with exceptional temporal resolution to be made. Furthermore, the technique can be adapted to record from both pre- and postsynaptic cells simultaneously, allowing the transfer function at the synapse to be studied directly 2. This method therefore provides a means by which fundamental aspects of neurotransmission can be studied, from multivesicular release to the elusive function of the ribbon in sensory cells. Neuroscience, Issue 48, electrophysiology, whole-cell recording, patch clamp, synaptic transmission, ribbon synapse, multivesicular, dendrite, auditory nerve, hearing, hair cell. Primary Culture and Plasmid Electroporation of the Murine Organ of Corti.
Institutions: Harvard Medical School, Massachusetts Eye and Ear Infirmary, Emerson College, Harvard. In all mammals, the sensory epithelium for audition is located along the spiraling organ of Corti that resides within the conch-shaped cochlea of the inner ear (Fig. 1). Hair cells in the developing cochlea, which are the mechanosensory cells of the auditory system, are aligned in one row of inner hair cells and three (in the base and mid-turns) to four (in the apical turn) rows of outer hair cells that span the length of the organ of Corti. Hair cells transduce sound-induced mechanical vibrations of the basilar membrane into neural impulses that the brain can interpret. Most cases of sensorineural hearing loss are caused by death or dysfunction of cochlear hair cells. An increasingly essential tool in auditory research is the isolation and in vitro culture of the organ explant 1,2,9. Once isolated, the explants may be utilized in several ways to provide information regarding normative, anomalous, or therapeutic physiology. Gene expression, stereocilia motility, and cell and molecular biology, as well as biological approaches for hair cell regeneration, are examples of experimental applications of organ of Corti explants. This protocol describes a method for the isolation and culture of the organ of Corti from neonatal mice. The accompanying video includes stepwise directions for the isolation of the temporal bone from mouse pups, and subsequent isolation of the cochlea, spiral ligament, and organ of Corti. Once isolated, the sensory epithelium can be plated and cultured in vitro in its entirety, or as a further dissected micro-isolate that lacks the spiral limbus and spiral ganglion neurons. Using this method, primary explants can be maintained for 7-10 days. As an example of the utility of this procedure, organ of Corti explants will be electroporated with an exogenous DsRed reporter gene.
This method provides an improvement over other published methods because it provides reproducible, unambiguous, and stepwise directions for the isolation, microdissection, and primary culture of the organ of Corti. Neuroscience, Issue 36, hearing, mice, cochlea, organ of Corti, organotypic, culture, hair cell, stem cell, gene expression, in vitro Long-term Time Lapse Imaging of Mouse Cochlear Explants Institutions: Sunnybrook Research Institute, University of Toronto, University of Toronto. Here we present a method for long-term time-lapse imaging of live embryonic mouse cochlear explants. The developmental program responsible for building the highly ordered, complex structure of the mammalian cochlea proceeds for around ten days. In order to study changes in gene expression over this period, and their response to pharmaceutical or genetic manipulation, long-term imaging is necessary. Previously, live imaging has typically been limited by the viability of explanted tissue in a humidified chamber atop a standard microscope. Difficulty in maintaining optimal conditions for culture growth with regard to humidity and temperature has placed limits on the length of imaging experiments. A microscope integrated into a modified tissue culture incubator provides an excellent environment for long-term live imaging. In this method we demonstrate how to establish embryonic mouse cochlear explants and how to use an incubator microscope to conduct time-lapse imaging, using both bright field and fluorescence microscopy, to examine the behavior of a typical embryonic day (E) 13 cochlear explant and Sox2, a marker of the prosensory cells of the cochlea, over 5 days.
Bioengineering, Issue 93, Live-imaging, time lapse, cochlea, ear, reporter mouse, development, incubator microscope, Sox2 Electric Cell-substrate Impedance Sensing for the Quantification of Endothelial Proliferation, Barrier Function, and Motility Institutions: Institute for Cardiovascular Research, VU University Medical Center, Institute for Cardiovascular Research, VU University Medical Center. Electric Cell-substrate Impedance Sensing (ECIS) is an in vitro impedance measuring system used to quantify the behavior of cells within adherent cell layers. To this end, cells are grown in special culture chambers on top of opposing, circular gold electrodes. A constant small alternating current is applied between the electrodes and the potential across them is measured. The insulating properties of the cell membrane create a resistance to the electrical current flow, resulting in an increased electrical potential between the electrodes. Measuring cellular impedance in this manner allows the automated study of cell attachment, growth, morphology, function, and motility. Although the ECIS measurement itself is straightforward and easy to learn, the underlying theory is complex, and selection of the right settings and correct analysis and interpretation of the data are not self-evident. Yet, a clear protocol describing the individual steps from experimental design to preparation, realization, and analysis of the experiment is not available. In this article the basic measurement principle as well as possible applications, experimental considerations, advantages and limitations of the ECIS system are discussed. A guide is provided for the study of cell attachment, spreading and proliferation; quantification of cell behavior in a confluent layer, with regard to barrier function, cell motility, and quality of cell-cell and cell-substrate adhesions; and quantification of wound healing and cellular responses to vasoactive stimuli.
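The measurement principle lends itself to a quick numerical caricature. In the sketch below, the cell-covered electrode is modeled as the electrode/medium resistance in series with the cell layer, itself reduced to a barrier resistance in parallel with the membrane capacitance. This toy circuit and all component values are assumptions for illustration only; they are not the equivalent-circuit model used by the ECIS instrument or its analysis software.

```python
import numpy as np

def layer_impedance(freq_hz, r_electrode=2e3, r_barrier=5e3, c_membrane=1e-8):
    """Toy model: electrode/medium resistance in series with a cell layer
    modeled as barrier resistance Rb in parallel with membrane capacitance Cm.
    All component values are illustrative, not measured."""
    w = 2.0 * np.pi * freq_hz
    z_cells = 1.0 / (1.0 / r_barrier + 1j * w * c_membrane)  # Rb || Cm
    return r_electrode + z_cells

# At low frequency the membrane capacitance barely conducts, so the barrier
# resistance dominates the reading; at high frequency current couples
# capacitively through the cells and |Z| approaches the bare-electrode value.
for f in (62.5, 4000.0, 64000.0):
    print(f, abs(layer_impedance(f)))
```

In this toy model, tightening the monolayer (a larger r_barrier) raises the low-frequency impedance while leaving the high-frequency reading almost unchanged, which mirrors why barrier function is typically read out at low frequencies and attachment/spreading at high ones.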
Representative results are discussed based on human microvascular endothelial cells (MVEC) and human umbilical vein endothelial cells (HUVEC), but are applicable to all adherently growing cells. Bioengineering, Issue 85, ECIS, Impedance Spectroscopy, Resistance, TEER, Endothelial Barrier, Cell Adhesions, Focal Adhesions, Proliferation, Migration, Motility, Wound Healing Preparation of Tumor Antigen-loaded Mature Dendritic Cells for Immunotherapy Institutions: NYU Langone Medical Center, NYU Langone Medical Center. While clinical studies have established that antigen-loaded DC vaccines are a safe and promising therapy for tumors1, their clinical efficacy remains to be established. The method described below, prepared in accordance with Good Manufacturing Process (GMP) guidelines, is an optimization of the most common ex vivo preparation method for generating large numbers of DCs for clinical studies2. Our method utilizes the synthetic TLR3 agonist Polyinosinic-Polycytidylic Acid-poly-L-lysine Carboxymethylcellulose (Poly-ICLC) to stimulate the DCs. Our previous study established that Poly-ICLC is the most potent individual maturation stimulus for human DCs, as assessed by upregulation of CD83 and CD86; induction of interleukin-12 (IL-12), tumor necrosis factor (TNF), interferon gamma-induced protein 10 (IP-10), interleukin 1 (IL-1), and type I interferons (IFN); and minimal interleukin 10 (IL-10) production. DCs are differentiated from frozen peripheral blood mononuclear cells (PBMCs) obtained by leukapheresis. PBMCs are isolated by Ficoll gradient centrifugation and frozen in aliquots. On Day 1, PBMCs are thawed and plated onto tissue culture flasks to select for monocytes, which adhere to the plastic surface after a 1-2 hr incubation at 37 °C in the tissue culture incubator.
After incubation, the lymphocytes are washed off and the adherent monocytes are cultured for 5 days in the presence of interleukin-4 (IL-4) and granulocyte macrophage-colony stimulating factor (GM-CSF) to differentiate into immature DCs. On Day 6, immature DCs are pulsed with the keyhole limpet hemocyanin (KLH) protein, which serves as a control for the quality of the vaccine and may boost its immunogenicity3. The DCs are stimulated to mature, loaded with peptide antigens, and incubated overnight. On Day 7, the cells are washed and frozen in 1 ml aliquots containing 4-20 x 10^6 cells using a controlled-rate freezer. Lot release testing for the batches of DCs is performed and must meet minimum specifications before they are injected into patients. Cancer Biology, Issue 78, Medicine, Immunology, Molecular Biology, Cellular Biology, Biomedical Engineering, Anatomy, Physiology, Dendritic Cells, Immunotherapy, dendritic cell, immunotherapy, vaccine, cell, isolation, flow cytometry, cell culture, clinical techniques Mechanical Stimulation-induced Calcium Wave Propagation in Cell Monolayers: The Example of Bovine Corneal Endothelial Cells Institutions: KU Leuven. Intercellular communication is essential for the coordination of physiological processes between cells in a variety of organs and tissues, including the brain, liver, retina, cochlea and vasculature. In experimental settings, intercellular Ca2+ waves can be elicited by applying a mechanical stimulus to a single cell. This leads to the release of the intracellular signaling molecule IP3, which initiates the propagation of the Ca2+ wave concentrically from the mechanically stimulated cell to the neighboring cells. The main molecular pathways that control intercellular Ca2+ wave propagation are provided by gap junction channels through the direct transfer of IP3 and by hemichannels through the release of ATP.
Quantification of the spread of the intercellular Ca2+ wave, combined with siRNA knockdown and the use of inhibitors of gap junction channels and hemichannels, allows the identification and characterization of the properties and regulation of different connexin and pannexin isoforms as gap junction channels and hemichannels. Here, we describe a method to measure intercellular Ca2+ waves in monolayers of primary corneal endothelial cells loaded with Fluo4-AM in response to a controlled and localized mechanical stimulus, provoked by an acute, short-lasting deformation of the cell as a result of touching the cell membrane with a micromanipulator-controlled glass micropipette with a tip diameter of less than 1 μm. We also describe the isolation of primary bovine corneal endothelial cells and their use as a model system to assess Cx43-hemichannel activity as the driving force for intercellular Ca2+ waves through the release of ATP. Finally, we discuss the use, advantages, limitations and alternatives of this method in the context of gap junction channel and hemichannel research. Cellular Biology, Issue 77, Molecular Biology, Medicine, Biomedical Engineering, Biophysics, Immunology, Ophthalmology, Gap Junctions, Connexins, Connexin 43, Calcium Signaling, Ca2+, Cell Communication, Paracrine Communication, Intercellular communication, calcium wave propagation, gap junctions, hemichannels, endothelial cells, cell signaling, cell, isolation, cell culture Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine. In cardiac myocytes, a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions.
While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regularly branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite toward a better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for the direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes, with high throughput and open-access software tools.
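The binarization step in this workflow can be illustrated with a toy example: thresholding an intensity image into a membrane mask and computing a simple density read-out. This is only a sketch with made-up pixel values; the actual protocol operates on confocal or superresolution stacks with dedicated image-processing tools.

```python
# Made-up intensity values standing in for a small patch of a
# membrane-stained myocyte image (bright pixels = TATS membrane).
img = [
    [10, 12, 90, 11, 10],
    [11, 95, 92, 94, 12],
    [10, 13, 91, 12, 11],
]
threshold = 50  # assumed global intensity cutoff for binarization

# Binarize: 1 where the stain is above threshold, 0 elsewhere.
binary = [[1 if px > threshold else 0 for px in row] for row in img]

# Simple quantitative read-outs of the binarized mask.
tubule_pixels = sum(sum(row) for row in binary)
tubule_density = tubule_pixels / (len(img) * len(img[0]))
```

Skeletonization would then thin this mask to one-pixel-wide centerlines before measuring network length and branching, which is where tools such as open-source image-analysis libraries come in.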
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions. Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase Membrane Potentials, Synaptic Responses, Neuronal Circuitry, Neuromodulation and Muscle Histology Using the Crayfish: Student Laboratory Exercises Institutions: University of Kentucky, University of Toronto. The purpose of this report is to help develop an understanding of the effects caused by ion gradients across a biological membrane. Two aspects that influence a cell's membrane potential and which we address in these experiments are: (1) Ion concentration of K+ on the outside of the membrane, and (2) the permeability of the membrane to specific ions. The crayfish abdominal extensor muscles are in groupings with some being tonic (slow) and others phasic (fast) in their biochemical and physiological phenotypes, as well as in their structure; the motor neurons that innervate these muscles are correspondingly different in functional characteristics. We use these muscles as well as the superficial, tonic abdominal flexor muscle to demonstrate properties in synaptic transmission. In addition, we introduce a sensory-CNS-motor neuron-muscle circuit to demonstrate the effect of cuticular sensory stimulation as well as the influence of neuromodulators on certain aspects of the circuit. With the techniques obtained in this exercise, one can begin to answer many questions remaining in other experimental preparations as well as in physiological applications related to medicine and health. 
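The first experimental variable in the membrane-potential exercise above, the K+ concentration outside the membrane, acts through the Nernst relation: an ion's equilibrium potential depends logarithmically on its concentration ratio across the membrane. A short sketch (the concentration values are illustrative, not the crayfish salines from the exercise):

```python
import math

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant
T = 295.0    # K, roughly room temperature

def nernst(conc_out, conc_in, z=1):
    """Equilibrium (Nernst) potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Raising external K+ shifts E_K toward zero, depolarizing the cell.
e_k_rest = nernst(5.4, 140.0)    # ~ -0.083 V
e_k_high = nernst(100.0, 140.0)  # ~ -0.009 V
```

The second variable, membrane permeability, enters once several ions are considered at the same time, as in the Goldman-Hodgkin-Katz equation, which weights each ion's contribution by its permeability.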
We have demonstrated the usefulness of model invertebrate preparations to address fundamental questions pertinent to all animals. Neuroscience, Issue 47, Invertebrate, Crayfish, neurophysiology, muscle, anatomy, electrophysiology Modeling Biological Membranes with Circuit Boards and Measuring Electrical Signals in Axons: Student Laboratory Exercises Institutions: University of Kentucky, University of Toronto. This is a demonstration of how electrical models can be used to characterize biological membranes. This exercise also introduces biophysical terminology used in electrophysiology. The same equipment is used in the membrane model as on live preparations. Some properties of an isolated nerve cord are investigated: nerve action potentials, recruitment of neurons, and responsiveness of the nerve cord to environmental factors. Basic Protocols, Issue 47, Invertebrate, Crayfish, Modeling, Student laboratory, Nerve cord Cut-loading: A Useful Tool for Examining the Extent of Gap Junction Tracer Coupling Between Retinal Neurons Institutions: Ohio State University College of Medicine, University of Texas Medical School. In addition to chemical synaptic transmission, neurons that are connected by gap junctions can also communicate rapidly via electrical synaptic transmission. Increasing evidence indicates that gap junctions not only permit electrical current flow and synchronous activity between interconnected or coupled cells, but that the strength or effectiveness of electrical communication between coupled cells can be modulated to a great extent1,2 . In addition, the large internal diameter (~1.2 nm) of many gap junction channels permits not only electric current flow, but also the diffusion of intracellular signaling molecules and small metabolites between interconnected cells, so that gap junctions may also mediate metabolic and chemical communication. 
The strength of gap junctional communication between neurons and its modulation by neurotransmitters and other factors can be studied by simultaneously electrically recording from coupled cells and by determining the extent of diffusion of tracer molecules, which are gap junction permeable, but not membrane permeable, following iontophoretic injection into single cells. However, these procedures can be extremely difficult to perform on neurons with small somata in intact neural tissue. Numerous studies on electrical synapses and the modulation of electrical communication have been conducted in the vertebrate retina, since each of the five retinal neuron types is electrically connected by gap junctions3,4 . Increasing evidence has shown that the circadian (24-hour) clock in the retina and changes in light stimulation regulate gap junction coupling3-8 . For example, recent work has demonstrated that the retinal circadian clock decreases gap junction coupling between rod and cone photoreceptor cells during the day by increasing dopamine D2 receptor activation, and dramatically increases rod-cone coupling at night by reducing D2 receptor activation7,8 . However, not only are these studies extremely difficult to perform on neurons with small somata in intact neural retinal tissue, but it can be difficult to adequately control the illumination conditions during the electrophysiological study of single retinal neurons to avoid light-induced changes in gap junction conductance. Here, we present a straightforward method of determining the extent of gap junction tracer coupling between retinal neurons under different illumination conditions and at different times of the day and night. This cut-loading technique is a modification of scrape loading9-12 , which is based on dye loading and diffusion through open gap junction channels. Scrape loading works well in cultured cells, but not in thick slices such as intact retinas. 
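Whatever the loading method, the extent of tracer coupling is commonly summarized by fitting an exponential decay of tracer intensity with distance from the loading site; a longer space constant indicates stronger coupling. A minimal sketch of that fit, with made-up intensity measurements rather than data from the cited studies:

```python
import math

# Hypothetical mean tracer intensities at increasing distance (um)
# from the cut edge, decaying roughly as exp(-x / 40).
distances = [0, 20, 40, 60, 80]
intensities = [100.0, 60.7, 36.8, 22.3, 13.5]

# Log-linear least squares: ln(I) = ln(I0) - x / lambda.
ys = [math.log(i) for i in intensities]
n = len(distances)
mx = sum(distances) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(distances, ys)) \
        / sum((x - mx) ** 2 for x in distances)
space_constant = -1.0 / slope  # um; larger = more extensive coupling
```

Comparing fitted space constants between conditions (day vs. night, light vs. dark) then gives a single number per retina for the strength of coupling.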
The cut-loading technique has been used to study photoreceptor coupling in intact fish and mammalian retinas7,8,13, and can be used to study coupling between other retinal neurons, as described here. Neuroscience, Issue 59, retina, photoreceptors, gap junctions, tracer coupling, neurobiotin, labeling Non-invasive Optical Measurement of Cerebral Metabolism and Hemodynamics in Infants Institutions: Massachusetts General Hospital, Harvard Medical School, Université de Caen Basse-Normandie, Boston Children's Hospital, Harvard Medical School, ISS, Inc. Perinatal brain injury remains a significant cause of infant mortality and morbidity, but there is not yet an effective bedside tool that can accurately screen for brain injury, monitor injury evolution, or assess response to therapy. The energy used by neurons is derived largely from tissue oxidative metabolism, and neural hyperactivity and cell death are reflected by corresponding changes in cerebral oxygen metabolism (CMRO2). Thus, measures of CMRO2 are reflective of neuronal viability and provide critical diagnostic information, making CMRO2 an ideal target for bedside measurement of brain health. Brain-imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) yield measures of cerebral glucose and oxygen metabolism, but these techniques require the administration of radionuclides, so they are used in only the most acute cases. Continuous-wave near-infrared spectroscopy (CWNIRS) provides non-invasive, non-ionizing measures of hemoglobin oxygen saturation (SO2) as a surrogate for cerebral oxygen consumption. However, SO2 is less than ideal as a surrogate for cerebral oxygen metabolism, as it is influenced by both oxygen delivery and consumption. Furthermore, measurements of SO2 are not sensitive enough to detect brain injury hours after the insult1,2, because oxygen consumption and delivery reach equilibrium after acute transients3.
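The limitation just described follows from the Fick principle: oxygen consumption scales with blood flow times the arteriovenous oxygen difference, so a saturation measurement alone cannot separate delivery from consumption. A hedged sketch of a flow-weighted metabolic index along those lines (all values hypothetical, with SO2 standing in for venous saturation, which is itself a simplification):

```python
def cmro2_index(cbf_i, sao2, so2):
    """Fick-style index: consumption ~ flow x (arterial - tissue O2
    saturation). Simplified; units are arbitrary, not absolute CMRO2."""
    return cbf_i * (sao2 - so2)

# The same tissue saturation can hide very different metabolic states
# once blood flow is taken into account.
low_flow = cmro2_index(cbf_i=1.0, sao2=0.98, so2=0.70)
high_flow = cmro2_index(cbf_i=2.0, sao2=0.98, so2=0.70)
```

Here doubling the flow index doubles the metabolic index even though SO2 is unchanged, which is why a flow measurement must be combined with the saturation measurement.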
We investigated the possibility of using more sophisticated NIRS optical methods to quantify cerebral oxygen metabolism at the bedside in healthy and brain-injured newborns. More specifically, we combined the frequency-domain NIRS (FDNIRS) measure of SO2 with the diffuse correlation spectroscopy (DCS) measure of a blood flow index (CBFi) to yield an index of CMRO2. With the combined FDNIRS/DCS system we are able to quantify cerebral metabolism and hemodynamics. This represents an improvement over CWNIRS for detecting brain health, brain development, and response to therapy in neonates. Moreover, this method adheres to all neonatal intensive care unit (NICU) policies on infection control and institutional policies on laser safety. Future work will seek to integrate the two instruments to reduce acquisition time at the bedside and to implement real-time feedback on data quality to reduce the rate of data rejection. Medicine, Issue 73, Developmental Biology, Neurobiology, Neuroscience, Biomedical Engineering, Anatomy, Physiology, Near infrared spectroscopy, diffuse correlation spectroscopy, cerebral hemodynamic, cerebral metabolism, brain injury screening, brain health, brain development, newborns, neonates, imaging, clinical techniques In Vitro Analysis of Myd88-mediated Cellular Immune Response to West Nile Virus Mutant Strain Infection Institutions: The University of Texas Medical Branch, The University of Texas Medical Branch, The University of Texas Medical Branch. An attenuated West Nile virus (WNV), a nonstructural (NS) 4B-P38G mutant, induced higher innate cytokine and T cell responses than the wild-type WNV in mice. Recently, myeloid differentiation factor 88 (MyD88) signaling was shown to be important for initial T cell priming and memory T cell development during WNV NS4B-P38G mutant infection.
In this study, two flow cytometry-based methods – an in vitro T cell priming assay and intracellular cytokine staining (ICS) – were utilized to assess dendritic cell (DC) and T cell functions. In the T cell priming assay, cell proliferation was analyzed by flow cytometry following co-culture of DCs from both groups of mice with carboxyfluorescein succinimidyl ester (CFSE)-labeled CD4+ T cells from OTII transgenic mice. This approach provided an accurate determination of the percentage of proliferating CD4+ T cells, with significantly better overall sensitivity than traditional assays using radioactive reagents. A microcentrifuge tube system was used in both the cell culture and cytokine staining procedures of the ICS protocol. Compared to the traditional tissue culture plate-based system, this modified procedure was easier to perform in biosafety level (BL) 3 facilities. Moreover, WNV-infected cells were treated with paraformaldehyde in both assays, which enabled further analysis outside BL3 facilities. Overall, these in vitro immunological assays can be used to efficiently assess cell-mediated immune responses during WNV infection. Immunology, Issue 93, West Nile Virus, Dendritic cells, T cells, cytokine, proliferation, in vitro Isolation of Murine Lymph Node Stromal Cells Institutions: University of Basel and University Hospital Basel. Secondary lymphoid organs, including lymph nodes, are composed of stromal cells that provide a structural environment for the homeostasis, activation and differentiation of lymphocytes. Various stromal cell subsets have been identified by their expression of the adhesion molecule CD31 and the glycoprotein podoplanin (gp38): T zone reticular cells (also called fibroblastic reticular cells), lymphatic endothelial cells, blood endothelial cells, and FRC-like pericytes within the double-negative cell population.
For all of these populations, different functions have been described, including the separation and lining of different compartments, attraction of and interaction with different cell types, filtration of the draining fluid, and contraction of the lymphatic vessels. In recent years, different groups have described an additional role of stromal cells in orchestrating and regulating cytotoxic T cell responses potentially dangerous for the host. Lymph nodes are complex structures with many different cell types and therefore require an appropriate procedure for isolation of the desired cell populations. Current protocols for the isolation of lymph node stromal cells rely on enzymatic digestion with varying incubation times; however, stromal cells and their surface molecules are sensitive to these enzymes, which results in loss of surface marker expression and cell death. Here, a short enzymatic digestion protocol combined with automated mechanical disruption is proposed to obtain viable single-cell suspensions of lymph node stromal cells that maintain their surface molecule expression. Immunology, Issue 90, lymph node, lymph node stromal cells, digestion, isolation, enzymes, fibroblastic reticular cell, lymphatic endothelial cell, blood endothelial cell Culture of myeloid dendritic cells from bone marrow precursors Institutions: McMaster University, McMaster University, University of Waterloo. Myeloid dendritic cells (DCs) are frequently used to study the interactions between innate and adaptive immune mechanisms and the early response to infection. Because they are the most potent antigen-presenting cells, DCs are being increasingly used as a vaccine vector to study the induction of antigen-specific immune responses. In this video, we demonstrate the procedure for harvesting tibias and femurs from a donor mouse, processing the bone marrow and differentiating DCs in vitro.
The properties of DCs change following stimulation: immature dendritic cells are potent phagocytes, whereas mature DCs are capable of antigen presentation and interaction with CD4+ and CD8+ T cells. This change in functional activity corresponds to the upregulation of cell surface markers and cytokine production. Many agents can be used to mature DCs, including cytokines and toll-like receptor ligands. In this video, we demonstrate flow cytometric comparisons of the expression of two co-stimulatory molecules, CD86 and CD40, and the cytokine IL-12, following overnight stimulation with CpG or mock treatment. After differentiation, DCs can be further manipulated for use as a vaccine vector or to generate antigen-specific immune responses by pulsing with peptides or proteins in vitro, or by transduction with recombinant viral vectors. Immunology, Issue 17, dendritic cells, GM-CSF, culture, bone marrow
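At analysis time, the CFSE-dilution read-out used in T cell priming assays such as the one described earlier reduces to gating events whose fluorescence has fallen below the bright, undivided peak, since CFSE is roughly halved at each division. A minimal sketch with made-up intensities and an assumed gate position:

```python
# Hypothetical CFSE fluorescence values (arbitrary units) for CD4+
# T cells after co-culture with DCs; CFSE dilutes ~2-fold per division.
cfse = [980, 960, 940, 470, 455, 240, 230, 120, 950, 60]
undivided_gate = 800  # assumed lower bound of the undivided peak

# Cells below the gate have divided at least once.
proliferated = [v for v in cfse if v < undivided_gate]
percent_proliferating = 100.0 * len(proliferated) / len(cfse)  # 60.0
```

In practice the gate is set from an unstimulated control sample, and flow-cytometry software performs the equivalent counting over tens of thousands of events.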
Green papaya fruit is rich in papain, an enzyme used in powdered meat tenderizers. Courtesy of Hardyplants. - Length: 30 Minutes Students learn about digestion and proteins by observing the action of meat tenderizer on luncheon meat. Student sheets are provided in English and in Spanish. This activity is from The Science of Food Teacher's Guide. Although it is most appropriate for use with students in grades 3-5, the lessons are easily adaptable for other grade levels. The guide also is available in print format. Food must be broken down, both physically and chemically, before it can be used by the cells within an organism. The process of breaking food down into usable components is known as digestion. Within the human body, digestion begins in the mouth, where pieces of food are mechanically broken, by chewing, into smaller pieces. In addition, saliva mixes with the food and begins to break it down. After food is swallowed, other components of the digestive system—stomach, small intestine, large intestine, liver and pancreas—continue the process of making food available for use by cells in the body. The stomach serves as a powerful mixing machine in which food is combined with special chemicals (enzymes) that begin to break large food molecules into smaller ones. Food usually stays in the stomach for two to three hours, after which it passes into the small intestine, where it is combined with secretions from the liver and pancreas. These very important organs produce substances (bile from the liver and pancreatic fluid from the pancreas) that help break down fats, proteins and carbohydrates into smaller molecules. The small intestine is responsible for absorbing the nutrients released during digestion. The walls of the small intestine are covered with millions of tiny, finger-like projections called villi. These structures increase the surface area of the small intestine to facilitate the absorption of nutrients into the bloodstream.
Proteins and their building blocks (amino acids) are vital to every cell in the body. Humans are not able to make all of the amino acids they need, so they must include protein (equivalent to 4 ounces of chicken white meat) in their daily diet. During digestion, proteins are broken down into the different amino acids of which they are made. Then the body builds new proteins from the amino acids. You might say that the amino acids are recycled! This activity will allow students to observe how chemicals in the body begin to break down proteins. Objectives and Standards Food must be broken down into smaller units before it can be used by the body. Digestion is the process of breaking food down. Special chemicals in the body break food molecules into smaller units. Proteins—found in all meats, dairy products and vegetables (especially peas and beans)—are important for muscles and cell growth and repair. Science, Health and Math Skills Making qualitative observations Materials and Setup Materials per Student Group 2 clear, resealable plastic bags, sandwich size 1/2 slice of turkey luncheon meat 1/2 tsp of meat tenderizer, or papaya enzymes (available at health food stores) Plastic, serrated knife Purchase meat tenderizer, located in the spice section at the grocery store, and a piece of sliced turkey luncheon meat for each group. Have students conduct this activity in groups of four. Have students wash hands before and after the activity. Clean work areas with disinfectant. Procedure and Extensions Session 1: Setting up Let Materials Managers collect 1/2 slice of turkey luncheon meat, a plastic knife and two resealable plastic bags. Have the groups label their bags “1” and “2.” Ask students, What happens to food when you eat it? Do you think that food stays the same inside your body? Discuss students’ ideas about digestion. Mention that they will be able to explore what happens to one kind of food—turkey meat (protein)—when digestion begins.
Have the students in each group cut the piece of turkey in half and place one section in the bag labeled “1.” Direct them to place the other section in bag “2” and to add 1/2 teaspoon of meat tenderizer to that bag. Have them seal the bag and shake the turkey slice within the bag so that it is well coated with the tenderizer. Have the students place the bags to one side of the classroom for about an hour. (If students will be making observations the following day, refrigerate the bags to prevent spoilage.) Have students write, in their journals or on a sheet of paper, what they predict will happen to the slices of turkey. Session 2: Making observations Have students observe the texture and color of the meat samples without removing them from the plastic bags. Ask, Is there anything different about the turkey that was combined with the meat tenderizer? What do you think happened? Ask students to think about the changes they observed in the meat with tenderizer. Mention that the substance they added was a chemical that helps soften the muscle fibers in meat by beginning to break them down into smaller pieces. Help students understand that similar substances work within their stomachs and small intestines to break down the food they eat. Have students draw or write about their observations. Mention that turkey meat is a muscle. Help students understand that protein is the building block for muscles and that it is used inside each muscle cell. Protein that we eat must be broken into smaller components before it can be used by our bodies. You may want to mention that the chemical meat tenderizer also is a kind of protein. It provides another example of the variety of roles that proteins play inside plants and animals. Students can investigate the importance of chewing by repeating the experiment using a finely chopped piece of luncheon meat and comparing the outcomes. 
Students match foods with the appropriate food groups, and they learn about food labels, plants and photosynthesis, food as fuel for the body, and more. Students investigate food sources, food webs and food chains, healthy eating and food groups, food safety and overall nutrition. (11 activities) Rosie and Riff go undercover with Mr. Slaptail to discover why spinach is disappearing from Mr. Slaptail's garden. Funded by the following grant(s) My Health My World: National Dissemination Grant Number: 5R25ES009259 The Environment as a Context for Opportunities in Schools Grant Number: 5R25ES010698, R25ES06932
Throughout the Pathways to Success research project, participants were asked for suggestions or recommendations that would lead to increased success for immigrant youth in high school. The suggestions are organized by stakeholder group: youth, parents and family, schools, school boards, the provincial educational system and the community. They are the culmination of all the information gathered during the interviews, the focus group meetings, and the community forum. What immigrant youth in high school can do to improve their own success: - Be patient, persevere and, most importantly, do not be afraid to ask for help. Although the process may take some time and courage, immigrant youth must develop goals and reach out to others for the support they need to accomplish those goals. - Make friends strategically. Immigrant youth should associate with peers who share similar goals and values. These peers serve as a support group by helping the youth maintain focus and providing motivation when faced with challenges. - Get involved. Immigrant youth should explore the extracurricular activities available in their school. This will help with meeting new people and getting accustomed to the country’s culture. - Maintain self-confidence. Immigrant youth should believe in their own skills and abilities when faced with challenges. Often overlooked, optimism is an important factor in succeeding in school. - Communicate with parents. Even though families are often stressed themselves, immigrant youth must preserve their connection with their parents. At home, parents are their support group. What native-born students can do to help immigrant youth: - Be friendly and open-minded. Students should approach newcomers, involve them as much as possible, and be empathetic about their experiences.
Parents and Family: What parents and family members can do to get more involved with helping immigrant youth succeed in school: - Get involved. Parents should try to attend parent-teacher conferences regularly, talk to guidance counselors, join the parent council, or volunteer in the school. They must also take steps to inform themselves about their child’s education. - Encourage youth in school. Parents should inquire about how their child is doing in school, take an active interest in their child’s education and pass on to them the value of an education. - Be understanding. The research participants spoke about the difficulty parents have in accepting the changes in their children as a result of living in a new country and experiencing a different culture. Parents should try to be understanding about these changes and communicate with their children throughout this process. What individual schools can do to accommodate immigrant youth: - Develop peer mentoring programs. Research participants discussed the benefits of matching immigrant youth with another student who can understand their challenges. Mentors can reduce isolation and introduce the youth to new people and activities. - Increase openness and understanding. Students, teachers and principals should take part in educational activities that can prepare them for a more diverse student population. They should be aware of their impact on immigrant youth and practice openness and understanding to make them feel comfortable. - Develop communication strategies and partnerships with parents. Parents may be excluded from their children’s high school experience when they do not know how to communicate with teachers and principals, or when they face language barriers in communicating with them. Schools need to develop strategies that will enable parents to participate more and to inform them regularly about their children’s education. - Increase social opportunities for immigrant youth.
Immigrant youth need to be aware of and have access to social opportunities with other students within the school. This will help them develop stronger social networks, understand the culture of their new country and orient them to the type of activities that are available to them. - Develop leadership opportunities. Immigrant youth must be encouraged to assume leadership roles to increase their involvement and opportunities within the school. This can also provide valuable growth and learning for both immigrant youth and native students and can increase full student representation in decision making. - Develop a welcoming, representative environment. Schools should be a place where all students feel represented and valued. Steps should be taken to incorporate these qualities into schools so that they are structurally, behaviorally and visually more welcoming. What school boards can do to make meaningful changes in the learning environment: - Hire qualified, quality teachers. Hiring practices should prioritize teachers who understand diversity issues and the various needs of their students, and teachers who are representative of the student population. Current teachers should be properly trained on diversity issues and be acknowledged and supported for their commitment to these efforts. - Increase multi-cultural training of teachers. Teachers are not always prepared for working with diverse populations, or properly educated on the issues and realities that accompany immigrant youth when they arrive in a foreign country. Mandatory training for teachers should be incorporated into all schools. - Increase subsidies and make them more available to immigrant youth. Make sure that immigrant youth are aware of subsidies for books and extra-curricular activities. - Provide orientation for the parents. An orientation will ensure that parents are well informed when their children enroll in school. 
Provincial Education System: What system-level changes should be made regarding the success of immigrant youth in high school: - Increase funding for partnerships between schools and community organizations: Partnerships between schools and community programs such as the YMCA settlement services are of great value to immigrant families. Strengthening these partnerships will benefit schools, families and communities. - Incorporate a more comprehensive ESL program: Research participants suggested changes to the current ESL curriculum, such as positioning grammar as a central component of their ESL learning priorities in order to improve their written and verbal skills. - Value quality education for all youth: Ideas about quality education should consider what each child requires in order to complete high school successfully. This should be an ongoing message within government and communities especially for youth who experience an “education gap” from going in and out of school in their home countries. Strategies should be explored so that these youth are not pushed out of the high school system before they are ready to leave. - Develop support for parents: There should be a designated contact that can ensure families get the information they need when they arrive. - Offer more support, programs, and time for immigrant families: After families arrive, they should be given more support in transitioning their children into school. What communities can do: - Be more welcoming: Communities should work to increase understanding of immigrants and to welcome them as an important part of the social, cultural and economic make-up of the community. - Acknowledge the potential and skills of immigrants: Communities should be open to the skills, abilities and credentials of immigrants and support them in finding employment. - Adapt to the changing population: Communities should be open, flexible and adaptive to the immigrant population. 
- Increase immigrant-friendly policies and representation of immigrants in the community: Immigrants should be adequately represented in decision-making roles in communities to ensure appropriate input into policies and other decisions that affect them.
- Make immigrant youth aware of positive role models: Communities should help connect immigrant youth with positive role models who can provide them with mentorship and confidence in themselves.
This research illustrates the complexity of the immigrant youth experience and how little is currently being done to accommodate their situation. Immigrant families generally have high expectations of what a foreign education can provide for them. Yet there are many challenges preventing the fulfillment of these expectations. The active participation of schools, school boards and communities can greatly reduce these challenges and consequently lower high-school drop-out rates and increase the success of high schools.
by Betty Diop (Re:LIFE Writer/Columnist) B.A. Applied Psychology
Magnetic reconnection - Yenra
Image credit: NASA Goddard/SWRC/CCMC/SWMF
The explosive realignment of magnetic fields -- known as magnetic reconnection -- is thought to be a common process at the boundaries of Earth's magnetic bubble. Magnetic reconnection can connect Earth's magnetic field to the interplanetary magnetic field carried by the solar wind or coronal mass ejections. NASA's Magnetospheric Multiscale, or MMS, mission studies magnetic reconnection by flying through the boundaries of Earth's magnetic field. In December 2015, just under four months into the science phase of the mission, MMS was delivering promising early results on magnetic reconnection -- a kind of magnetic explosion that's related to everything from the northern lights to solar flares. The unprecedented set of MMS measurements will open up our understanding of the space environment surrounding Earth, allowing us to better understand what drives magnetic reconnection events. These giant magnetic bursts can send particles hurtling at near the speed of light and create oscillations in Earth's magnetic fields, affecting technology in space and interfering with radio communications.
The Structure of the Earth
The Earth has a layered structure including the core, mantle and crust. The crust and upper mantle are cracked into large pieces called tectonic plates. These plates move slowly but can cause earthquakes and volcanoes where they meet. The Earth's crust, its atmosphere and oceans are the only sources of the resources that humans need.
The Earth is almost a sphere. These are its main layers, starting with the outermost:
- Crust (relatively thin and rocky)
- Mantle (has the properties of a solid, but can flow very slowly)
- Core (made from liquid nickel and iron)
The radius of the core is just over half the radius of the Earth. The core itself consists of a solid inner core and a liquid outer core. The Earth's atmosphere surrounds the Earth.
The Earth's crust and the upper part of the mantle are broken into large pieces called tectonic plates. These are constantly moving at a few centimetres each year. Over millions of years this movement allows whole continents to shift thousands of kilometres apart. This process is called continental drift.
- The plates move because of convection currents in the Earth's mantle.
- These are driven by the heat produced by the natural decay of radioactive elements in the Earth.
- Where tectonic plates meet, the Earth's crust becomes unstable as the plates push against each other, or ride under or over each other.
- Earthquakes and volcanic eruptions happen at the boundaries between plates, and the crust may 'crumple' to form mountain ranges.
- It is difficult to predict exactly when an earthquake might happen and how bad it will be, even in places known for having earthquakes.
- The theory of plate tectonics and continental drift was proposed at the beginning of the last century by a German scientist, Alfred Wegener. 
- Before Wegener developed his theory, it was thought that mountains formed because the Earth was cooling down, and in doing so contracted.
- This was believed to form wrinkles, or mountains, in the Earth's crust.
- If this idea were correct, mountains would be spread evenly over the Earth's surface.
- Wegener suggested that mountains were formed when the edge of a drifting continent collided with another, causing it to crumple and fold.
- For example, the Himalayas were formed when India came into contact with Asia.
- It took more than 50 years for Wegener's theory to be accepted.
- One of the reasons was that it was difficult to work out how whole continents could move: it was not until the 1960s that enough evidence was discovered to support the theory fully.
Volcanoes and Earthquakes
- There are two types of tectonic plates:
- Oceanic plates occur under the oceans.
- Continental plates form the land.
- Oceanic plates are denser than continental plates. They are pushed down underneath continental plates where they meet.
- Where tectonic plates meet, the Earth's crust becomes unstable as the plates slide past each other, push against each other, or ride under or over one another.
- Earthquakes and volcanic eruptions happen at the boundaries between plates.
- Magma (molten rock) is less dense than the crust. It can rise to the surface through weaknesses in the crust, forming a volcano.
- Geologists study volcanoes to try to predict future eruptions.
- Volcanoes can be very destructive, but some people choose to live near them because volcanic soil is very fertile.
The movement of tectonic plates can be sudden and disastrous, causing an earthquake. It is difficult to predict when and where an earthquake will happen.
Energy Transfer by Heating
- Heat can be transferred from place to place by conduction, convection and radiation.
- Dark matt surfaces are better at absorbing heat energy than light shiny surfaces. 
- Heat energy can be lost from homes in many different ways and there are ways of reducing these heat losses.
The Modern Atmosphere
The Earth's atmosphere has remained much the same for the past 200 million years.
- The two main gases are both elements and account for about 99 per cent of the gases in the atmosphere.
- About 4/5, or 80%, is nitrogen (a relatively unreactive gas).
- About 1/5, or 20%, is oxygen (the gas that allows animals and plants to respire and fuels to burn).
- The remaining gases, such as carbon dioxide, water vapour and noble gases such as argon, are found in much smaller proportions.
Oxygen in the Air
The percentage of oxygen in the air can be measured by passing a known volume of air over hot copper and measuring the decrease in volume as the oxygen reacts with it:
- copper + oxygen → copper oxide
- 2Cu + O2 → 2CuO
Gas syringes are used to measure the volume of gas in the experiment. The starting volume of air used is often 100 cm3 to make the analysis of the results easy, but it could be any convenient volume. Note that there is some air in the tube with the copper turnings. The oxygen in this air will also react with the hot copper, causing a small error in the final volume recorded. It is also important to let the apparatus cool down at the end of the experiment, otherwise the final reading will be too high.
The Early Atmosphere
Scientists believe that the Earth was formed about 4.5 billion years ago. Its early atmosphere was probably formed from the gases given out by volcanoes. It is believed that there was intense volcanic activity for the first billion years of the Earth's existence. The early atmosphere was probably mostly carbon dioxide with little or no oxygen. There were smaller proportions of water vapour, ammonia and methane. As the Earth cooled down, most of the water vapour condensed and formed the oceans. 
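The copper-tube measurement described above comes down to a simple percentage calculation: the drop in volume is the oxygen that reacted. A minimal Python sketch (the function name is illustrative, not from the original notes):

```python
def oxygen_percentage(start_volume_cm3, final_volume_cm3):
    # The decrease in volume is the oxygen that reacted with the hot copper.
    oxygen_used = start_volume_cm3 - final_volume_cm3
    # Multiply before dividing to keep the arithmetic exact for round numbers.
    return oxygen_used * 100.0 / start_volume_cm3

# Starting with 100 cm3 of air and finishing with 79 cm3 gives 21% oxygen,
# close to the accepted figure for air.
print(oxygen_percentage(100, 79))
```

Starting with 100 cm3 makes the analysis easy precisely because the volume decrease in cm3 is already the percentage.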
Mars and Venus today
It is thought that the atmospheres of Mars and Venus today, which contain mostly carbon dioxide, are similar to the early atmosphere of the Earth. Approximate compositions (percentages; the original used colour coding, green for Venus and red for Mars):
Gas | Mars | Venus
Carbon dioxide | 95.3 | 96.5
Nitrogen | 2.7 | 3.5
Argon | 1.6 | Trace
Oxygen, water vapour and other gases | Trace | Trace
Life on Earth
There is evidence that the first living things appeared on Earth billions of years ago. There are many scientific theories to explain how life began. One theory involves the interaction between hydrocarbons, ammonia and lightning.
The Miller-Urey experiment
Stanley Miller and Harold Urey carried out some experiments in 1952 and published their results in 1953. The aim was to see if substances now made by living things could be formed in the conditions thought to have existed on the early Earth.
- The two scientists sealed a mixture of water, ammonia, methane and hydrogen in a sterile flask.
- The mixture was heated to evaporate water and produce water vapour.
- Electric sparks were passed through the mixture of water vapour and gases, simulating lightning.
- After a week, the contents were analysed.
- Amino acids, the building blocks of proteins, were found.
- The Miller-Urey experiment supported the theory of a 'primordial soup': the idea that the complex chemicals needed for living things to develop could be produced naturally on the early Earth.
Oxygen and Carbon Dioxide
The Earth's early atmosphere is believed to have been mainly carbon dioxide with little or no oxygen gas. The Earth's atmosphere today contains around 21% oxygen and 0.04% carbon dioxide. So how did the proportion of carbon dioxide in the atmosphere go down, and the proportion of oxygen go up?
Plants and algae carry out photosynthesis. This process uses carbon dioxide from the atmosphere (with water and sunlight) to produce oxygen (and glucose). The appearance of plants and algae caused the production of oxygen, which is why the proportion of oxygen went up. 
Decreasing carbon dioxide
Photosynthesis by plants and algae used carbon dioxide from the atmosphere, but this is not the only reason why the proportion of carbon dioxide went down. Carbon dioxide was also removed by:
- Dissolving in the oceans
- The production of sedimentary rocks such as limestone
- The production of fossil fuels from the remains of dead plants and animals
Oxygen and Carbon Dioxide 2
Today, the burning of fossil fuels (coal and oil) is adding carbon dioxide to the atmosphere faster than it can be removed. This means that the level of carbon dioxide in the atmosphere is increasing, contributing to global warming. It also means that the oceans are becoming more acidic as they dissolve increasing amounts of carbon dioxide. This has an impact on the marine environment, for example making the shells of sea creatures thinner than normal.
Fractional distillation of liquid air
78% of the air is nitrogen and 21% is oxygen. These two gases can be separated by fractional distillation of liquid air.
Liquefying the air
Air is filtered to remove dust, and then cooled in stages until it reaches -200 degrees Celsius. At this temperature it is a liquid. The liquefied air is passed into the bottom of a fractionating column. Just as in the columns used to separate oil fractions, the column is warmer at the bottom than it is at the top.
- The liquid nitrogen boils at the bottom of the column.
- Gaseous nitrogen rises to the top, where it is piped off and stored. 
- Liquid oxygen collects at the bottom of the column.
The boiling point of argon, the noble gas that forms 0.9% of air, is close to the boiling point of oxygen, so a second fractionating column is often used to separate the argon from the oxygen.
Uses of Nitrogen and Oxygen
- Liquid nitrogen is used to freeze food.
- Food is packaged in gaseous nitrogen to increase its shelf life.
- Oil tankers are flushed with gaseous nitrogen to reduce the chance of explosion.
- Oxygen is used in the manufacture of steel and in medicine.
There have been only scattered observations of the Insular Vole since 1885, because the two islands in the Bering Sea off the coast of Alaska where it lives are rather inaccessible. The Voles live in burrows dug in moist lowland areas, at lower elevations on mountain slopes, or on beach ridges where rye grass grows. They feed during the day on plant matter. Nests within their burrow systems have been found to contain dried grasses and roots. The only other mammal that lives on the islands is the Arctic Fox, although Polar Bears visit from time to time. Arctic Foxes and several species of birds prey on the Voles. Also known as: St. Matthew Island Vole Miller, 1899, Proceedings of the Biological Society of Washington, 13:13. Mammal Species of the World (opens in a new window).
Activated in 1917, the 86th Infantry Division served in France during World War I. During World War II, the "Blackhawk" division arrived in France in March 1945. It quickly proceeded to Germany, where it took part in the fierce fighting in the Ruhr area. It then was ordered to move southward and crossed the Danube River on April 27, 1945, advancing into Austria. As the 86th advanced into the Ruhr region, the troops discovered the Attendorn civilian forced-labor camp on April 11, 1945. The camp had been established to provide labor to area factories and it housed up to 1,000 conscripted Polish, Soviet, and Czech laborers. The 86th Infantry Division was recognized as a liberating unit by the US Army's Center of Military History and the United States Holocaust Memorial Museum in 1996.
Casualty figures for the 86th Infantry Division, European theater of operations:
- Total battle casualties: 785
- Total deaths in battle: 161
The 86th Infantry Division developed the blackhawk as its insignia during World War I, to honor the Native American warrior of that name who fought the US Army in Illinois and Wisconsin during the early nineteenth century. The nickname "The Blackhawks" or "Blackhawk" division is derived from the insignia.
Python Error Handling with Files
You have already seen how to convert values to strings with str before writing them to a file:
>>> f.write(str(12.3))
>>> f.write(str([1, 2, 3]))
The problem is that when you read the value back, you get a string. Most of the time you read a whole file in its natural order, from beginning to end, but you can also skip around.
If we don't have permission to read a file, we get the following message:
I/O error(13): Permission denied
An except clause may name more than one exception in a tuple.
With no arguments, read() reads the entire contents of the file:
>>> text = f.read()
>>> print text
Now is the timeto close the file
There is no space between "time" and "to" because write adds only the characters you give it.
A format sequence can appear anywhere in the format string, so we can embed a value in a sentence:
>>> cars = 52
>>> "In July we sold %d cars." % cars
'In July we sold 52 cars.'
A few points worth remembering:
- Careful error handling will motivate you to write clean, readable and efficient code in Python.
- You already know about different kinds of files, like your music files, video files and text files.
- An else clause is useful for code that must be executed if the try clause does not raise an exception.
- You can raise your own exceptions, for example: raise ValueError("That is not a positive number!")
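The permission-denied message above comes from an uncaught IOError. A minimal sketch of catching it when opening a file (the function name read_file is illustrative; this uses Python 3's except ... as e syntax, whereas the surrounding examples are Python 2):

```python
def read_file(path):
    """Return the contents of path, or None if the file cannot be opened."""
    try:
        f = open(path)
    except IOError as e:  # in Python 3, IOError is an alias of OSError
        print("I/O error(%d): %s" % (e.errno, e.strerror))
        return None
    try:
        return f.read()
    finally:
        f.close()  # runs whether or not read() raises an exception

# A missing file prints the I/O error and yields None instead of crashing.
print(read_file("/non/existing/file"))
```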
If the function that called inputNumber handles the error, then the program can continue; otherwise, Python prints the error message and exits:
>>> inputNumber()
Pick a number: 17
ValueError: 17 is a bad number
In a format string, the letter d stands for "decimal":
>>> cars = 52
>>> "%d" % cars
'52'
The result is the string '52', which is not to be confused with the integer 52.
AttributeError and TypeError are bugs/programming errors. We usually don't want to catch them at all: if they happen to a user, they indicate bugs in the program itself. Among the built-in exceptions, FloatingPointError is raised when a floating point calculation fails.
A simple example to demonstrate the finally clause:
try:
    x = float(raw_input("Your number: "))
    inverse = 1.0 / x
finally:
    print("There may or may not have been an exception.")
Files are usually stored on a hard drive, floppy drive, or CD-ROM. You cannot use / as part of a filename; it is reserved as a delimiter between directory and file names.
The finally clause is executed no matter what, and is generally used to release external resources; finally clauses are called clean-up or termination clauses, because they must be executed under all circumstances.
As an exercise, write code which opens a file in read-only mode, reads it line by line, and finds out the number of CPU(s).
If you have some suspicious code that may raise an exception, you can defend your program by placing the suspicious code in a try: block. The preceding part of an error message shows the context where the exception happened, in the form of a stack traceback.
To put data in a file we invoke the write method on the file object:
>>> f.write("Now is the time")
>>> f.write("to close the file")
Closing the file tells the system that we are done writing and makes the file available for reading. Leaving files open is not an issue in simple scripts, but can be a problem for larger applications. If there is no file named test.dat, opening it for writing will create it.
It is possible to create custom-made exceptions, and with the raise statement you can force a specified exception to occur:
class Networkerror(RuntimeError):
    def __init__(self, arg):
        self.args = arg
Once you have defined the class above, you can raise the exception as follows:
try:
    raise Networkerror("Bad hostname")
except Networkerror, e:
    print e.args
Sometimes an exception really means you have a bug in your code (like accessing a variable that doesn't exist), but many times an exception is something you can anticipate. Exceptions are everywhere in Python.
An else block has to be positioned after all the except clauses.
def inputNumber():
    x = input('Pick a number: ')
    if x == 17:
        raise ValueError, '17 is a bad number'
    return x
A finally clause is always executed, regardless of whether an exception occurred in the try block or not.
Dividing by zero creates an exception:
>>> print 55/0
ZeroDivisionError: integer division or modulo
So does accessing a nonexistent list item:
>>> a = [1, 2, 3]
>>> print a[5]
IndexError: list index out of range
To keep asking until a valid integer has been given, put the conversion in a loop that prints "Please try again ..." after each failure and breaks with "Great, you successfully entered an integer!" once the input is valid.
You don't need to know or care which platform your code is running on -- just call getpass, and it will always do the right thing.
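The Networkerror example above uses Python 2's except Networkerror, e syntax. Here is a sketch of the same idea in Python 3 (the class, the connect function, and the hostname check are illustrative, not from the original page):

```python
class NetworkError(RuntimeError):
    """A custom exception that records the offending hostname."""
    def __init__(self, hostname):
        super().__init__("Bad hostname: %r" % hostname)
        self.hostname = hostname

def connect(hostname):
    # Raise our custom exception for an obviously invalid hostname.
    if not hostname:
        raise NetworkError(hostname)
    return "connected to %s" % hostname

try:
    connect("")
except NetworkError as e:
    print(e)  # prints: Bad hostname: ''
```

Subclassing RuntimeError (or Exception) means callers can catch NetworkError specifically, or fall back to catching the broader base class.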
List of standard exceptions:
- Exception: base class for all exceptions
- StopIteration: raised when the next() method of an iterator does not point to any object
An else clause means: if there is no exception, then execute this block.
A raise statement can pass several arguments:
raise Exception('spam', 'eggs')
print type(inst) shows the exception's type, and print(inst) prints its arguments directly, because __str__ allows args to be printed directly (though this may be overridden in exception subclasses). This is true for all built-in exceptions, but need not be true for user-defined exceptions (although it is a useful convention).
In a file-reading loop, the flow of execution moves to the top of the loop, checks the condition, and proceeds accordingly. The only way to get out of the loop is to execute break, which happens when the text is the empty string, which happens when we get to the end of the file.
In the try block, the user-defined exception is raised and caught in the except block.
This typical Python code, without catching exceptions:
#!/usr/bin/env python
import sys
a = open("/non/existing/file", "r")
will result in an uncaught IOError and a traceback when run.
In fact, when several values are written with str, you can't even tell where one value ends and the next begins:
>>> f.readline()
'12.3[1, 2, 3]'
The solution is pickling, so called because it "preserves" data structures.
As an exercise, write a function that uses inputNumber to input a number from the keyboard and that handles the ValueError exception.
Glossary: file: a named entity, usually stored on a hard drive, floppy disk, or CD-ROM, that contains a stream of characters.
Exceptions come in different types, and the type is printed as part of the message: the types in these examples are ZeroDivisionError, NameError and TypeError. The Built-in Exceptions section of the Python documentation lists the built-in exceptions and their meanings.
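As a sketch of the pickling idea just mentioned, using only the standard pickle module (the variable names are illustrative):

```python
import pickle

# str() flattens values into indistinguishable text, but pickle
# preserves the structure and the types.
original = [12.3, [1, 2, 3]]
data = pickle.dumps(original)    # a bytes object encoding the structure
restored = pickle.loads(data)    # an equal list, types intact

print(restored == original)  # True
```

Unlike the '12.3[1, 2, 3]' string read back above, the restored object is a real list containing a float and a nested list, so the boundary between the two values is never lost.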
There are (at least) two distinguishable kinds of errors: syntax errors and exceptions.
So if an exception occurs between the try block containing the call to open and the with statement, the file doesn't get closed.
Here is a function that copies one file to another. The first argument is the name of the original file; the second is the name of the new file:
def copyFile(oldFile, newFile):
    f1 = open(oldFile, "r")
    f2 = open(newFile, "w")
    f2.write(f1.read())
    f1.close()
    f2.close()
raise: to signal an exception, use the raise statement.
assert: an expression is tested, and if the result comes up false, an exception is raised.
SystemError is raised when the interpreter finds an internal problem, but when this error is encountered the Python interpreter does not exit.
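The copyFile function above closes its files by hand, so an exception during the copy would leak open file handles. A with-based sketch avoids that (the function name copy_file is illustrative):

```python
def copy_file(old_name, new_name):
    # Both files are closed automatically when the with block exits,
    # even if reading or writing raises an exception mid-copy.
    with open(old_name) as src, open(new_name, "w") as dst:
        for line in src:
            dst.write(line)
```

Copying line by line also keeps memory use small for large files, instead of reading the whole file at once.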
Symbiosis and Predation in the World of Insects — Using Film Clips From Microcosmos
Subject: Science/Biology (Symbiosis & Predation; Ants, Aphids, & Ladybugs)
Ages: 5+; Elementary - High School
Length: Film Clips: 25 minutes; Lesson: one 45 - 55 minute class period.
Excerpts from the Complete Snippet Lesson Plan
Learner Outcomes/Objectives: Students will understand and retain striking images of symbiosis and predation in the world of insects.
Rationale: Symbiosis and predation are important concepts of biology. Seeing them in action will help students understand and remember these concepts.
Description of the Film Clips: This Snippet Lesson Plan contains four examples of symbiosis and predation in the insect world: (1) bees pollinating plants; the stamens of the flowers actually move to deposit pollen onto the bees; (2) a ladybug that is eating aphids is driven away by the ant which tends the aphids; the ant then strokes the aphids and harvests their honeydew; (3) grasshoppers are caught and eaten by a spider; (4) flying insects are caught by carnivorous plants.
USING THE FILM CLIPS IN THE CLASSROOM
1. Review the clips to make sure they are suitable for the class. Review the Lesson Plan and decide how to present it to the class, making any necessary modifications.
2. Retrieve from the Internet the additional film segments and photographs recommended below. Determine which are appropriate for the classes which will see the snippet.
Step by Step
1. Tell students that this class will be about symbiosis and predation. For students in the lower grades, define those terms. Tell students in the lower grades or elicit from class discussion the facts that bees pollinate more than just flowers and that much of our food depends upon bees pollinating fruit trees, vegetable plants, and beans. Ask students to think about what the world would be like without oranges, apples, tomatoes and other fruits and vegetables. 
[The Snippet Lesson Plan then contains suggestions for introductions and conclusions for each film clip and supplemental materials containing interesting examples of symbiosis.]
2. Start playing the film from DVD Scene 1. TWM recommends playing the film . . . .
TeachWithMovies.com's Movie Lesson Plans and Learning Guides are used by thousands of teachers in their classrooms to motivate students. They provide background and discussion questions that lead to fascinating classes. Parents can use them to supplement what their children learn in school.
Each film recommended by TeachWithMovies.com contains lessons on life and positive moral messages. Our Guides and Lesson Plans show teachers how to stress these messages and make them meaningful for young audiences.
Snippet Lesson Plans are based on short subjects or film clips. They are ideal for classroom use because the video segments are less than 40 minutes in length. Some Snippet LPs simply identify film clips and Internet resources. Others are complete lesson plans with introductions, handouts, discussion questions, and summative assessments.
Each TWM Snippet Lesson Plan Contains:
- Learner Outcomes/Objectives
- Exact Location of the Clip in the Movie, Film or Video
- Step-by-Step Instructions for Using the Clip in the Classroom
Learning Guides help teachers develop or improve their own lesson plans to maximize students' classroom experience. Many also feature introductions, handouts, and summative assessments.
Learning Guides Feature the Following Sections:
- Possible Problems
- Helpful Background
- Building Vocabulary
- Discussion Questions
- Links to Internet
- Bridges to Reading
- Assignments & Projects
$1 per month ($11.99 per year) for Lesson Plans and Learning Guides to hundreds of films.
SUPPLEMENT SCHOOL CURRICULUM! PROMOTE SOCIAL-EMOTIONAL LEARNING!
More suggestions about the beneficial use of movies in the classroom and to supplement curricula are added on a regular basis! 
The film clips selected from Microcosmos provide students with interesting examples of symbiosis in the world of insects.
A subscription to TeachWithMovies.com will give teachers access to 350 Snippet Lesson Plans, Learning Guides, and Movie Lesson Plans.
Subscribe Today and inspire your classroom with TWM's Snippet Lesson Plan on Symbiosis and Predation in the World of Insects Using Film Clips From Microcosmos
However, today’s story comes from a team of chemical engineers who are working to create squishy robots by designing a synthetic gel. The team, from the University of Pittsburgh‘s Swanson School of Engineering, US, has developed a computational model which has allowed them to design a new material. The material has the ability to reconfigure its shape and move using its own internally generated power. This ability to change was seen as a catalyst for the development of a soft robot. This research, undertaken by Dr Anna C. Balazs, Professor of Chemical and Petroleum Engineering, and Dr Olga Kuksenok, Research Associate Professor, uses a single-celled organism, Euglena mutabilis, as a model. E. mutabilis is able to process energy to expand and contract its shape. This results in movement. “Movement is a fundamental biological behaviour, exhibited by the simplest cell to human beings. It allows organisms to forage for food or flee from predators. But synthetic materials typically don’t have the capability for spontaneous mechanical action or the ability to store and use their own energy, factors that enable directed motion,” Anna said. She continued: “Moreover in biology, directed movement involves some form of shape changes, such as the expansion and contraction of muscles. So we asked whether we could mimic these basic interconnected functions in a synthetic system so that it could simultaneously change its shape and move.” This work has been published in the journal Scientific Reports under the title ‘Designing Dual-functionalized Gels for Self-reconfiguration and Autonomous Motion’. To mimic the mobility of E. mutabilis, Anna and Olga looked at polymer gels containing spirobenzopyran (SP), a material that can morph into different shapes by using light. A Belousov-Zhabotinsky (BZ) gel was also studied; BZ gels undergo periodic pulsations and can be driven to move in the presence of light. 
Olga explained: “The BZ gel encompasses an internalized chemical reaction so that when you supply reagents, this gel can undergo self-sustained motion. “Although researchers have previously created polymer chains with both the SP and BZ functionality, this is the first time they were combined to explore the ability of “SP-BZ” gels to change shape and move in response to light.” Anna and Olga’s work has managed to incorporate both the ability of SP gels to change shape with light and the mechanical action of BZ gels. According to Anna, there were unexpected results during their research: “Uniform light exposure doesn’t work. We had to place the light at the right place in order for the gel to move. And if we change the pattern of the light, the gel displays a tumbling motion. “We also found that if we placed the SP in certain regions of the BZ gel and exposed this material to light, we could create new types of self-folding behaviour.” Anna thinks these SP-BZ gels could enable the creation of small-scale soft robotics for microfluidic devices that can help carry out multi-stage chemical reactions. She said: “The next push in materials science is to mimic these internal metabolic processes in synthetic materials, and thereby, create man-made materials that take in energy, transform this energy and autonomously perform work, just as in biological systems.” The advantage of using polymer gels instead of metals to make soft robots greatly reduces their mass and thus improves the potential to complete a range of motions. “To put it simply, in order for a robot to be able to move more autonomously in a more biomimetic way, it’s better if it’s soft and squishy,” Olga says. “Its ability to grab and carry something isn’t impeded by non-flexible, hard edges. You’d also like its energy source incorporated into the design so that it’s not carrying that as extra baggage. 
The SP-BZ gel is pointing us in that direction.” I would like to congratulate the team on their excellent research and look forward to seeing squishier robots in the future!
Descriptive Crystallography for Gemologists Crystal Systems Review When crystals form, their atoms and molecules lock together in periodic arrays, much like three-dimensional wallpaper patterns. These arrays have various types of symmetry. Gemologists classify them into six major crystal systems. Some mineralogists consider the trigonal subclass of the hexagonal system as a seventh crystal system. Each crystal system is defined in terms of crystal axes and angles. - Crystal axes are imaginary lines in space between the sides of the crystals. They intersect at a common point. Their lengths may be described as equal or unequal to each other. - The crystal axes intersect each other at various angles. The angles further describe the crystal systems. Descriptions of Crystal Structures Crystallography uses additional descriptive terms to explain the crystal structures exhibited by various mineral species. These include the following. Crystals that form in a prismatic structure have well-developed, elongated, prism-like crystal faces. A bladed crystal has slender and flattened blade-like formations rather than prism-like faces. Acicular crystal formations feature slender, possibly tapered, needle-like crystals. Filiform crystals are hair-like and extremely fine. Equant crystals have lengths, widths, and breadths roughly equal in size. Sometimes, equant crystals are referred to as stout crystals. Crystals that form pyramidal structures resemble single or double pyramids. Tabular formations feature a tablet shape, with crystals slightly longer than they are wide. You may encounter other terms for describing a mineral’s appearance, such as octahedral (8-sided) and pyritohedral (12-sided). However, these terms are typically used in conjunction with a specific crystal system. Descriptions Based on Aggregation States Crystallography uses more terms to describe crystals based on their aggregation states. These terms include the following. A solid, chunky aggregate.
A dense, solid aggregate. Denotes a crystalline mass that can be cleaved. Composed of a mass of compact grains. Describes aggregates that resemble stalactites. Aggregates composed of masses of spherical grains. Aggregates made of masses of densely packed powder. Gem Formation and Descriptive Crystallography A mineral’s growth process and formation environment largely determine its appearance. For example, minerals that form in sedimentary environments tend to be earthy, stalactitic, oolitic, and sometimes massive. On the other hand, igneous minerals tend to be crystalline or massive, sometimes cleavable. Although these terms are somewhat subjective, they serve to give gemologists a mental image of a mineral’s appearance as it occurs in the Earth.
Dogs poop according to Earth's magnetic fields New study reveals this bizarre conclusion You may have seen the study which came out a while back, about how cows grazing in the field align themselves according to the Earth's magnetic field. Someone studying satellite imagery of cattle discovered this strange quirk when they noticed that in most aerial photos of grazing cows, the animals are lined up north to south, usually with their heads facing north. It turns out that the same is true for dogs. Except when they poop. If you have ever watched a dog getting ready to poop, you may have noticed it circling and circling with a fretful expression, as if it's trying to find the exact right spot to poop. You may have wondered what constitutes "the exact right spot to poop" for a dog. Well, the answer may be that your dog is mentally feeling around for the Earth's magnetic field, trying to align itself parallel to these invisible lines of force with its head facing north. It's not too big a stretch to suggest that dogs can sense magnetic polarity. Many animals are known to use a magnetic sense to guide them on long journeys, like migrating birds, and caribou on their annual trek to the Arctic. Dogs are the descendants of a pack animal which ranged across long distances, so it would make sense if they had some of this homing ability. However, suggesting that they use it to line themselves up when they poop is certainly a novel conclusion. Such was the result of a paper recently published by a team of Czech and German researchers who sampled hundreds of dogs performing several thousand actions. The dogs were categorized by breed and gender, and their pooping orientation analyzed. One way in which this research went further than past studies is that the researchers measured the strength of the magnetic field at the time of each observation. This was reportedly done to counteract the effect of magnetic field scattering, a common issue, especially in urban areas.
It is the researchers' contention that magnetic field scattering explains why previous studies have not found a correlation between dog behavior and magnetic north. This is an interesting hypothesis and a reasonable assumption. However, it also raises the possibility of selection bias, if the researchers are throwing out some data. This would be a great study for kids to replicate as a science project!
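One practical detail behind "orientation analyzed": compass bearings can't be averaged arithmetically (the mean of 359° and 1° is not 180°), so animal-alignment studies typically use circular statistics. Below is a sketch of the standard vector-averaging approach, not the researchers' actual code, with made-up bearings: the resultant length R is near 1 when headings are tightly aligned and near 0 when they are random.

```python
import math

def circular_mean(degrees):
    """Mean direction and resultant length R of a set of compass bearings."""
    x = sum(math.cos(math.radians(d)) for d in degrees)
    y = sum(math.sin(math.radians(d)) for d in degrees)
    n = len(degrees)
    R = math.hypot(x, y) / n                 # 1 = perfectly aligned, 0 = uniform
    mean = math.degrees(math.atan2(y, x)) % 360.0
    return mean, R

# Hypothetical bearings clustered near north (0 degrees), as reported for dogs:
mean, R = circular_mean([350, 5, 10, 355, 2])
```

A kid replicating the study could record one bearing per observation with a phone compass and feed the list to this function.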
A court of common pleas is a court in the United States that handles civil trials at the state level. These courts are in the minority in the U.S., because their function usually is performed by superior courts or trial courts. Courts of common pleas derive their basic structure — and their name — from the English common law system as it was in force when the U.S. was a British colony. The United Kingdom abolished its court of common pleas system during the 1800s, and most U.S. states followed suit about that same time. As of late 2011, only four U.S. states operated courts of common pleas: Pennsylvania, Delaware, Ohio and South Carolina. At the time of the original English settlement in the early 1600s, all courts in the U.S. were actually English courts, because the states were considered Crown Colonies. As such, they followed the English court system’s design in both form and function. England’s court system during colonial times was divided into two main pieces: the King’s Bench and the Common Bench. The King’s Bench heard cases involving the King, usually instances of treason or violation of national laws. By contrast, the Common Bench, which was also known as the Court of Common Pleas, dealt with disputes between citizens. The national government was not a party to these disputes and had no vested interest in the outcome. The United Kingdom court system no longer supports courts of common pleas. These courts were merged into the King’s Bench in 1873. There still is a difference between claims brought between citizens and claims brought by the government against a citizen or private entity, but there are not separate court systems for each — just different methods of hearing the claims. Civil cases usually are heard in magistrate’s courts, which serve as the bottom rung on a ladder of ascension through the court system. Most U.S. states made a similar change around the same time.
Courts of common pleas generally became superior courts or trial courts and were absorbed into the larger state courts systems as they developed. Like the U.K. magistrate’s courts, these courts now serve as courts of primary jurisdiction for a range of civil matters, such as family disputes and business conflicts. Trial judges will hear the disputes then issue rulings that can be appealed all the way to state supreme courts and sometimes even to the highest U.S. court, the Supreme Court. In this way, the trial courts are connected to the larger national court system, albeit at a lower, more introductory level. The four states that maintain a court of common pleas do so more in name than in actual function. These courts do not work like an early English court of common pleas would have, in the sense that they are not divorced from the state’s other judicial branches. They maintain their name largely out of tradition, and they function, in most cases, just as a superior or trial court would. United States law permits different states to order their courts independently, but all of them follow a similar pattern of jurisdiction and appeals.
The flashcards below were created by user on FreezingBlue Flashcards. A disease that is caused by infection or one that is capable of being transmitted. Any disease that can be spread from person to person or from animal to person. The invasion of a host or host tissues by organisms such as bacteria, viruses, or parasites, with or without signs/symptoms (S/S) of disease. A microorganism that is capable of causing disease in a susceptible host. The way in which an infectious agent is spread. Name and describe the routes of transmission. - Contact Transmission - Direct Contact: Physical contact with infected person - Indirect Contact: Contact with a contaminated object - Airborne: Spread in aerosol form - Foodborne: Contaminated food or water - Vector-borne: Spread by insect or animal Inflammation of the liver, usually caused by a virus, that causes fever, loss of appetite, jaundice, fatigue, and altered liver function. Pathogenic microorganisms that are present in human blood and can cause disease in humans. The federal regulatory compliance agency that develops, publishes, and enforces guidelines concerning safety in the workplace. Occupational Safety and Health Administration (OSHA) The presence of infectious organisms on or in objects such as dressings, water, food, needles, wounds, or a pt.'s body. The primary federal agency that conducts and supports public health activities in the U.S. Centers for Disease Control and Prevention (CDC) Protective measures that have traditionally been developed by the CDC for use in dealing with objects, blood, body fluids, or other potential exposure risks of communicable diseases. The person in the department who is charged with the responsibility of managing exposures and infection control issues. Protective equipment that OSHA requires to be made available to EMS providers. Personal Protective Equipment (PPE) A chronic bacterial disease that usually affects the lungs but can also affect other organs such as the brain and kidneys.
A situation in which a person has had contact with blood, body fluids, tissues, or airborne particles that increases the risk of disease transmission. Procedures to reduce transmission of infection among pts. and health care personnel. The body's ability to protect itself from acquiring a disease. The number of injured people. Usually expressed as a rate, meaning the number of nonfatal injuries in a certain population in a given time period divided by the size of the population. The tactical use of an impenetrable barrier to conceal EMS personnel and protect them from projectiles. The use of objects such as shrubs and bushes to limit a person's visibility of you. Name the body's three-stage response to stress and the three stages. - General Adaptation Syndrome - Alarm Response - Reaction and Resistance - Recovery or Exhaustion A reaction to stress that occurs during a stressful situation. Acute Stress Reaction A reaction to stress that occurs after a stressful situation. Delayed Stress Reaction Prolonged or excessive stress. Cumulative Stress Reaction A delayed stress reaction to a previous incident. Characterized by reexperiencing the event and overresponding to stimuli that recall the event. Posttraumatic Stress Disorder (PTSD) A process that confronts responses to critical incidents and defuses them. Critical Incident Stress Management (CISM) Term for good stress Term for bad stress Name the body's three types of fuel. Name the five stages of the grieving process. - Anger, Hostility
Surprisingly, the natural folds and troughs of the human brain are uncommon in the rest of the animal kingdom, shared only by dolphins, some primates, and a few other animals. Scientists have understood for years why the human brain is folded, but until recently had no idea how a fetal brain folds. [Image Source: Harvard] From an anthropological viewpoint, it is advantageous to have a folded brain, as this means a greater surface area accompanied by less distance between cells. The folds of the brain start developing in the 20th week of pregnancy and continue until a child is about a year and a half old. There have been many theories of how the human brain develops its folds, but none could be scientifically tested, until now. Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences, along with scientists from France and Finland, have found a method to demonstrate their theory of how a folded brain develops. A realistic gel model of a fetal brain, based on MRI images, was developed, then coated in a thin layer of an elastomer analog. This outer layer of elastomer gel represents the cortex, with the inner layer serving as the basic brain structure. The entire brain model was then placed in an aqueous solution, allowing the outer gel (cortex) to absorb the solvent. Rapid physical expansion occurred in the outer layer, creating mechanical compression forces, which in turn created folding. [Image Source: Harvard] It was also noted that the initial geometry of the brain structure is important, as it orients the mechanical folds. Main areas of the brain are defined through this essential brain folding, which is one of the main reasons this finding is so important. “Brains are not exactly the same from one human to another, but we should all have the same major folds in order to be healthy,” said researcher Jun Young Chun.
Through the study of the brain's folding, scientists may be able to predict developmental disorders and determine how specific brain structure is related to human development. On top of creating a physical model of a 22-week-old fetal brain, researchers designed a computer model to demonstrate the same principles, seen below. The folds seen in the fetal brain model aren't random, either; in fact, they closely resemble different regions of a real human brain. It is important to note that this test only proves theories of how the brain folds and does not serve as an example of how a brain grows in size during fetal development. From here, the research can transition into a deeper study of the dynamics of the human brain, possibly leading to further advances. The researchers who carried out the test remain struck by how well their model resembled an actual brain in both growth and structure.
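The compression-driven folding described above is an instance of a well-studied elastic instability: a growing film on a softer substrate buckles at a preferred wavelength. As a rough back-of-the-envelope illustration (this is the classical thin-film wrinkling scaling from elasticity theory, not the equation used in this study, and the thickness and modulus ratio below are assumed values), the relation λ = 2πh(Ē_film / 3Ē_substrate)^(1/3) predicts a fold spacing proportional to cortical thickness:

```python
import math

def wrinkle_wavelength(h_mm, stiffness_ratio):
    """Classical wrinkling wavelength for a film of thickness h_mm on a
    softer substrate; stiffness_ratio = film modulus / substrate modulus."""
    return 2.0 * math.pi * h_mm * (stiffness_ratio / 3.0) ** (1.0 / 3.0)

# Assumed inputs: cortex ~2.5 mm thick, cortex and core of nearly equal stiffness.
lam = wrinkle_wavelength(2.5, 1.0)   # roughly centimeter-scale fold spacing
```

With these assumed numbers the predicted spacing comes out on the order of a centimeter, the same order as the spacing of real gyri, which is why thickness-driven scaling arguments are taken seriously in this field.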
The first introduction to psychology often comes in the form of biology classes. Many biology students come into class with at least a basic understanding of psychology. They know that their genes determine how their bodies work, how they function and, to a certain degree, how they act or what illnesses they may develop. But very few of these students have a clear understanding of what exactly DNA is, where it is found in the body, why it causes problems, and how it can be manipulated or altered. In the case of development, the genes passed from one generation to the next only need to survive. Genes are merely instructions for doing things. People, like all living things, are programmed through thousands of years of natural selection to engage in behavior that is survival oriented. The foundation for this programming is the expression of specific genes that cause specific traits, such as aggressiveness, violence or sexuality. In the case of psychology, the genes that are passed on to us through our parents, grandparents, or other kin will determine such behavior. When it comes to understanding what is going on genetically, we are still in the age of molecular biology. In this framework, genes are simply packets of information carrying directions. This is the way humans, plants and animals have been understood to evolve for centuries. Nevertheless, in the past 50 years or so, a revolution known as molecular biology, or genomics, has taken place. Genomics provides a new lens through which we can see the relationships between behavior and genes. The molecular basis for human memory and behavior is actually quite simple: it is all about the epigenome. The epigenome is a mobile memory storage that determines whether or not a behavior will be expressed. Like all memory storage systems, it contains information that is “programmed” in advance by the genome.
What we now know is that the genetic material that determines behaviour exists in all of us, but in varying quantities. Most of the variation comes from variation in the copies of genes inside the cellular memory storage of the person. The specific copy of the gene that determines the behaviour is what we call the epigenome. The importance of the epigenome in psychology and its relationship to individual differences was revealed in a landmark study on twins. For many years, autism research was based upon twin studies. However, it was discovered that substantial heritability existed between people who were identical twins but whose traits were quite different. This study provided the first evidence of the significance of the epigenome in human behaviour and its connection to behavioral disorders like autism. Even though the significance of the epigenome in psychology was established, many in the field are reluctant to accept its potential as a significant element in mental illness. One reason for this is that it is difficult to define an actual genetic sequence or locus that leads to a behavioral disorder. Another problem is that there are simply too many genetic differences between individuals to use a single DNA sequence to determine mental illness. Finally, even though the research on the epigenome has been promising, more work needs to be done to find out the role that genetics plays in complex diseases such as schizophrenia. If this finding holds true, it may be used as a foundation for analyzing other complex diseases that have complicated genetic elements. If you’re interested in learning more about epigenetics and how it applies to psychology, I strongly advise that you follow the links below. My site discusses the exciting new technologies that are available today to better understand how epigenetics affects behavior and the susceptibility to disease.
You can also hear me speak on my epigenetics and autism blog. My research into Epigenetics is centered on understanding the environmental causes of disease, but I have also been involved in analyzing the relationship between Epigenetics and Autism. My future posts will also discuss diseases of the mind that can be impacted by Epigenetics.
What you'll learn - What were the foreign models for a Chinese republic? - How the modern Chinese state was built on the ruins of the previous empire. - The impact of China’s war against Japan. - How China’s relationship with the U.S. follows established patterns. - The role of leaders like Sun Yat-sen, Chiang Kai-shek, and Mao Zedong. What does it mean to be modern? What constitutes modern politics, modern institutions, a modern military, and modern infrastructure? In this period of great excitement and experimentation, the country is asking itself: How do you become modern and remain true to the Chinese national identity? This course will explore enduring issues around Chinese modernity, with a focus on the creation of the modern Chinese state during the Republican era. You’ll learn about China’s war against Japan, about long-term patterns in U.S.-China relations, and about the role of individual leaders against the backdrop of historical circumstance. Ultimately, you’ll learn different ways to study and understand history. We explore this period thematically rather than chronologically, providing you with a better understanding of how political context influences the interpretation of history. Harvard Faculty of Arts & Sciences
Year 3 Deer CURRENT CURRICULUM THEME: Our big question is: What makes a balanced diet? We are currently exploring which nutrients are needed in our foods to help us grow strong and healthy, and how we need a healthy balance of all these important nutrients. We are learning what each nutrient does for us and how we can improve our diets to stay happy and healthy. In addition, we will be exploring skeletons in the animal kingdom, as well as looking in detail at human bones and their functions. WE ARE READING: - Charlie and the Chocolate Factory by Roald Dahl - The Great Chocoplot by Chris Callaghan - Numerous non-fiction texts on nutrition and skeletons - Recipe books
BIOS, in computing, stands for Basic Input/Output System. The BIOS is a computer program embedded on a chip on a computer's motherboard that recognizes and controls various devices that make up the computer. The purpose of the BIOS is to make sure all the things plugged into the computer can work properly. It brings life to the computer, and the term is a pun on the Greek word βίος (bios), meaning "life". "Booting up" is the process that the computer completes to get itself ready to use when it is first turned on. When the computer turns on, the BIOS starts up and performs a Power-On Self Test (POST). During the POST, the BIOS checks various devices in the computer, like the computer processor, memory, the video card and others, to make sure they are present and functioning. Once the POST has completed successfully, the BIOS looks for an operating system to load, which is usually located on the computer's hard drive. When it finds one, it starts to load it. At this point, the operating system takes over control of the system. Some people try to configure their computer to run faster than it was designed to. This is called "overclocking". Since the BIOS controls the computer processor, entering the BIOS setup screen while the computer is booting up gives people access to advanced computer settings. For most computers, pressing the Delete or F12 key while the computer is booting up will bring up the BIOS setup screen. There are many ways a computer can be overclocked to run faster, but most involve simply turning up the speed of the computer processor. Doing this will usually make your computer run much hotter, though, and can sometimes break your computer, which is why overclocking will usually void your computer's warranty. Only people who are computer hardware experts should attempt to overclock their computers. - "Henry George Liddell, Robert Scott, A Greek-English Lexicon, βίος".
www.perseus.tufts.edu. Retrieved 2017-03-10.
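The boot sequence described above (POST device checks first, then a search for a bootable operating system) can be sketched as plain control flow. This is a conceptual illustration only, not real firmware code, and the device names are placeholders:

```python
# Conceptual sketch of the BIOS boot flow described above (not real firmware).
REQUIRED_DEVICES = ["cpu", "memory", "video card"]

def power_on_self_test(present_devices):
    """POST: verify each required device is present; return what's missing."""
    return [d for d in REQUIRED_DEVICES if d not in present_devices]

def boot(present_devices, boot_order=("hard drive", "usb", "network")):
    missing = power_on_self_test(present_devices)
    if missing:
        return "POST failed: missing " + ", ".join(missing)
    for device in boot_order:            # look for an OS to load, in order
        if device == "hard drive":       # placeholder: pretend the OS lives here
            return "loading operating system from " + device
    return "no bootable device"

result = boot(["cpu", "memory", "video card", "hard drive"])
```

The key point the sketch captures is the hand-off: control only passes to the operating system after every POST check succeeds.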
Moving Frames of Reference. Copyright ©2004-2014 David V Connell. In these articles, an inertial frame of reference (FoR) is one where no external energy is being supplied to change its speed. A “moving” FoR is one that has been accelerated to a constant speed relative to the home frame by externally applied energy. A FoR attached to an object in free fall in a gravitational field does not qualify as a moving frame for relativistic purposes, as its total energy is not changed, but it does qualify as an inertial frame. A FoR attached to a particle on a disc rotating at a constant speed is both inertial and moving. First, a derivation of the equivalence of mass and energy is given, as it is accepted to be true in some of the articles on this website. Consider a photon emitted by a light source. It is moving at light speed c relative to its source and has energy E, but no mass and therefore no momentum; yet, when it strikes mass, it exerts a force with the attributes of momentum. It was well known in the late 1800s that momentum has a quantitative value of energy divided by speed. Therefore the photon's apparent (virtual) momentum, when it strikes some mass, is E/c, and this has to be equivalent to the momentum of a mass M travelling at the speed of light, i.e. Mc. Therefore, E/c = Mc, or E = Mc². This means that mass is concentrated energy, and in appropriate circumstances mass and energy can be converted from one to the other. For this discussion we will assume that a material object A is accelerated to some speed relative to an observer B by applied energy. There are two basic places for observation: one is where the observer travels with the object A, and he is then said to be in the object's "own" frame of reference (FoR); the other is in any other FoR, called an "external" FoR herein, one of which becomes the chosen frame (often referred to as the stationary frame) from which the observer B observes the moving object A.
The principle of Conservation of Total Energy (which includes mass since it has been shown to be a concentrated form of energy) indicates that when mass is accelerated to a speed by the application of energy, the energy transferred to the object takes the form of kinetic energy (KE) in external FoRs, but, in its own FoR the object always has zero velocity so it cannot have kinetic energy. Therefore, in its own FoR, as it cannot be created nor destroyed, the applied energy (E) must be stored in the object (often called potential energy) as mass. From the mass/energy equation, the stored mass is E/c², where E is also equal to the external KE and the new mass M is equal to the original "rest" mass Mo + KE/c². Thus, observer B measures the speed (V) of A and calculates its kinetic energy from the classic definition, KE = MoV²/2. This assumes that all the applied energy is utilised to obtain speed, so that none of it is diverted to increasing the mass in external FoRs. That is, the maximum unrestricted speed is obtained from the applied energy. From above, the new mass M of object A in its own FoR, is given by M = Mo(1 + V²/2c²) in home frame units (not the same as Einstein's equation, where the maximum speed was restricted to that of light), and the mass of A as observed from external frames remains at Mo. No problem so far? Some people do have a problem with it and assume the mass gain occurs in all FoRs, but what follows is where some, even highly qualified physicists (!), can get it wrong. When changing this example so that an observer at A observes an object in B to be apparently moving, a simple calculation of the total energy of B, being its original rest mass plus its KE, offends the Conservation principle, as it seems to have increased by the amount of the KE. This cannot have occurred as no energy has been applied to object B. This apparent anomaly is solved for this situation as follows. 
Firstly, consider that the total energy of an object (mass plus any kinetic energy plus any other form of energy associated with it) cannot be different by merely observing it from different frames. Therefore the Total Energy of an object must be the same in all FoRs. From the discussion above, it is known that the mass of object A has been increased by the absorbed energy; therefore the mass of B is reduced relative to the mass of A, so the reduced mass must be used in the calculation of the total energy of B as measured by A, which is therefore mc² + mV²/2, where m is the apparent (reduced) mass. This must be equal to its original total energy, so Moc² = mc² + mV²/2, or Mo/m = 1 + V²/2c², which is the inverse of M/Mo, as one should expect from relativity for this circumstance. Thus, only the mass in an object's own FoR can be real mass, so that any relativistic changes (which are dependent on a change in mass) are independent of the FoR of the observer. Thus, the Doppler Effect for light does not qualify as a relativistic effect; it does not change any property of the source (mass, frequency), it is only an optical effect of the relative velocity of the observer. To end, just a few words on relativistic momentum (MV); all textbooks assume M is the relativistic increased mass at velocity V. But, for unrestricted motion, M is only increased in its own FoR, and V belongs only to external FoRs, so MV is a mixture of FoRs and cannot be correct. Therefore, only MoV is valid for unrestricted motion. It is shown in Natural Relativity (Section III.B) that mass can increase in external frames if motion is restricted (with no dissipation of energy), but is only ever equal to M when V is zero, and it is then the Static case, such as when energy is added to an object to lift it against the force of gravity and there is no resulting motion (V=0).
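To make the arithmetic of the essay's formulas concrete, the sketch below evaluates the author's relation M/Mo = 1 + V²/2c² and its claimed inverse Mo/m for one sample speed. Note the hedges: the sample speed is an arbitrary choice for illustration, and these are the author's non-standard formulas, not standard special relativity (where the factor would instead be γ = 1/√(1 − V²/c²)).

```python
# Numeric illustration of the author's formulas (not standard relativity).
C = 299_792_458.0                      # speed of light, m/s

def own_frame_mass_factor(v):
    """Author's claim: M/Mo = 1 + V^2/(2c^2) in the object's own frame."""
    return 1.0 + v**2 / (2.0 * C**2)

def observed_mass_factor(v):
    """Author's inverse relation: m/Mo = 1 / (1 + V^2/(2c^2))."""
    return 1.0 / own_frame_mass_factor(v)

v = 0.1 * C                            # arbitrary sample speed: one tenth of c
M_over_Mo = own_frame_mass_factor(v)   # author's own-frame mass ratio
m_over_Mo = observed_mass_factor(v)    # author's externally observed ratio
```

At V = 0.1c the author's factor is 1.005, i.e. a 0.5% mass change, and the two ratios are exact reciprocals, which is the inverse relationship the paragraph above derives.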
Want to know how society’s values are replaced over time? Or how its structures change their form? Here’s an interesting article that will bring you to a deeper perspective on the causes of social change. Factors of Social Change. Any significant changes in societal and cultural patterns will likely cause social change (the alteration of human interactions and social structures). This is a universal phenomenon but cannot be predicted in a definite manner. All changes in society will, in turn, yield consequences, affecting future trends, individuals’ lives, and the status quo (the current structures, values, and state of a group), whether over the short or long term. Social movements and other collective behaviors are acknowledged by sociologists as the top factors in inspiring societal shifts. However, there are also other key factors that play a vital role in prompting social change. One of them is technological innovation. The historical evolution of societies is largely defined and directed by each epoch’s technological advancement. In pre-industrial society, most of the production from factories and mills was dependent on power from water, wind, man, and even horses. But the introduction of steam engine technology greatly changed the way of life of the members of society, paving the way for the rise of the Industrial Revolution. As societies continued to develop technologies, infrastructures, and industries, the modernization process began, further increasing the differentiation among structures and the degree of work specialization. At present, with the invention of new technologies, we can see industrial societies clearly moving toward becoming digital or information societies (post-industrial societies), which are centered on producing information and services.
These advancements brought about by technology benefit many and have made lives easier; but because of the difference between developed and developing countries, an unintended effect of technological innovation is the digital divide. This term is used to describe the increasing gap between regions that have access to modern information and communications technology and those that have little or no means to acquire it. Another thing that bears an impact on social order is the physical environment. Natural disasters such as earthquakes, hurricanes, volcanic eruptions, tsunamis, and droughts have resulted in social disorder and continue to pose a threat to the stability of a society. Climate change has forced several populations to migrate, especially those living near coastlines and waterways, as they are more vulnerable to flooding due to rising sea levels and storms. Many groups are advocating for environmental sustainability, or the degree to which the quality of human life is improved and its activities are sustained without undermining the earth’s supporting ecosystems. But most of the environmental issues today are also closely linked to economic, cultural, and political factors that historically shaped power relations and the unequal access of regions to resources. Sociologists have identified a pressing concern called environmental racism, where ethnic and racial minority groups and members of the low socioeconomic class have a disproportionate exposure to hazardous and dangerous items such as toxic waste facilities, dust, and other sources of environmental pollution. In effect, these minorities have recorded higher rates of chemical poisoning, cancer, and even birth defects. The population has profound implications for social change as well. The population's composition (its fertility, mortality, and migration rates) and its size will most likely cause changes in the way many of its social institutions deliver services.
In fact, it will be a determining factor in how social institutions should be organized in order to continuously address the needs and concerns of the population. For instance, if an area’s fertility rate (the number of children born) is high, economic growth may be hampered, and this can reflect a lack of access to birth control and education. The fertility rate is usually lower than the fecundity number, or the measured number of offspring that women of childbearing age could give birth to. In contrast, a high mortality rate (the measure of the frequency of deaths in a given population within a particular time interval) might indicate poorer standards of living and the unavailability of quality health services in the region. Another population factor relevant to society’s order is migration, or people’s voluntary or forced movement into and out of a specific area. This movement can take the form of either immigration or emigration. The difference is in the direction of the action: to immigrate is to enter another country or place to establish permanent residence, whereas to emigrate is to leave a country or place to settle permanently elsewhere. Any substantial change in the population will create a domino effect on society’s structures and institutions. Last in this list are social institutions (mechanisms of social order that address the needs of society). When one area of a single institution shifts, all social institutions bear the impact, since they are interconnected. Family, religion, and school, for example, play big parts in molding the cultural values and behaviors of individuals through socialization (a lifelong process of internalizing society’s values, norms, and beliefs, which helps an individual adapt and adjust).
But a change in the dominant message propagated by the media across its various platforms will likely influence what society considers normal and acceptable. Moreover, studies find that many social ills can be traced back to the breakdown of the family, the most essential structure of society. All of the factors mentioned above are interconnected. Because of this relationship, it takes in-depth study to identify the primary cause of a change in society, as just one substantial movement might trigger the other aspects of change as well. Theories of Social Change. In order to understand and explain social change, sociologists make use of three main theories: evolutionary, functionalist, and conflict theories. Evolutionary theory is inspired by the work of Charles Darwin and his theory of biological evolution. It holds that all societies move in specific directions, progressing continuously. This higher state is achieved using scientific methods, as Auguste Comte assumed. Along with him, Émile Durkheim and Herbert Spencer believed that all societies pass through roughly the same sequence of evolutionary stages toward a common end, moving from simple to more advanced and complex social structures. These views comprise the unilinear evolutionary theories. On the other hand, an American sociologist, Gerhard Lenski Jr. (1924-2015), laid out a different view called multilinear evolutionary theory, proposing that societies develop along different lines and that change does not necessarily steer in the same direction. The leading functionalist sociologist Talcott Parsons (1902-1979) emphasized that society’s natural state is balance and stability. In his equilibrium theory, changes stemming from factors like population and technology will threaten the social order unless other aspects of society make appropriate adjustments.
Equilibrium will be disrupted temporarily, but continuing progress will occur once society moves toward finding balance. If the equilibrium theory thinks of social change as a disruption to society’s homeostasis, conflict theory sees it as desirable and necessary. Karl Marx held that societies do not simply improve and progress over time; he argued that each stage brings harsher exploitation of the poor and a more favorable situation for those in power. From its basic assumption that social structures contribute to and help maintain inequalities, social change, particularly the socialist revolution, is the way to emancipate people and regain freedom.
In March 2021 the James Webb Space Telescope will be put in orbit to become the successor of the Hubble Space Telescope, which has been operating in low orbit since 1990. The JWST has been in development for more than 20 years and will allow us to better understand the origins of the universe and the formation of stars and galaxies, and hopefully even get direct images of exoplanets. But why do astronomers put telescopes in space when the ones on the ground seem to work well enough? There are five main reasons why putting a telescope in space has an advantage over one on the ground. - No light pollution - No night-time limit - Better resolution - Greater wavelength spectrum - No bad weather Let’s take a look at each of them individually. Advantages of Space Telescopes 1. No light pollution Light pollution is the effect city lights have on the sky, making it brighter and therefore making it harder to look at the stars. This is why in big cities only a few of the brightest stars can be seen in the night sky. It is also why advanced modern observatories are built in remote areas away from this pollution. Putting a telescope in space solves this problem, as there are no artificial lights to interfere with the receiving mirrors. 2. No night-time limit The light from the Sun is so bright that it limits our telescope time on the ground to nights only, as it doesn’t allow us to look at the stars during the day. Because there is no night or day in space, you can use a telescope all the time, almost tripling its effective observing time. This helps astronomers study a lot more of the universe. But wait. Wouldn’t sunlight be brighter in space, as there is no atmosphere? Doesn’t that affect space telescopes? Well, not really. You can simply plan for that and point the telescope the other way. Because light needs to bounce off something to be seen, there is no atmosphere in space to scatter sunlight and brighten the sky around the telescope.
If there’s something you want to study that is behind the Sun, you can wait a few months until Earth’s orbit puts it on the other side and then take a look at it. 3. Better resolution As you will see through this little list, Earth’s atmosphere is a big problem for looking at the stars. One of the many issues it creates is that it slightly distorts the light that passes through it and scatters some of it. This means the images we receive have a lower resolution and are distorted. When you are dealing with objects very far away, whose light is already dim when it reaches us, this limits our ability to look at some areas of the sky with precision. Some fancy new techniques exist to solve the resolution problem, like using lasers and supercomputers to process and sharpen the images. However, the most effective way to bypass the problem is to simply put the telescope beyond the atmosphere. 4. Greater wavelength spectrum You might remember from your early science classes that Earth’s atmosphere bounces off or absorbs certain types of radiation, specifically certain wavelengths in the ultraviolet and infrared spectrum. This works out great for us humans, as these types of radiation are deadly to us, but it is bad for astronomy purposes, as it means we only get a small amount of the information. Scientists can use X-ray, gamma-ray, and other types of UV radiation to determine with more precision the temperature and composition of stars and planets. Because this information doesn’t reach us down here on the ground, sending a telescope to orbit where the radiation can be collected is really useful. 5. No bad weather I can’t tell you how many hobbyist astronomers’ nights have been ruined by a cloudy, foggy and/or rainy night. Just like night-time, weather reduces the time window we have for observing the sky. No clouds in space means you can use the telescope all the time.
The disadvantages of space telescopes On the other side of the coin, there are also some disadvantages to putting a telescope in orbit. If putting one up there were an easy task, there would be no reason for us to keep building big observatories down here. Let’s take a look at some of the most negative aspects of space telescopes. The first is cost; this is the big one, really. Building a 44-foot telescope with perfectly aligned mirrors, ultraviolet sensors, and a bunch of cameras that can withstand the hostile environment of space is expensive. And that is just the beginning. You still have to put over 11 tons in a rocket and launch it into space. After that, you have to send regular maintenance missions if something goes wrong. The cost of putting the Hubble Telescope in space was $4.7 billion. In 2010 an estimate of the cumulative costs, including maintenance, put the total cost of the Hubble at around $10 billion. The second is maintenance. If a critical instrument in the telescope breaks or malfunctions for any reason, replacing it is no easy task. You have to send trained astronauts up there to repair it and hope you actually know what the problem is. If you misdiagnose it on Earth and the problem turns out to be completely different, you would be wasting an entire mission. The Hubble has undergone at least five major servicing missions in which its mirrors, gyroscopes, solar panels, cameras, and other instruments have been replaced. But this is expensive and very work-intensive. At some point during its early years, NASA even considered just abandoning the project after they discovered one of the mirrors was not polished correctly and was sending back blurry images. In fact, at that time, the Hubble was considered a failure and a waste of money. It wasn’t until the first servicing mission in 1993 that it started working as intended. How many telescopes are there in space? There are currently around 25 active space telescopes.
Most of them are in Earth orbit, with only a few exceptions like the Spitzer Space Telescope, which was sent into orbit around the Sun in 2003. Most space telescopes are also built for specialized frequency bands like gamma-ray, X-ray or infrared. Only a few, like the Hubble, are focused on the visible light spectrum. You can find a list of all active and defunct space telescopes at this link. Some really cool space telescopes are planned to be launched within the next few years. For example, a private initiative, the International Lunar Observatory Association, hopes to put a small telescope (ILO-1) on the south pole of the Moon as an initial test for a permanent Moon observatory. As mentioned in the introduction, the James Webb telescope will also be launched in 2021 (if there are no additional delays) and it will become a more advanced version of the Hubble.
Measuring the irregularities of the Earth's rotation The variability of the earth-rotation vector relative to the body of the planet or in inertial space is caused by the gravitational torque exerted by the Moon, Sun and planets, by displacements of matter in different parts of the planet, and by other excitation mechanisms. The observed oscillations can be interpreted in terms of mantle elasticity, earth flattening, structure and properties of the core-mantle boundary, rheology of the core, underground water, oceanic variability, and atmospheric variability on time scales of weather or climate. Understanding the coupling between the various layers of our planet is also a key aspect of this research. Several space geodesy techniques contribute to the permanent monitoring of the earth's rotation by the IERS. For all these techniques, the IERS applications are only one part of their contribution to the study of planet earth and of the rest of the universe. The measurements of the earth's rotation take the form of time series of the so-called Earth Orientation Parameters (EOP). Universal time (UT1), polar motion and the celestial motion of the pole (precession/nutation) are determined by VLBI. The satellite-geodesy techniques, GPS, SLR and DORIS, determine polar motion and the rapid variations of universal time. The satellite-geodesy programs used in the IERS give access to the time variations of the earth's gravity field, reflecting the evolution of the earth's shape, as well as the redistribution of masses in the planet. They have also detected changes in the location of the centre of mass of the earth relative to the crust. This makes it possible to investigate global phenomena such as mass redistributions in the atmosphere, oceans and solid earth.
Universal time and polar motion are available daily with an accuracy of 0.5 mas, and celestial pole motion is available every five to seven days at the same level of accuracy; this estimation of accuracy includes both short-term and long-term noise. Sub-daily variations in universal time and polar motion are also measured on a campaign basis. Past data, going back to the 17th century in some cases, are also available.
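To put an accuracy of 0.5 mas in perspective, a short sketch can convert a polar-motion angle in milliarcseconds into the equivalent displacement of the pole on the Earth's surface. This is only an illustration using standard constants (mean Earth radius), not an IERS formula or product:

```python
import math

# Convert an angle in milliarcseconds (mas) to metres on the Earth's surface.
# Uses the mean Earth radius; this is a back-of-the-envelope illustration only.
EARTH_RADIUS_M = 6.371e6                      # mean Earth radius in metres
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)    # degrees -> arcsec -> mas -> radians

def mas_to_surface_metres(mas):
    """Arc length subtended at the Earth's surface by an angle given in mas."""
    return mas * MAS_TO_RAD * EARTH_RADIUS_M

# 1 mas corresponds to about 3.1 cm at the surface,
# so the quoted 0.5 mas accuracy is roughly 1.5 cm.
```

In other words, modern space-geodetic monitoring pins down the orientation of the whole planet to centimetre-level equivalents at the surface.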
Children may feel confused and scared over the recent invasion of Ukraine by Russia. Such feelings are normal, but providing support and the right kind of information can help our children feel more secure. According to Christopher Lynch, PhD, psychologist and director of Pediatric Behavioral Medicine for Goryeb Children’s Hospital, “Children benefit from honest explanations about what is happening, but those explanations must be tailored to the age and developmental level of the child”. Dr. Lynch offers some general guidelines to follow when talking with your children about the ongoing invasion: - Be honest with the facts of the situation. Children can often tell when parents are leaving out or glossing over important details. If you do not provide them with the facts, they may imagine the situation inaccurately and think that their own safety is in jeopardy. - Provide information at an age-appropriate level. Be honest with your children, but use words and concepts that your children can understand. Asking your child to repeat back what they heard you say can help identify any need for clarification. - Reassure your child that they are safe. Children need to know that the adults in their world are in control and know what to do to keep them safe. In the case of this conflict, children may need to understand that the invasion is far away and that they are well protected from it. - Act to help in any way that you can. Children can feel more control over a situation if they can help in some way. Your child may want to donate items or part of their allowance, or perhaps make a card or banner for Ukrainian children. Any of these gestures will teach your children compassion and help them to make a difference. - Monitor media exposure and limit it when necessary. There are many disturbing images being displayed on television and through other forms of media. These images may be too disturbing for children to process.
Find out from your children where they are getting their information so that you can clarify or limit it when necessary. Everyone, including children, has at least some awareness of what is happening. Dr. Lynch says parents can proactively start a dialogue with their children to assess their thoughts and feelings on the topic. “Talking to your children about the invasion will show them that it is OK to talk about difficult feelings and that we are there to help them.”
Turbidity Sensor (Water Suspended Particles) Turbidity is a measure of impurity: particles suspended or dissolved in water change its color and make it appear opaque or cloudy. The suspended particles clouding the water may be due to substances such as clay, rock flour, silt, calcium carbonate, silica, iron, manganese, sulfur, solid contamination, corrosion or industrial wastes. The turbidity sensor assesses water quality by measuring the level of turbidity, or opaqueness (non-transparency). It uses light to detect suspended particles in water by measuring the light transmittance and scattering rate, which change with the amount of total suspended solids (TSS) in the water. As the TSS increases, the liquid's turbidity level increases. This turbidity sensor has both analog and digital signal output modes. In analog mode, the output voltage (0 to 4.5 V) corresponds to a turbidity range of 0 to 4550 NTU (the turbidity measuring unit). In digital mode (selected by a sliding switch), the sensor indicates either a high or a low level of turbidity. Turbidity sensors are used to measure water quality in rivers and streams, wastewater, pipe transition, agricultural research and laboratory measurements. - Operating Voltage: DC 5V - Operating Current: 30mA (MAX) - Detection Range: 0%--3.5% (0-4550NTU) - Operating Temperature: -30℃~80℃ - Error Range: ±0.5% - Output mode 1: Analog output 0-4.5V - Output mode 2: high / low level signal
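In analog mode, the raw reading is a voltage that must be mapped onto the sensor's NTU range. As a minimal sketch only: the linear, inverted mapping below (4.5 V = clear water, 0 V = maximum turbidity) is an assumption for illustration; a real deployment should use the manufacturer's calibration curve for this specific sensor.

```python
# Hypothetical voltage-to-NTU conversion for a 0-4.5 V turbidity sensor.
# ASSUMPTION: a simple inverse-linear response over the 0-4550 NTU range.
# Real sensors are nonlinear and need per-device calibration.
MAX_VOLTAGE = 4.5   # analog output ceiling, volts
MAX_NTU = 4550      # top of the sensor's detection range

def voltage_to_ntu(voltage):
    """Map a 0-4.5 V reading to an estimated turbidity in NTU."""
    voltage = min(max(voltage, 0.0), MAX_VOLTAGE)  # clamp to the valid range
    return (1.0 - voltage / MAX_VOLTAGE) * MAX_NTU

# Example: a mid-scale reading of 2.25 V maps to 2275 NTU under this model.
```

The clamp guards against ADC noise pushing readings slightly outside the rated range; everything else about the curve shape should come from calibration against samples of known turbidity.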
You will need the speed challenge sheet, a pencil and a 1-minute timer. Complete the next set of 10 calculations. Did you do better than yesterday? You will need Multiplication and Division powerpoint, multiplication and division word problems and a pencil. Talk through the powerpoint with your child. Give your child the worksheet. Reinforce the concept of the problem-solving hand – 5 steps, one for each finger on a hand, we use in school to help solve word problems. You may need to read the question to your child, then encourage them to complete steps 2 to 5 themselves. Please send the completed sheet to your class email. You will need: Your storyboard, a phone One of our targets in English is to read out loud with expression and intonation. Intonation is the rise and fall of your voice when reading and Expression is the way you convey a feeling or emotion when reading. When you use intonation and expression you make the story come alive by using your voice. For example: When using expression if in the story there is a character that is sad you might read their part in a sad voice, if there is a character that is happy you would use a happy voice when they are speaking. If using intonation you might use your voice to convey the story plot maybe lowering your voice for quiet or scary sections or using an amazed voice if something unusual happens. Today, you will be reading your story out loud, using both expression and intonation. Remember to practise lots of times so that you become fluent and you know exactly what you are reading. Ask your adult to send a video of your story reading to the class email. English – Reading Independently read your reading book or a different story for 20 minutes on your own. You will need: laptop/tablet/phone and the habitats worksheet Living things and their habitats With an adult, recap the information PowerPoint about habitats. 
We then challenge you to research an ocean habitat, using the link here: You are not restricted to this link if you want to do some additional research with an adult. Then complete the factfile information sheet on an animal that lives in the sea/ocean. Try to ensure you are drawing scientific diagrams (that means drawing what they really look like!)
Combustible gases and vapors are those that ignite when mixed with air at a certain concentration. Each has its own Lower Explosive Limit (LEL), or Lower Flammable Limit (LFL), which is the lowest concentration of the gas in air that can sustain combustion. All combustibles pose a great danger to industrial facilities, oil and gas production, healthcare institutions and many other sites and fields. Hence, combustible gas detectors are in great demand among various companies and businesses. Definitions and classifications First of all, we have to give a detailed classification of the dangerous flammable or explosive gases and vapors. They may be divided into several families: - alcohols, the most common being methanol and ethanol; - inorganic compounds, such as hydrogen, hydrogen sulfide, carbon monoxide, etc.; - hydrocarbons, like the gases methane, propane, butane, acetylene and many others; - esters and ethers; - cyclic compounds and ketones. The lowest LEL/LFL belongs to the hydrocarbon turpentine. Less than one percent of this vapor in the ambient air is enough to start a fire. The least dangerous gas is inorganic ammonia, NH3, whose LEL is 15%. Combustible gases can be lighter or heavier than air, and when working on the safety and security of your home or workplace, you have to remember that it is important to sample the air at different levels. Variants of detection There are two main technologies of combustible gas detection. The catalytic bead sensor uses a passive principle of action: it has two platinum coils with alumina beads that are processed differently, so that one of them suppresses the oxidation of gases and the other has oxidative qualities. When an electrical current goes through the coils, the bead that oxidizes the gas is heated, its resistance changes, and this signals that the analyte is present; its concentration can be measured from the temperature difference between the two coils and beads.
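Because each gas has its own LEL, detectors usually report readings on a common %LEL scale rather than raw concentration. The sketch below illustrates that conversion; the LEL figures in the table are commonly cited approximate values in percent by volume (the 15% figure for ammonia matches the text above), but authoritative numbers should always come from a safety data sheet.

```python
# Converting a measured gas concentration (% by volume) to %LEL,
# the scale combustible-gas detectors typically alarm on.
# LEL values below are approximate, for illustration only.
LEL_PERCENT_BY_VOLUME = {
    "methane": 5.0,
    "propane": 2.1,
    "hydrogen": 4.0,
    "ammonia": 15.0,
}

def percent_lel(gas, concentration_vol_percent):
    """Express a concentration (% by volume) as a percentage of the gas's LEL."""
    lel = LEL_PERCENT_BY_VOLUME[gas]
    return 100.0 * concentration_vol_percent / lel

# Example: 1% methane by volume is 20% of its LEL. Alarm thresholds are
# commonly set around 10-25% LEL, well below an ignitable mixture.
```

Reporting in %LEL lets one alarm threshold (say, 20% LEL) protect against gases with very different absolute flammability limits.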
The first and biggest problem with such detectors is that they are very susceptible to contamination and poisoning, and the sensors may stop working due to aging. Dust, particulate matter, minerals from vaporized water, and greases and oils can settle on the active bead and coil and change its performance, or simply stop it working. Infrared gas detectors have a source of infrared light and a receiver that measures the intensity of the light at different wavelengths. The presence and amount of the monitored gases are detected and measured from the difference in this intensity. The advantages of such detectors are numerous: they are resistant to contamination and poisoning, they can work on a continuous basis with hazardous gases, and they have self-check and calibration functions. At the same time, catalytic detectors can work in heavy humidity or dusty air, where light cannot get through, and they can detect H2, which is impossible with an IR sensor, as well as most hydrocarbons. Usually, modern multi-gas analyzers include one of these sensors and thus have a combustible gas detection function. The RAE Systems MultiGas and MultiRAE families and photoionization detectors provide fire protection for personnel in various workplaces. They are all reliable, precise and durable, and their small size, portability and handy design make them highly useful in various situations.
By Lauren Musu-Gillette and James Deaton In order to measure the progress of education in the United States, it is important to examine equity and growth for students from many different demographic groups. The educational experiences of American Indian and Alaska Native (AI/AN) youth are of particular interest to educators and policymakers because of the prevalence of academic risk factors for this group. For example, the percentage of students served under the Individuals with Disabilities Education Act (IDEA) in 2013-14 was highest for AI/AN students, and in 2013 a higher percentage of American Indian/Alaska Native 8th-grade students than of Hispanic, White, or Asian 8th-grade students were absent more than 10 days in the last month. Although NCES attempts to collect data from AI/AN students in all of our surveys, disaggregated data for this group are sometimes not reportable due to their relatively small population size. Therefore, data collections that specifically target this group of students can be particularly valuable in ensuring the educational research and policy community has the information they need. The National Indian Education Study is one of the primary resources for data on AI/AN youth. The National Indian Education Study (NIES) is administered as part of the National Assessment of Educational Progress (NAEP) to allow more in-depth reporting on the achievement and experiences of AI/AN students in grades 4 and 8. NIES provides data at the national level and for select states with relatively high percentages of American Indians and/or Alaska Natives. It also provides data by the concentration of AI/AN students attending schools in three mutually exclusive categories: low density public schools (less than 25 percent AI/AN); high density public schools (more than 25 percent AI/AN); and Bureau of Indian Education (BIE) schools.
In a recently released report on the results of the 2015 NIES, differences in performance on the reading and mathematics assessments emerged across school type. In 2015, students in low density public schools had higher scores in both subjects than those in high density public or BIE schools, and scores for students in high density public schools were higher than for those in BIE schools. Additionally, there were some score differences over time. For example, at grade 8, average reading scores in 2015 for students in BIE schools were higher than scores in 2009 and 2007, but were not significantly different from scores in 2011 and 2005 (Figure 2). * Significantly different (p < .05) from 2015. NOTE: AI/AN = American Indian/Alaska Native. BIE = Bureau of Indian Education. School density indicates the proportion of AI/AN students enrolled. Low density public schools have less than 25 percent AI/AN students. High density public schools have 25 percent or more. All AI/AN students (public) includes only students in public and BIE schools. Performance results are not available for BIE schools at fourth grade in 2015 because school participation rates did not meet the 70 percent criteria. SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), various years, 2005-15 National Indian Education Studies. The characteristics of students attending low density, high density, and BIE schools differed at both grades. For example, BIE schools had a significantly higher percentage of students who were English language learners (ELL) and eligible for the National School Lunch Program (NSLP). Additionally, high density schools had a significantly higher percentage of ELL students and NSLP-eligible students than low density schools. The report also explored to what extent AI/AN culture and language are part of the school curricula. 
AI/AN students in grades 4 and 8 reported that family members taught them the most about Native traditions. Differences by school type and density were observed in responses to other questions about the knowledge AI/AN students had of their family’s Native culture, the role AI/AN languages played in their lives, and their involvement in Native cultural ceremonies and gatherings in the community. For example, 28 percent of 4th-grade students in BIE schools reported they knew “a lot” about the history, traditions, or arts and crafts of their tribe, compared to 22 percent of their AI/AN peers in high density schools and 18 percent of those in low density schools. Similarly, 52 percent of 8th-grade students at BIE schools participated several times a year in ceremonies and gatherings of their AI/AN tribe or group, compared to 28 percent of their peers at high density public schools and 20 percent of their peers at low density public schools. If you’re interested in learning more about NIES, including what the study means for American Indian and Alaska Native students and communities, you can view the video below. Access the complete report and find out more about the study here: https://nces.ed.gov/nationsreportcard/nies/ See https://nces.ed.gov/programs/coe/indicator_cgg.asp See https://nces.ed.gov/programs/raceindicators/indicator_rcc.asp American Indian and Alaska Native state-specific 2015 NIES results are available for the following 14 states: Alaska, Arizona, Minnesota, Montana, New Mexico, North Carolina, North Dakota, Oklahoma, Oregon, South Dakota, Utah, Washington, Wisconsin, and Wyoming. Less than 25 percent of the student body is American Indian or Alaska Native. In low density schools, AI/AN students represented 1 percent of the students at grades 4 and 8. 25 percent or more of the student body is American Indian or Alaska Native. In high density schools, 53 percent of 4th-graders and 54 percent of 8th-graders were AI/AN students.
In BIE schools, 97 percent of 4th-graders and 99 percent of 8th-graders were AI/AN students.
First Nations Tribes have lived and hunted in the Black Hills for millennia. After conquering the Cheyenne in 1776, the Lakota Tribe took over the territory of the Black Hills (pictured here), which became central to their culture. In 1868, the U.S. government signed the Fort Laramie Treaty of 1868, establishing the Great Sioux Reservation west of the Missouri River and exempting the Black Hills from all white settlement forever. These are the sacred Black Hills of South Dakota, a land revered as sacred by the First Nations Lakota Tribes. The area has three original Tribal names: the Lakota call them Ȟe Sápa, the Cheyenne know them as Moʼȯhta-voʼhonáaeva, and the Hidatsa call them Awaxaawi shiibisha. Unfortunately, when settlers discovered gold there in 1874, as a result of George Armstrong Custer’s Black Hills Expedition, miners swept into the area in a gold rush. The US government took back the Black Hills, and in 1889 it reassigned the Lakota, against their wishes, to five smaller reservations in western South Dakota, selling off 9 million acres of their former land.
Archaea and bacteria are generally single-celled organisms which represent two of the three domains of life. Although these two domains are now recognized as independent, this was not always the case. Previously these groups were referred to as Eubacteria and Archaebacteria. It was in the 1970s that archaea were discovered to be a completely new group of organisms [1]. The term Archaebacteria became inappropriate once it was discovered that archaea are actually quite different from bacteria, both genetically and in terms of their biochemistry, even though under the microscope the two types of organisms look rather similar. Archaea are unique organisms capable of living in very extreme environments, such as deep-sea vents, hot springs and very acidic waters. Like bacteria, they have been found in the digestive tracts of animals such as cows and marine life. More recently, these organisms have been found to inhabit less extreme environments as well, such as the plankton of the open sea [1]. Bacteria represent a large group of prokaryotic organisms that are thought to be one of the earliest life forms. Although bacteria are not always thought of in a positive manner, since they have been the cause of many diseases, they do have many applications. For example, some bacteria are capable of producing antibiotics; some live in symbiosis with eukaryotic organisms; and others are used in the production of dairy products. Some characteristics of bacteria are that they are extremely small in size and can reproduce rapidly. Archaea and bacteria are both very unique prokaryotes which are rather different from each other despite what was once believed. Further research in this area of biology is necessary to study both of these organisms and uncover the true lineages of the Archaean domain of life. Title Image Credit: flickr.com © BrainMass Inc. brainmass.com September 15, 2019
The world is exploring greener alternatives to fossil-fuel-powered cars and trucks, for example battery-electric vehicles. Another 'green' technology with great potential is hydrogen power. However, a major obstacle has been the size, complexity, and expense of the fuel systems. Now an international team of researchers, led by Professor David Antonelli, Chair in Physical Chemistry at Lancaster University, has discovered a new material made from manganese hydride that offers a solution. The new material would be used to make molecular sieves within fuel tanks, which store the hydrogen and work alongside fuel cells in a hydrogen-powered 'system'. The material, called KMH-1 (Kubas Manganese Hydride-1), would enable the design of tanks that are far smaller, cheaper, more convenient and more energy dense than existing hydrogen fuel technologies, and that significantly out-perform battery-powered vehicles. Professor Antonelli, who has been researching this area for more than 15 years, said: “The cost of manufacturing our material is so low, and the energy density it can store is so much higher than a lithium ion battery, that we could see hydrogen fuel cell systems that cost five times less than lithium ion batteries as well as providing a much longer range – potentially enabling journeys up to around four or five times longer between fill-ups.” The material takes advantage of a chemical process called Kubas binding, which enables the storage of hydrogen by distancing the hydrogen atoms within an H2 molecule, and works at room temperature. This eliminates the need to split, and bind, the bonds between atoms, processes that require high energies, extremes of temperature and complex equipment to deliver.
The KMH-1 material also absorbs and stores any excess energy, so external heating and cooling is not needed, which eliminates the requirement for heating and cooling equipment in vehicles, resulting in systems with the potential to be far more efficient than existing designs. The researchers' experiments show that the material could enable storing four times as much hydrogen in the same volume as existing hydrogen fuel technologies. This will give vehicle manufacturers the flexibility to design vehicles with up to four times the range, or allow them to reduce the size of the tanks by up to a factor of four. Although vehicles, including cars and heavy goods vehicles, are the most obvious application, the researchers believe there are many other applications for KMH-1. According to Professor Antonelli, “This material can also be used in portable devices such as drones or within mobile chargers so people could go on week-long camping trips without having to recharge their devices. The real advantage this brings is in situations where you anticipate being off grid for long periods of time, such as long haul truck journeys, drones, and robotics. It could also be used to run a house or a remote neighbourhood off a fuel cell.” The properties of the new material also make hydrogen fuel cells an attractive alternative to lithium batteries in some applications, especially those involving long ranges. The technology has already been licensed by the University of South Wales to a spin-out company called Kubagen, partly owned by Professor Antonelli.
It’s been a while since I read that article in Education by Numbers that stated: - mindlessly reading book after book does not benefit children very much - it is comprehension that is the key to learning gains In fact, a voracious reader who understands less than 65% of the content in the books he reads is no better off than a student who does not read much at all. Back then, I didn’t know what else to do except work on comprehension. Now, there is a cool new programme that makes it all so much easier – Accelerated Reader. What is Accelerated Reader? Accelerated Reader is an online reading programme that helps children improve their reading skills. Here’s what it does: - helps children choose suitable books based on their reading level - encourages children to read independently - quizzes children on what they have read The quizzes test both comprehension and vocabulary so we know how much of the book they understood and if they know the meaning of the more challenging words in a particular book. How does Accelerated Reader work? The kids start with an online adaptive test to determine their current reading level. This is called the Star Reading test. This test provides a reading level range called a ZPD, or Zone of Proximal Development. ZPD is a concept developed by Lev Vygotsky (psychologist) and refers to the difference between what a learner can do without help and what he or she can achieve with guidance and encouragement from a skilled partner. – Simply Psychology Children are encouraged to read books within their ZPD range because these books provide the appropriate level of stretch and challenge. These books are more challenging than what the children can read easily, but not so hard that they become discouraged. After reading a book, the kids can complete a short comprehension quiz to see how much of the book they understood. Based on how many questions they answered correctly, they will be given a percentage. The goal is to score 85% and above. 
Some books will also be accompanied by a vocabulary quiz which will check your child's understanding of the more challenging words from the book. Teachers can set a reading target for the kids to achieve within a certain period of time. Each book they read (depending on its length and reading level) will add points towards their reading target. This motivates the kids to read more to hit their targets. What Books are Available on Accelerated Reader? The best thing about Accelerated Reader is that it uses books that are already readily available. You don't have to purchase a special series of books, and the kids are free to read many of their favourite series. The range of books available on Accelerated Reader is quite extensive. To find out if a particular book is on the programme, you can look it up using arbookfind. After each period, the children can take the Star Reading test again to see if their reading level has improved.
PhET Simulation: Nuclear Fission Journal Entry # 33

After completing this activity, you will be able to explain the concept of a “chain reaction” and how it applies to the nuclear fission of uranium. You will also be able to explain the purpose of the control rods in a nuclear reactor. Click to run the simulation.

Fission - One Nucleus: Select the “Fission: One Nucleus” tab at the top left. Experiment with shooting the neutron gun and watching what happens. Then answer the questions below.
1) What happens when the U-235 nucleus is “hit” with a neutron? There are two steps here- describe them both in as much detail as you can.

Select the “Chain Reaction” tab at the top. Experiment with changing the settings and shooting the neutron gun, and watch what happens. Then answer the questions below.
2) Set the initial number of U-235 nuclei to 100. What happens when you fire the gun?
3) Explain what makes this a “chain reaction”.
4) Set the initial number of U-238 nuclei to 100. Explain what happens when you fire the gun and whether this is a chain reaction or not.
5) Set the initial numbers of U-235 nuclei and U-238 nuclei to the numbers in the table below. Record your results.

   U-235:                                   100   70    50    30    0
   U-238:                                   0     30    50    70    100
   % of U-235 fissioned after 1 firing:     __    __    __    __    __
   # firings required to fission all U-235: __    __    __    __    N/A

6) What happens to the reaction as the proportion of U-238 nuclei increases?
7) If you were trying to design the most efficient fission reactor possible, what ratio of U-235 to U-238 would you want? Explain why.

Select the “Nuclear Reactor” tab at the top. Experiment with changing the settings and firing the neutrons, and watch what happens. Then answer the questions below.
8) The bar graphs on the right of the display show the “Power Output” and the “Energy Produced”. What is the difference between these two quantities?
9) Watch the fission reactions very closely as they happen. Specifically, watch what happens to the loose neutrons after the reaction.
a) What happens if the neutrons hit another nucleus?
b) What happens if the neutrons hit a control rod?
10) Compare the chain reaction that occurs when the control rods are inserted further into the reactor versus when they are pulled all or mostly out of the reactor.
11) If the purpose of a nuclear reactor in a power plant is to produce energy, why are there control rods?

Why should we learn about nuclear energy? How does this affect our lives? Watch the videos below.

Activity by K. Gates from phet.colorado.edu/simulations.
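The effect the table in question 5 probes, namely that adding U-238 quenches the chain reaction, can also be seen in a toy simulation. This sketch is not part of the PhET activity; it is a deliberately simplified model in which each free neutron strikes one randomly chosen remaining nucleus, a U-235 nucleus fissions and releases three new neutrons, and a U-238 nucleus simply absorbs its neutron without fissioning.

```python
import random

def chain_reaction(n_u235, n_u238, neutrons_per_fission=3, seed=0):
    """Toy model of the 'Chain Reaction' tab: returns the fraction of
    U-235 nuclei fissioned after one firing of the neutron gun."""
    if n_u235 == 0:
        return 0.0
    rng = random.Random(seed)
    u235, u238 = n_u235, n_u238
    free_neutrons = 1  # one neutron from the gun
    while free_neutrons > 0 and u235 > 0:
        free_neutrons -= 1
        # chance the neutron finds a U-235 nucleus among what remains
        if rng.random() < u235 / (u235 + u238):
            u235 -= 1                            # fission!
            free_neutrons += neutrons_per_fission
        else:
            u238 -= 1                            # absorbed; chain not extended
    return (n_u235 - u235) / n_u235

# A larger share of U-238 tends to quench the chain reaction:
for mix in [(100, 0), (70, 30), (50, 50), (30, 70), (0, 100)]:
    print(mix, chain_reaction(*mix))
```

With no U-238 present, every neutron finds a U-235 nucleus and the whole sample fissions; as the U-238 fraction grows, more neutrons are absorbed before they can extend the chain, mirroring what the simulation shows.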
When parenting, consequences are used to support children in learning to make the right choices, the choices that will lead to more positive outcomes. At times parents can forget this and start to use consequences as punishment, with little learning involved. Consequences, if used well, will support a child's learning and help them become responsible adults who make good choices most of the time (none of us are perfect!). The following tips may help you to implement consequences wisely:
- Never put a negative consequence in place for your child's behaviour until you have ensured all the positive parenting behaviours are in place. These consist of praise and encouragement; positive language; assertive listening; clear and direct communication; and positive reward, focusing on what the child is doing well more so than on what the child is not achieving. If you feel you have mastered all these techniques and put them into practice every day, then you can explore consequences as the next step.
- The greatest mistake parents make when using consequences is that they make them too big. They often cannot follow through on them. This tells you the consequence was not planned and therefore was not effective.
- All consequences must be planned. Children must be told in advance of the behaviour that is not acceptable and that when they choose to behave in such a way there will be a consequence. Talk with your child about what a fair consequence would be, and agree on it with them.
- Children have to choose, and not be forced out of fear to make the right choice. When children do make good choices in their behaviour, it is crucial to praise them for it. Never miss a praiseworthy action; they are small but meaningful in the moment.
- Never shift the boundaries of what is agreed.
If you decide on the spur of the moment on a new, harsher consequence, then you have lost the battle immediately, and what your child will learn is that adults are dishonest and do not follow through. If your child feels they are being treated unfairly then they will not choose to behave; they will most likely choose to act in such a way as to let you know how unfairly they feel treated.
- Choose reasonably small consequences to start with. Take away an activity from your child, such as their favourite toy for one hour; if you start with the biggest consequence then you have no room to move. First remove one small but meaningful privilege, and then, if your child continues to choose to misbehave, remove another.
- Do not be ruled by your mood. Parents can often let behaviours go when they are in good form and implement harsh consequences when in poor form. Really this teaches children nothing other than that adults have power and can use it in whatever way they choose. This is not the message you wish to send to your child. If you over-punish then you need to step back, apologise to your child and start over with them. Sit your child down and tell them what the issue is. Hear from your child what the challenge is for them. Then make and agree a plan together.
- For children under three years of age it is much more appropriate to ensure you are parenting with positive rewards. Young children will not understand consequences and will most likely just be left feeling hurt and scared. Children have to be old enough to reason with.
- Put consequences in place for you as a parent also. We are our children's most effective role models. If you are not modelling the correct behaviours then talk with your child about this too. There are no double standards in parenting: you also need to choose how to behave. Often when we parent we misbehave in ways we never would in the workplace.
Give your children some power to help you recognise the negative choices you make and tell them what you are going to do about it. - Team work is what families are about. Talk with each other, understand each other’s needs and work together to formulate new plans and new ways to live and learn from each other. This ’10 Ways to’ article is by One Family’s Director of Children & Parenting Services, Geraldine Kelly, as part of our weekly ’10 Ways to’ series of parenting tips. You can read the full series here. Join Geraldine on Facebook on this and other parenting topics for a weekly Q&A live in our One Family Parenting Group which is a closed Facebook group (meaning that only members can read posts) that anyone can join. Post your questions and share your experiences. Find out more about our parenting skills programmes and parent supports. For support and information on these or any related topics, call askonefamily on lo-call 1890 66 22 12 or email [email protected].
Why learn about vectors? In order to do most of the physics in Physics I and Physics II, you need to have a good grasp of the basics of vectors. Combining them isn’t terribly difficult, but takes a bit more than just adding them algebraically. As a result of working through this course, you will learn how and when to use vectors. In order to study motion in more than one dimension, we generally break the position, velocity, and acceleration into pieces called components. These components point along our coordinate system. If you can find the sides of a triangle, you know how to find components. While finding components initially seems like a chore, it actually helps simplify the situation. The horizontal pieces and the vertical pieces can be treated separately, and then combined together at the end of the problem. This is a big help in problems involving two dimensional motion. You can treat a projectile problem relatively easily by first finding the components of the launch velocity. Let’s learn how to describe these things, break them into pieces, and put them back together again. After completing this course, you will be able… - To describe what a vector is. - To define the terms vector, magnitude, component. - To add two vectors using the head-to-tail method. - To add two vectors using the parallelogram method. - To multiply a vector by a scalar. - To break a vector into components. - To find the magnitude and direction of a vector when given its components. - To add two or more vectors using components.
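The component bookkeeping described above, breaking a vector into horizontal and vertical pieces, adding the pieces separately, and recombining them at the end, can be sketched in a few lines of code. This is an illustrative sketch rather than part of the course materials; the function names (`components`, `add_vectors`) and the degree-based angle convention are choices made here for clarity.

```python
import math

def components(magnitude, angle_deg):
    """Break a vector (given by magnitude and angle from the +x axis,
    in degrees) into its x- and y-components."""
    angle = math.radians(angle_deg)
    return magnitude * math.cos(angle), magnitude * math.sin(angle)

def add_vectors(*vectors):
    """Add vectors by components; each vector is a (magnitude, angle_deg)
    pair. Returns the resultant's magnitude and direction in degrees."""
    x = sum(components(m, a)[0] for m, a in vectors)
    y = sum(components(m, a)[1] for m, a in vectors)
    magnitude = math.hypot(x, y)            # sqrt(x**2 + y**2)
    direction = math.degrees(math.atan2(y, x))
    return magnitude, direction

# A projectile launched at 20 m/s, 30 degrees above horizontal: the
# horizontal and vertical pieces can now be treated separately.
vx, vy = components(20, 30)
```

Notice how `add_vectors` is just the head-to-tail method done numerically: each vector contributes its own x- and y-pieces, the pieces are summed independently, and the magnitude and direction of the resultant are recovered at the very end.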
How to Keep Your Brain Healthy?

The brain is the most important and most complex part of the body. Without our brains, we could not stand, walk, talk, eat, breathe, think or sleep. It controls our entire body and all that we do. Good brain health is important for anyone at any age. A healthy brain is your greatest asset, but it is often taken for granted until it has problems. Maintaining good brain health is a lifelong process, and it is never too late, or too early, to start.

Good nutrition is the foundation of good brain health. Inadequate nutrition can drastically affect brain function, resulting in short-term memory loss, poor concentration, and attention deficits. Make sure to eat a healthy, well-balanced diet rich in omega-3 fatty acids, antioxidants, vitamins and minerals. Eating a diet rich in antioxidants and omega-3 fats nourishes the brain and helps prevent cognitive decline as you age.

Exercise Your Brain

Just like your muscles, you need to use your brain or you lose it. The brain is the most adaptable and modifiable organ in your body. Learning and experiencing new things is a great way to challenge your brain. Studies have shown that learning a new language can improve brain function and protect against Alzheimer's disease. Crosswords, puzzles and board games can give your brain a good workout. Remember, you need to constantly challenge your brain to keep it agile.

Stay Physically Active

Regular physical activity is a great way to keep your brain healthy. Studies show that even 30 minutes of brisk walking daily can improve blood flow to the brain. The brain needs good blood flow to deliver vital nutrients and oxygen and take away waste products. It has been found that exercise stimulates the formation of new brain cells associated with memory and learning. Try to exercise at least 30 minutes a day on a regular basis.

Prolonged exposure to high levels of stress can damage the brain.
Chronic stress increases the stress hormone cortisol, which affects many brain functions. Stressful life events can harm your brain's memory and learning capacity by reducing the volume of gray matter in the brain. In fact, chronic stress is even associated with the development of Alzheimer's disease. You can manage stress by using relaxation techniques such as yoga or meditation. Regular, moderate exercise is another effective way to reduce stress.
- Learn new skills and engage in new experiences.
- Stay socially engaged with friends, family, and community groups. Active social engagement plays an important role in brain health.
- Make sure you are getting a daily dose of healthy fats like fatty fish, olive oil, avocado, raw nuts and seeds.
- Avoid smoking and excessive alcohol consumption.
- Try to keep a healthy body weight.
- Exercise your brain by challenging it with new activities.
- Make sure to get adequate sleep every night. Sleep has a huge influence on brain health.
- Protect your head against injury.
- Try to think positively and avoid stress.
- Find time to laugh and have fun with your friends and family.
In the above paper from our Pediatric Retinal Research Lab, we have explored the effects of VEGF-trap on a mouse model used to understand retinal neovascular growth in the premature retina. Many of the most blinding retinal diseases in people involve situations that create oxygen starvation in the retina. Once this happens, not only do neural cells of the “Neural Retina” perish, but growth factor concentrations change in an attempt to get oxygen to these areas of the retina by triggering the growth of more blood vessels into them. Elevations of several growth factors likely contribute to the growth of new vessels, including VEGF, or “Vascular Endothelial Growth Factor”. The cells that form the tubular lumen of a blood vessel are called endothelial cells. These cells line the inside of the vessel, and they can respond to VEGF through cell surface receptor proteins (VEGF receptors) that bind the VEGF produced and excreted by other cells. Some VEGF is required to keep endothelial cells happy and living. Taking it away completely causes endothelial cells to self-destruct. Higher levels of VEGF can help to activate endothelial cells of existing blood vessels to divide and start growing new branches from the pre-existing vessels. Other non-neural cells of the retina, called glial cells, have the ability to sense the level of oxygen in the retina, and if they sense low oxygen they produce larger amounts of VEGF. In children and adults with diabetic retinopathy, older adults with AMD (Age-related Macular Degeneration), and premature babies with ROP (Retinopathy of Prematurity), regions of the neural retina become starved of oxygen, resulting in the production of higher levels of VEGF. This drives the formation of new blood vessels, a process called neovascularization.
While this can bring a blood supply to the area and more oxygen, neovascular growth also results in blood vessels that are not as strong and robust as the blood vessels formed during normal retinal development. When all goes according to plan, normal retinal vessels are completed by the time human babies are born at full term. When babies arrive prematurely, the formation of the blood supply to the neural retina may be partially or mostly incomplete, depending on how premature the time of birth. Human babies get at least one important growth factor for their retinal vasculature from the mother's blood supply. As blood vessels normally develop by spreading from the optic nerve, near the middle of the retina, toward the periphery of the retina, growth is interrupted and the peripheral retina does not get its badly needed blood supply. However, photoreceptor cells still try to mature and become active for detecting light, and they also create a demand for oxygen. This results in glial cells sending out VEGF signals to attract the growth of vessels into the oxygen-starved peripheral retina. The new vessels that result can be driven to grow in a rather disorganized fashion, and the cells that normally coat the endothelial cells, called pericytes, fail to organize around the neovessels. Thus, the neovessels are weak and tend to be rather leaky. These growing edges of leaky vasculature can leak, clot, form fibers, and contract and tear the retina away from the back of the eye. This process leads to blindness. For different reasons, oxygen starvation can also occur in diabetic retinas and AMD patients' retinas. Again, elevated VEGF levels can attract neovascular growth, with similar leakiness, fibrosis, retinal edema (fluid-based swelling), and loss of retinal function (blindness). The discovery of VEGF and its role in blood vessel growth is rather recent, dating to the 1990s, and drugs in the form of antibodies to VEGF were developed to bind and block the action of excess VEGF.
At first developed to try to block the formation of blood vessel supplies to tumors, VEGF-blockers or traps have found growing use in treating retinal edema and neovascular growth in AMD and, more recently, diabetic retinopathy. There is, understandably, interest in using VEGF-blockers in ROP. However, VEGF is also a growth factor required to grow the normal retinal vasculature and to keep the mature vasculature stable. Some ophthalmologists around the world have been testing these drugs out in ROP eyes. Unfortunately, we really have no guidance on what the detrimental effects of VEGF-traps could be in the immature, premature baby's eye. This paper from our Pediatric Retinal Research Lab represents some of the first exploration of using a VEGF-trap in the mouse oxygen-induced retinopathy model. In this popular research model, newborn mice develop central areas of retina that lose blood supply, creating oxygen-starved retina. This, in turn, elevates VEGF and triggers a neovascular growth response. We learn in this paper that the timing of using VEGF-traps is important in premature eyes, because VEGF-traps can also drop the VEGF concentration very low, maybe too low, and impede the repair and recovery of the vasculature of a developing retina. Thus, we need to proceed with caution in evaluating the use of these drugs in premature baby eyes, even though they are now used quite often in the eyes of elderly adults with AMD.
Renal disease (also known as diabetic nephropathy) is a microvascular complication caused by diabetes. According to research, 40% of diabetic people will develop nephropathy at some stage. Individuals who have either Type 1 or Type 2 diabetes are at greater risk if they have high blood glucose and high blood pressure problems for more than 10 years. By maintaining long-term blood glucose and blood pressure levels within acceptable ranges, one can decrease the risk of nephropathy. What causes diabetic nephropathy? Kidneys have very thin and fragile blood vessels which are responsible for filtering waste from your blood. Glucose is a relatively large molecule compared with these blood vessels, and high levels of glucose can damage them. Other health issues, such as high cholesterol and high blood pressure, will cause further damage to your vessels. If you smoke or drink to excess, the risk increases further. To protect your kidneys from renal disease, a change in lifestyle is needed: first control your blood glucose and blood pressure levels, and then stop smoking or drinking immediately.
Fall is coming, and for a large part of the country, that can mean one thing: local high school and college football! Every year, thousands of young men put on the pads and helmets and step out onto the gridiron to compete. Unfortunately, there are also some serious consequences for these young athletes. For instance, every year there are about 300,000 sport-related concussions suffered by athletes. And that is just for those in high school, not college or professional football players! So for those concerned students and their parents who are thinking about high school football and worrying about the lasting ramifications, here are some things that you need to know about the sport and concussions. What is a concussion? A concussion, also referred to as a mild traumatic brain injury, can result from several different types of incidents, most notably sports injuries and automobile accidents. It most commonly results from a physical strike to the head that causes the brain to hit the inner wall of the skull, disrupting the brain and how it functions. Many with concussions may describe it as feeling like they have “had their bell rung,” and they may have difficulty thinking clearly or remembering things or events. What does this have to do with Chronic Traumatic Encephalopathy (CTE)? This is a brain disease that afflicts military vets and athletes who have had repeated brain trauma such as concussions. Over time, multiple traumas cause a protein (tau) to cluster and clump in the brain. As this happens, healthy brain cells die and the patient begins to suffer from decreased mental capacity. CTE can also greatly affect a person's mood; violent mood swings, from depression to aggression, can be attributed to this affliction. How should a concussion be treated? One of the main methods for treating a concussion is time. By stepping away from the activity that caused the concussion, the patient has a better chance of recovering.
This usually involves rest for several days and close supervision to ensure that the patient is not dizzy, disoriented, or nauseous. It is also important that the concussion is properly diagnosed as soon as possible. In sports, this would be done by an experienced athletic trainer with special preparation in dealing with concussions. Unfortunately, only about one third of all high schools actually employ such a trainer. That means the diagnosis will possibly be left up to an emergency room doctor who may not know all of the causes of the trauma and who may not be able to follow up with the patient in the long term as he or she recovers. The problems of concussions and chronic traumatic encephalopathy (CTE) have become more and more well known in recent years. Following the groundbreaking lawsuit filed against the NFL, there has been more and more emphasis placed on stopping this problem and helping younger people avoid long-term damage. Parents should be aware of this before making the decision to let their children play a contact sport that could result in concussions. If you feel that you or a loved one has been misdiagnosed or could have a medical malpractice case, contact the Law Offices of Wolf & Pravato today! Post updated on September 21, 2017
December 5, 2010 — (BRONX, NY) — Scientists at Albert Einstein College of Medicine of Yeshiva University have made an unexpected finding about the method by which certain genes are activated. Contrary to what researchers have traditionally assumed, genes that work with other genes to build protein structures do not act in a coordinated way but instead are turned on randomly. The surprising discovery, described in the December 5 online edition of Nature Structural and Molecular Biology, may fundamentally change the way scientists think about how cellular processes are synchronized. All cells contain protein complexes that perform essential functions, such as producing energy and helping cells divide. Assembling these multi-protein structures requires many different genes, each of which codes for one of the proteins that, collectively, form what's known as the protein complex. Ribosomes, for example, are the vitally important structures on which proteins are synthesized. (The ribosomes of humans and most other organisms are composed of ribonucleic acid (RNA) and 80 different proteins.) Scientists have long assumed that genes involved in making such complex structures are activated in a highly coordinated way. “What we found was rather astonishing,” said Robert Singer, Ph.D., professor and co-chair of anatomy and structural biology, professor of cell biology and of neuroscience at Einstein and senior author of the study. “The expression of the genes that make the protein subunits of ribosomes and other multi-protein complexes is not at all coordinated or co-regulated. In fact, such genes are so out of touch with each other that we dubbed them ‘clueless’ genes.” Gene expression involves transcribing a gene's deoxyribonucleic acid (DNA) message into molecules of messenger RNA, which migrate from the nucleus of a cell into the surrounding cytoplasm to serve as blueprints for protein construction.
To assess the coordinated expression of particular genes, Dr. Singer and his colleagues measured the abundance of messenger RNA molecules transcribed by those genes in individual cells. The messenger RNA molecules made by clusters of clueless genes exhibited no more coordination than the messenger RNA from totally unrelated genes did. The “clueless” genes coding for ribosomes and other multi-protein structures are referred to as housekeeping genes, since their essential tasks require them to be “on call” 24/7, while other gene clusters remain silent until special circumstances induce them to become active. The researchers found that these induced genes, in contrast to the “clueless” housekeeping genes, act in an expected (well-regulated) way. For example, growing yeast cells in nutrient media containing the sugar galactose triggered the highly-coordinated expression of the three genes required to metabolize galactose. “Our findings show that for a major class of genes – those housekeeping genes that make ribosomes, proteasomes and other essential structures – cells employ very simple modes of gene expression that require much less coordination than previously thought,” said Saumil Gandhi, the lead author of the study. “Those genes become active randomly, with each member of a functionally related gene cluster encoding a protein while having no clue what the other genes in the cluster are doing. Yet the cell somehow manages to deal with this randomness in successfully assembling these multi-protein complexes.” The paper, “Transcription of functionally related constitutive genes is not coordinated,” appears in the December 5 online edition of Nature Structural and Molecular Biology.
Law and custom in seventeenth-century New England gave male property owners authority over the women, children, and other dependents of their families. Women who spoke up or stood out merited suspicion, and many were accused, prosecuted, and occasionally executed for the crime of witchcraft. Women could be excommunicated, as Ann Hibben was in 1641, for “usurping” her husband's role, or, as Anne Yale Easton was in 1644, for expressing “unorthodox opinion.” During the notorious Salem Village trials of 1692, magistrates put credence in rampant accusations of witchcraft, hanging 19 people, 14 of them women. Anne Hutchinson, a prominent Boston woman, was tried and banished from Massachusetts in 1637 after attracting a religious following and “casting reproach upon the faithful Ministers of this Country.” Although Hutchinson was never accused outright of being a witch, the delivery of a deformed, stillborn infant by one of her female associates in 1638 was interpreted by the Puritan fathers as the Devil's work. This illustration from an eighteenth-century chapbook (a cheaply printed pamphlet) presented a “monstrous” birth as a sign of witchcraft. Source: John Ashton, Chap-books of the Eighteenth Century (1882)—Prints and Photographs Division, Library of Congress.
A begonia plant is monoecious, meaning it bears both male (staminate) and female (pistillate) flowers on the same plant. Begonias belong to a class of plants called angiosperms, and it is in the flower of an angiosperm that sexual reproduction takes place. Because angiosperms have reduced male and female gametophytes, less time passes between pollination and fertilization. The stamen, which forms the male reproductive unit, consists of the anther and the filament. Usually the anther has two lobes. The anther is where pollen grains are produced. Pollen is a fine powder that contains the microgametophytes of seed plants. These microgametophytes produce the male gametes, or sperm cells. When an anther is young, it consists of a mass of undifferentiated, thin-walled cells enclosed by an epidermis. As the anther matures, it develops into four lobes joined by a sterile tissue called the connective. Within each lobe is an elongated chamber known as a pollen sac. The female reproductive unit, known as the pistil, consists of three basic parts. The basal swollen part is called the ovary. The style is the cylindrical, narrow extension of the pistil. At the tip of the style is the stigma, a terminal receptive disc. The ovule starts developing as a tiny swelling on the placenta, forming a thick cell mass called the nucellus. As growth and development continue, the nucellus is elevated on the funiculus, a short, stalk-like structure. As the ovule develops further, protective layers grow from its base (the chalaza) and surround the nucellus, except for a narrow opening called the micropyle. This opening serves as the entryway for a pollen tube into the ovule.
Petals and Sepals
Petals are the brightly colored portion of a flower surrounding the stamen and pistil. Considered the showpiece of a flower, petals attract not only humans, but also insects and birds.
Sepals, which are usually green, typically lie below the petals and are leaf-like. Sepals serve as a temporary protective cover for an unopened flower. When a flower's petals are ready to unfold, the sepals fold back. According to the website Science.Jrank.org, wildflower begonias--which are not related to cultivated begonias--lack petals. Although wildflower begonias don't have petals, their colorful sepals look like petals. What's more, plant breeding has produced numerous begonia varieties with showy flowers.
Learning about food
During our health lesson we were learning about food hygiene, the eatwell plate, and cutting techniques (claw grip and bridge hold). We also learned how to prepare food. We made cous-cous, frozen yogurt, smoothies and orange sorbet. FOOD HYGIENE: We learned lots of ways to wash our hands using the following technique. First of all you put soap on your hands, then you rub your hands together going back and forward. Then you clench your hands together, rub your thumbs to make sure they're clean, rub one of your hands on top of the other and then vice versa. Finally wash all the soap off your hands and you will have sparkling clean hands! CUTTING TECHNIQUES: We learned different ways to cut fruit and vegetables; these ways are also very safe. We learned two different ways to cut our food. The first one we learned was the Claw Grip. The other was named the Bridge Hold. THE EATWELL PLATE: The eatwell plate is split into sections, e.g. fruit and vegetables, proteins, fats, milk and dairy, carbohydrates and sugary foods. It helps us to improve our diets by giving us guidance on how much of each thing to eat. 5 A DAY: We know about the importance of eating our 5 a day and have learned lots of tips to increase this, such as drinking juice or smoothies, or adding a spoonful of dried fruits such as raisins to our breakfast. We kept a 5 a day diary at home. COOKING FUN!: We enjoyed making a variety of different dishes using fruit and vegetables. We learned some tips such as how to look for reduced-salt options in tinned veg, and how to avoid tinned fruit due to hidden sugar in the syrup. We flavoured sweet things with vanilla and honey instead of sugar and we used herbs and spices instead of salt in our savoury dishes. We tasted everything that we made!
Astronomers have announced the detection of the imprint of primordial 'gravitational waves' that originated in the Big Bang that created our Universe 13.8 billion years ago. The discovery has been hailed as a milestone in science, but the concepts involved will be unfamiliar to many people. Help is at hand. Here are some FAQs on gravitational waves and their answers. What is the significance of the BICEP2 announcement? Scientists will be unravelling the consequences of this discovery for years. But some major implications are already clear: - Albert Einstein predicted 'gravitational waves' nearly 100 years ago, but he also calculated that they would be extremely feeble, so much so that he thought they would never be detected. BICEP2's findings are the most convincing evidence — short of direct detection — that gravitational waves actually exist. - The waves are the confirmation of a cornerstone theory of the standard picture of cosmology. This theory, called inflation, says that during the first moments of its existence, the Universe underwent a brief period of exponential expansion. - During inflation, the Universe's temperature — and thus the energies reached by elementary particles — were trillions of times higher than can be achieved in any laboratory, even in particle accelerators such as the Large Hadron Collider at CERN, near Geneva, Switzerland. - Because inflation is a quantum phenomenon and gravitational waves are part of classical physics, gravitational waves establish a link between the two, and could be the first evidence that gravity has a quantum nature just like the other forces of nature (see 'How to see quantum gravity in Big Bang traces'). What are gravitational waves? Gravity, according to Einstein's general theory of relativity, is how mass deforms the shape of space: near any massive body, the fabric of space becomes curved. But this curving does not always stay near the massive body. 
In particular, Einstein realized that the deformation can propagate throughout the Universe, just as seismic waves propagate in Earth's crust. Unlike seismic waves, however, gravitational waves can travel in empty space — and they do so at the speed of light. If you could watch a gravitational wave head-on as it moves toward you, you would see it alternately stretching and compressing space, in the up–down and left–right directions (see video). Is inflation the only thing that can produce gravitational waves? No. Anything that's massive and is undergoing violent acceleration is supposed to produce them. In practice, the only gravitational waves that we might be able to directly measure would be those from cataclysmic events such as two black holes colliding and fusing into one. Several observatories around the world are trying to pick up the distant noise of such black-hole mergers. Why couldn't gravitational waves be measured directly, but only detected via a radiotelescope? The gravitational waves that originated during inflation are still resonating throughout the Universe. But they are probably now too feeble to measure directly. Instead, scientists look for the imprint the waves have left in the broth of elementary particles that pervaded the Universe around 380,000 years after the Big Bang, which we see via the 'cosmic microwave background'. Observations of the microwave background radiation are made using telescopes that detect radio waves, and so the 'ripples' in the background caused by gravitational waves could only be detected by a radiotelescope. Why was the discovery made at the South Pole? The Amundsen–Scott South Pole Station, which hosts BICEP2, sits on the Antarctic ice sheet at more than 2,800 meters above sea level, so the atmosphere is thin. The air is also very dry, which is helpful as water vapor blocks microwaves. 
And Antarctica is also virtually uninhabited, so there is no interference from mobile phones, television broadcasts, and the rest of our electronic paraphernalia.
Because water is commonly available in fairly pure form, it has historically been used as a reproducible standard for defining physical quantities. Most of those old standards using water have been superseded by more precise standards. However, it is still interesting and instructive to trace the ways in which water has been used as a measurement standard. Probably the most familiar such use of water is in connection to the temperature scale. The Celsius (sometimes called Centigrade, though use of that term is no longer considered correct) temperature scale was originally defined so that the freezing point and boiling point of pure water, both at one atmosphere pressure, were 0 and 100 degrees, respectively. This definition ceased to be valid with the adoption of a new International Temperature Scale in 1990. The thermodynamic definition of temperature is based solely on the behavior of an ideal gas; also one fixed point is needed to set the size of the degree. The fixed point used is the "triple point" of water, which is the pressure/temperature condition where solid, liquid, and vapor all coexist. This is used because the triple point is a unique condition that can be precisely reproduced; water's triple point is specifically chosen because it is relatively convenient to realize in the laboratory. The temperature of the triple point of water is defined to be exactly 273.16 kelvins (where 0 K is the absolute zero of temperature). While this completely determines the thermodynamic temperature scale, temperature measurements require approximating the thermodynamic temperature by a "practical" scale that contains other fixed points at which instruments can be calibrated. Temperatures are assigned to these points based on the best scientific estimate of their true thermodynamic temperatures, and procedures are specified for interpolating between the fixed points. 
While previous temperature scales used the atmospheric boiling point of water as a fixed point (assigning it 373.15 K, which is 100 degrees Celsius), the reproducibility of that point is not as good as other choices. The new International Temperature Scale adopted in 1990 (known as ITS-90) covers this region with the solid/liquid equilibrium (melting/freezing) points of gallium (302.9146 K) and indium (429.7485 K). On ITS-90, the atmospheric boiling temperature of water turns out to be approximately 373.124 K (99.974 degrees Celsius). So, have the properties of water changed? Of course not. What has changed is our ability to precisely determine temperatures in closer approximation to the true thermodynamic temperature. It turns out that the true temperature of water's boiling point is not quite what people thought it was when the Celsius scale was first defined long ago. It is sometimes asked why one could not redefine the temperature scale so that the familiar 0 and 100 degrees Celsius would still hold for the freezing and boiling points of water. This could be done, but it would require changing the size of the degree; this would distort another familiar relationship because the difference between absolute temperature in kelvins and the Celsius scale would have to become approximately 273.22 rather than the familiar 273.15. Also, such a definition would require changing the whole scale if more precise measurements were ever made for water's boiling point. It is better to base temperature on fundamental physics (in this case, the laws of thermodynamics applied to an ideal gas) and use one precisely reproducible point (such as water's triple point) to define the scale. In this way, water is still an important part of defining the temperature scale, but it is the triple point, rather than the freezing and boiling points, that is used. 
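Because the Celsius scale is offset from the kelvin scale by a fixed 273.15, the ITS-90 figures quoted above can be checked with a one-line conversion. Here is a minimal sketch in Python; the function names are mine, and the constants are the ones given in the text:

```python
# The Celsius scale is offset from the thermodynamic (kelvin) scale
# by exactly 273.15 degrees.
KELVIN_OFFSET = 273.15

def kelvin_to_celsius(t_k: float) -> float:
    """Convert a temperature in kelvins to degrees Celsius."""
    return t_k - KELVIN_OFFSET

def celsius_to_kelvin(t_c: float) -> float:
    """Convert a temperature in degrees Celsius to kelvins."""
    return t_c + KELVIN_OFFSET

# Values quoted in the text:
triple_point_k = 273.16    # defined exactly
boiling_its90_k = 373.124  # approximate boiling point on ITS-90

print(round(kelvin_to_celsius(triple_point_k), 3))   # 0.01 degrees Celsius
print(round(kelvin_to_celsius(boiling_its90_k), 3))  # 99.974 degrees Celsius
```

This makes the text's point concrete: the triple point sits at 0.01 degrees Celsius by definition, while water's atmospheric boiling point lands at about 99.974 degrees Celsius rather than exactly 100.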
Of course for most practical uses, it is an adequate approximation to think of water as boiling at 100 degrees Celsius rather than 99.974. The other important historical use of water as a measurement standard has been in the definition of mass. The gram was originally defined as the mass of one cubic centimeter of water at some standard condition. However, mass is now referenced to the standard kilogram, which is a platinum/iridium cylinder kept in Paris. This is advantageous because it is independent of the standard of length and because a solid is easier to weigh precisely than a liquid. Careful measurements have shown that liquid water at its density maximum has a density slightly less than 1 g/cm3; the currently accepted number is 0.999975 g/cm3. For more on the fundamental definitions of SI units, see the NIST Reference on the International System of Units (SI). Updated June 15, 2000
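The density figure above implies a small but measurable gap between the old water-based gram and the modern kilogram standard. A short sketch of that arithmetic (function name mine, density value from the text):

```python
# Density of liquid water at its density maximum (near 4 degrees Celsius),
# as quoted in the text, in g/cm^3.
RHO_MAX = 0.999975

def water_mass_grams(volume_cm3: float) -> float:
    """Mass in grams of water at maximum density for a given volume."""
    return RHO_MAX * volume_cm3

# One litre (1000 cm^3) of water at maximum density:
mass_g = water_mass_grams(1000.0)
shortfall_mg = (1000.0 - mass_g) * 1000.0  # grams -> milligrams

print(round(mass_g, 3))        # 999.975 g, not a full kilogram
print(round(shortfall_mg, 1))  # 25.0 mg short
```

In other words, under the original cubic-centimeter definition a litre of water would fall about 25 milligrams short of the modern kilogram, which is exactly why a solid artifact standard is preferred.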
Music, the visual arts, and dance can be used to develop critical thinking; multiple routines can be applied to the process of teaching critical thinking skills. Creative thinking is the process we use to generate new ideas, products, services and innovations; it is essentially the act of changing, reapplying and merging. More than just making connections, the art students had to use their critical thinking skills to understand all the information and nuances of their public. Understanding the role of critical and creative thinking in the creative arts (visual): creative and critical thinking may very well be different. Confidence and skills to use critical and creative thinking combine two types of thinking, critical and creative, and can be highly visual. Demonstrated improved visual diagnostic skills in medical students who participated; over time this fosters critical, creative, and flexible thinking (VTS, Visual Thinking Strategies). About Visual Leap: the mission of Visual Leap is to improve teaching and learning through visual thinking strategies. One school's approach for developing critical thinking skills from reception: creative producers and visual learners; visual images are critical in their learning. The synergy that occurs between creativity and critical thinking allows powerful learning to occur. Critical thinking in the arts will foster critical thinking skills in relationship to performing and visual arts; students will learn. Preparing creative and critical visual displays: a process-based program that helps young people build lifelong skills in creative and critical thinking. A compare-and-contrast chart of critical and creative thinking demonstrates and compares the two.
81 Fresh & Fun Critical-Thinking Activities: engaging activities and reproducibles to develop kids' higher-level thinking skills, by Laurie Rozakis. To expand and improve critical and creative thinking, taking us on a stimulating visual tour; the second key to teaching critical thinking skills is to ensure. Critical thinking skills charts using Bloom's taxonomy to format nursing questions; find this pin and more on creative and critical thinking by Createabilities. EBSCOhost serves thousands of libraries with premium essays, articles and other content, including "Visual Thinking Strategies = Creative and Critical Thinking." Aesthetic development and creative and critical thinking skills study: Visual Thinking Strategies is a research-based education nonprofit. Teaching critical thinking skills to fourth grade students; critical thinking in everyday life: 9 strategies. Using visual art activities for creativity encompasses creative and critical thinking; this aspect focuses on creative thinking skills. Optometric Education, Volume 36, Number 3 / Summer 2011: visual mapping to enhance learning and critical thinking skills, Héctor C. Santiago, OD, PhD, FAAO. Providing a forum and resources about Socratic questioning, higher-order thinking, and critical thinking; organizer of conferences and publisher of books and academic materials. Implementation of Visual Thinking Strategies (VTS) into the Camelot Intermediate School curriculum in Brookings, South Dakota, has fostered this development.
The first round of discussions begins with this: Since the early 1990s, school districts across the United States have invested tens of billions of dollars in educational technology. As a result, computers have become an integral part of the learning experience in many elementary and secondary schools. As more schools integrate 1-to-1 computing into the classroom, it is increasingly important to determine how the devices are used, how ubiquitous computing changes the learning experience, and how teachers integrate available technology into curricula. --ubiquitous computing evaluation consortium To answer the questions posed in this introduction, I chose to read the following articles: --The goals of ubiquitous computing are to reduce economic inequity, raise student achievement through specific interventions, and transform the quality of instruction. --Bette Manchester, who oversees the Maine Learning Technology Initiative, has said, "There needs to be a leadership team that looks at things through three different lenses: the lens of curriculum and content; the lens of the culture of the building; and the lens of technical needs." --Teachers who believe that students are capable of completing complex assignments on their own or in collaboration with peers may be more likely to assign extended projects that require laptop use and to allow students to choose the topics for their own research projects. Teachers who view technology as a tool with a wide variety of potential applications are more likely to use laptops often with students. Those teachers who believe that there are adequate software and Internet-based resources available to help teach their particular content area may use laptops with students more often than teachers who believe that there are simply not enough high-quality materials available.
(b) Mandelonitrile may be obtained from peach flowers. Derive its structure from the template in part (a) given that X is hydrogen, Y is the functional group that characterizes alcohols, and Z characterizes nitriles. Hey everyone. So, in this problem we are going to be drawing mandelonitrile. Now, it told us that we're going to use this template and we need to fill in our X, Y and Z. So, pretty straightforward. So what did it say that X was? It said it was just a hydrogen. So, all we need to do is include an H right there. Now, what did it say Y and Z are? Well, for Y it said it characterizes an alcohol, and Z characterizes a nitrile. An alcohol we know is going to be represented as OH, right? So include OH there. Now, what does the nitrile look like? There's a nitrogen involved and it's actually triple-bonded to a carbon, so we have a carbon that's triple-bonded to our nitrogen, and this will be our mandelonitrile, so let's write that in, okay? So I hope this made sense; we were just filling in X, Y and Z with the functional groups they told us.
The labyrinth is a part of the inner ear that contains the organs of balance (the semicircular canals and otolithic organs). If it becomes swollen and inflamed, you may develop labyrinthitis ("lab-uh-rin-THY-tus"). The inflammation may cause sudden vertigo because nerves from the labyrinth start to send incorrect signals to the brain that your body is moving, but your other senses (such as vision) don't detect the same movement. The confusion in signals can make you feel that the room is spinning or that you have lost your balance (vertigo). Vertigo is not the same as feeling dizzy. Dizziness means that you feel unsteady or lightheaded, but vertigo makes you feel like you're spinning or whirling. It may make it hard for you to walk. Labyrinthitis may also cause temporary hearing loss or a ringing sound in your ears (tinnitus). Your doctor may also call this vestibular neuritis. The two problems have the same symptoms and are treated the same way. These problems are typically thought to be related to a viral infection, although this has not been proven. Some cases of labyrinthitis can be brought on by fluid or infection in the middle ear (behind the eardrum). Your doctor will try to differentiate between the two. Hearing tests and imaging studies such as a CT scan or MRI may be ordered. In some cases, labyrinthitis is not obvious during an ear exam, so a complete physical exam, including a neurological evaluation, should be performed. Symptoms of labyrinthitis can mimic those of other conditions, so your doctor may order tests to rule them out. Some conditions that mimic labyrinthitis include Meniere's disease (an inner ear disorder), migraine, small stroke, and brain tumor. Tests to make an accurate diagnosis may include hearing tests (labyrinthitis is more likely if you have hearing loss), blood tests, a CT or MRI scan of your head, and an electroencephalogram (EEG), which is a brain wave test. Your doctor will also check your eyes.
If they are flickering uncontrollably, it is usually a sign that your vestibular system (the body's balancing system) is not working properly. In some cases, vestibular neuritis/labyrinthitis may go away on its own. This can take several weeks. If the cause is a bacterial infection, your doctor will give you antibiotics, but most cases are caused by viral infections, which can't be cured with antibiotics. Initial treatment often involves steroid medicine and antiviral medication in an attempt to shorten the duration and severity of symptoms. Medicines for the symptoms of dizziness and nausea can also be used. In some cases where imbalance persists for a long period, a course of vestibular therapy (balance therapy) may help resolve the imbalance more rapidly than otherwise. In rare cases where symptoms persist for 12 months or longer, labyrinthectomy (removal of the inner ear) or vestibular neurectomy (cutting of the balance nerve) could be considered. In addition to taking medications, there are several techniques you can use to relieve vertigo. Balance exercises such as simple head movements and keeping your balance while standing and sitting may reduce symptoms of vertigo. Vertigo usually gets better as your body adjusts (compensation). Medicines like antihistamines can help your other symptoms, but they may make it take longer for vertigo to go away. If your vertigo continues for a long time, physical and occupational therapists can teach you exercises to help improve balance.
03:00 PM to 04:15 PM MW
Section Information for Fall 2011
From their earliest colonial beginnings to the present day, Americans have defined themselves, their sense of national purpose, and even their nation itself in terms of their physical environment. In the eighteenth century, Americans insisted that their dominance over an untamed wilderness would serve as an example to the rest of the world of the new nation's moral virtue. Later, the market revolution of the nineteenth century provoked an "ecological revolution" in which Americans recast their natural environment as an endless cornucopia to be exploited and developed. And in the decades following the industrial revolution, Americans again recast the natural world as a fragile space to be conserved, preserved, and protected. Throughout, Americans transformed their national landscape in ways both profound and profane. In tracing these transformations, the course will pursue three goals, or directions. First, we will ask how the natural world and natural resources have historically shaped patterns of American life. Second, we will ask how Americans have given meaning to the world around them, and how those meanings have both governed Americans' relationship with the environment and changed over time. Finally, we will seek to understand how Americans have altered the landscape around them to suit their notions of nature, wilderness, and environment, and the political consequences of those decisions. Study of historical topics or periods of special interest. Topics announced in advance. May be repeated for credit when topic is different.
Definition - What does Sewage mean?
Sewage is waste material that is carried through a sewer from a residence or an industrial workplace to be dumped or converted to a non-toxic form. Sewage is more than 99% water, but the remaining material contains solid material, ions and harmful bacteria. This matter must be extracted from the water with a filtration process before the sewage can be released back into a natural water source.
Safeopedia explains Sewage
Large solid matter (more than 2 cm in diameter) in the sewage is removed by a screening process and disposed of in a landfill. The water is then left to settle; remaining solids sink to the bottom while oils and greases float to the top. The solids are scraped from the bottom, and the oils and greases are skimmed from the surface. The waste material is chemically treated with bacteria, which convert it into bio-gas that is in turn used to power the sewage plant. The water is also treated with bacteria and then UV light to destroy harmful bacteria before it is released into the sea. Sewage is a major cause of disease in a community, and its effective management is therefore important for health and safety.
In the nanoscale world, nanoparticles are measured in billionths of a meter, which often makes them only a little larger than atoms. Because these nanoparticles are typically smaller than the wavelengths of visible light––which range from 700 nanometers for red light to 400 nanometers for violet light––they are literally invisible to even the most powerful optical microscopes. Now, scientists at Los Alamos have constructed a novel device for "seeing" tiny metal nanoscale particles by combining sub-wavelength, near-field imaging with broadband interference spectroscopy that uses the high-intensity illumination produced by an ultrafast laser––a laser that emits pulses lasting only a few quadrillionths of a second. The technique could help scientists around the world gain a deeper understanding of the largely unseen nanoscale world. The design of the device, along with details of how it was recently used for studies of collective oscillations of electrons in individual gold nanoparticles and their assemblies, is discussed in last week's issue of the journal Optics Letters. The technique begins by directing light through a thin optical fiber that has been previously heated and stretched until, like stretched taffy, the middle becomes far thinner than the ends. This tapered fiber is then cut at its thinnest diameter and clad in aluminum to create––in effect––a tiny "nanoscale flashlight" with an aperture only 50 to 100 nanometers across. "Since there are no white-light lasers that would make it possible to 'see' nanoparticles in more than one wavelength of the visible light spectrum," says Victor Klimov, leader of the research team, "it was necessary for our team to create a high-intensity illumination source for the optical fiber.
We did this by focusing the beam of an ultrafast laser onto a transparent sapphire plate, which converted the single-wavelength laser output into a broadband spectrum of high- intensity light that is somewhat equivalent to white light and, therefore, is referred to as "femtosecond white-light continuum." The important property of the "femtosecond white-light continuum" is its low, laser-beam-like divergence that allows researchers to efficiently couple it into an optical fiber and to create a high-intensity, multi-color, near-field light source. For use, the "nanoscale flashlight" was positioned just a few nanometers away from a sample mounted on a near-field scanning optical microscope. As the emitted light is transmitted past and through the sample, a photomultiplier tube, a device that amplifies the effect of a single photon to measurable levels, collects and measures it. This signal is used to reconstruct a nanoscale image while the near-field tip is raster-scanned across the sample. At the same time, the transmitted light is also dispersed by a spectrometer and is detected by a CCD recording device to create a broadband absorption/extinction spectrum for each sample point. The combined "multidimensional" data is giving scientists their first real look into the nanoscale world. Because of its ability to both image the nanostructure and to interrogate it spectroscopically, the instrument developed by Los Alamos researchers is ideally suited to guide the design of nanophotonic and nanoplasmonic structures and devices. In addition, this new capability may provide a powerful new tool for "real-time" studies of electronic dynamics at the nanoscale level with high resolutions in both time and spatial domains.
Goals: The Mars 2 mission had two objectives: to place an orbiter around Mars and to put a lander on the Martian surface in working order. Each was to take pictures and collect information about the planet. In addition, the orbiter was to monitor the solar wind and the interplanetary and Martian magnetic fields. Accomplishments: The orbiter circled Mars as desired (a first for the USSR but the second overall, since the Soviet spacecraft arrived at Mars 13 days after America's Mariner 9) and transmitted valuable data to Earth until contact was lost in July 1972. The lander, however, entered the Martian atmosphere at too steep an angle. Its parachute failed to open and the lander crashed onto the Martian surface.
Lung cancer is the leading cause of cancer deaths in both women and men in the United States and throughout the world. Lung cancer has surpassed breast cancer as the leading cause of cancer deaths in women. In the United States in 2007, 160,390 people were projected to die from lung cancer, which is more than the number of deaths from colorectal, breast, and prostate cancer combined. Only about 2% of those diagnosed with lung cancer that has spread to other areas of the body are alive five years after the diagnosis, although the survival rates for lung cancers diagnosed at a very early stage are higher, with approximately 49% surviving for five years or longer. Some lung tumors are metastatic from cancers elsewhere in the body. The lungs are a common site for metastasis. If this is the case, the cancer is not considered to be lung cancer. For example, if prostate cancer spreads via the bloodstream to the lungs, it is metastatic prostate cancer (a secondary cancer) in the lung and is not called lung cancer. Cancer occurs when normal cells undergo a transformation that causes them to grow and multiply without the normal controls. The cells form a mass or tumor that differs from the surrounding tissues from which it arises. Tumors are dangerous because they take oxygen, nutrients, and space from healthy cells. About 90% of lung cancers arise due to tobacco use. Cigarette smoking is the most important cause of lung cancer. Research as far back as the 1950s clearly established this relationship. Cigarette smoke contains more than 4,000 chemicals, many of which have been identified as causing cancer. A person who smokes more than one pack of cigarettes per day has a risk of developing lung cancer 20-25 times greater than someone who has never smoked. However, once a person quits smoking, his or her risk for lung cancer gradually decreases. About 15 years after quitting, the risk for lung cancer decreases to the level of someone who never smoked.
Cigar and pipe smoking also increase the risk of lung cancer, but not as much as smoking cigarettes.

Most lung tumors are malignant. This means that they invade and destroy the healthy tissues around them and can spread throughout the body. The tumors can also spread to nearby lymph nodes or through the bloodstream to other organs. This process is called metastasis. When lung cancer metastasizes, the tumor in the lung is called the primary tumor, and the tumors in other parts of the body are called secondary tumors or metastatic tumors.

Lung cancers are usually divided into two main groups that account for about 95% of all cases. This division into groups is based on the type of cells that make up the cancer. The two main types of lung cancer are characterized by the cell size of the tumor when viewed under the microscope. They are called small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC). NSCLC includes several subtypes of tumors. SCLCs are less common, but they grow more quickly and are more likely to metastasize than NSCLCs. Often, SCLCs have already spread to other parts of the body when the cancer is diagnosed. About 5% of lung cancers are of rare cell types, including carcinoid tumor, lymphoma, and others.

Adenocarcinoma (an NSCLC) is the most common type of lung cancer, making up 30%-40% of all cases. A subtype of adenocarcinoma is called bronchoalveolar cell carcinoma, which creates a pneumonia-like appearance on chest x-rays. Squamous cell carcinoma (an NSCLC) is the second most common type of lung cancer, making up about 30% of all lung cancers. Large cell cancer (another NSCLC) makes up 10% of all cases. SCLC makes up 20% of all cases. Finally, carcinoid tumors account for only 1% of all cases.

Up to one-fourth of all people with lung cancer may have no symptoms when the cancer is diagnosed. These cancers usually are identified incidentally when a chest x-ray is performed for another reason.
The majority of people, however, develop symptoms. The symptoms are due to direct effects of the primary tumor, to effects of metastatic tumors in other parts of the body, or to disturbances of hormones, blood, or other systems caused by the cancer. Symptoms of primary lung cancers include cough, coughing up blood, chest pain, and shortness of breath. Symptoms of metastatic lung tumors depend on their location and size. About 30%-40% of people with lung cancer have some symptoms or signs of metastatic disease.

A cough that does not go away or gets worse over time should be evaluated by a health-care provider. Coughing up blood (hemoptysis) occurs in a significant number of people who have lung cancer; any amount of coughed-up blood is cause for concern. Chest pain is a symptom in about one-fourth of people with lung cancer. The pain is dull, aching, and persistent and may involve other structures surrounding the lung. Shortness of breath usually results from a blockage to the flow of air in part of the lung, collection of fluid around the lung (pleural effusion), or the spread of tumor throughout the lungs. Wheezing or hoarseness may signal blockage or inflammation in the lungs that may go along with cancer. Finally, repeated respiratory infections, such as bronchitis or pneumonia, can be a sign of lung cancer.
A new study shows that noise pollution from humans has doubled in over half of protected areas in the US, including state and national parks, as well as local nature areas. Some of these locations have become 10 times louder, according to the study. These changes threaten both the ability of animals to hunt and forage, as well as human well-being. The study mapped noise pollution across the US, and could help scientists identify specific areas that need to be kept quiet, such as habitats for endangered species. University of Colorado ecologist Nathan Kleist, who was not involved in the study, said it was a “call to arms.” He added: “If you’re missing noise, you’re missing a huge driver of habitat suitability.”

Noise pollution from human activity can affect human health by disturbing sleep and adding to stress and concentration issues. The 1972 Noise Control Act granted the Environmental Protection Agency the authority to set limits on noise from cars and construction. However, these rules have largely remained unenforced in parks and wilderness areas, which account for 14 percent of the country. Eighty percent of the US is now within 1 kilometer of a road, due to the growth of industrial and residential areas.

In the study, National Park Service and Colorado State University in Fort Collins (CSU) researchers recorded noise levels at 492 locations with various levels of protection. Those recordings were used to predict noise levels in protected areas nationwide. The researchers also used a computer model to estimate the ambient noise that would naturally occur at each location. They then compared the measured noise levels in protected areas with the noise that would occur there naturally, without human activity. They found that noise pollution doubled noise levels in 63 percent of protected areas. In 21 percent of these sites, there was a 10-fold increase. The research was published Thursday in the journal Science.
Lead author of the study Rachel Buxton, a conservation biologist at CSU, said the researchers “were surprised we found such high levels of noise pollution in such high amounts of protected areas.” Increased noise levels can disrupt animal communities, in the process interfering with seed dispersal for some plants. Birds that communicate with songs, prey that must listen for predators, and other animals are all affected by noise pollution. Clinton Francis, an ecologist at California Polytechnic State University in San Luis Obispo, who was not involved in the study, said, “In so many landscapes, both people and other organisms are living in shrunken perceptual worlds.” For humans, excess noise can cancel out the benefits of time spent in natural areas, such as better mood and memory.
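The “doubled” and “10-fold” figures map naturally onto the decibel scale, where sound energy grows logarithmically: an increase of about 3 dB corresponds to a doubling of sound energy, and 10 dB to a ten-fold increase. The sketch below illustrates that standard relationship only; the function name is ours, and the study itself is not quoted in code.

```python
def energy_ratio(delta_db: float) -> float:
    """Sound-energy ratio corresponding to an increase of delta_db decibels,
    using the standard logarithmic definition: ratio = 10 ** (delta_db / 10)."""
    return 10 ** (delta_db / 10)

# ~3 dB doubles sound energy; 10 dB is a ten-fold increase.
print(round(energy_ratio(3), 2))  # ≈ 2.0
print(energy_ratio(10))           # 10.0
```

This is why a seemingly small rise in measured decibels can represent a dramatic change in the acoustic environment of a protected area.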
As stated in the Webster's 1913 definition, Hysteria can give rise to imaginary sensations. One method of demonstrating that somebody has Hysteria is by proving that some sensation the patient reports is physiologically impossible. A classic way of doing this requires a tuning fork. The medical practitioner strikes the fork and presses the handle sequentially to two points on the patient's head: about six inches above each eye on the top of the head. The patient might report that they hear the fork louder in one ear than in the other. If they are not hard of hearing, this is impossible, and they have Hysteria.

The reason for this is that the sound generated by the tuning fork is not heard via normal air conduction, but rather by bone conduction. The skull is a solid bone, which enables it to conduct sound more proficiently than air. In air conduction, the sound dissipates quickly, and it is understandable that something heard in one ear might not be heard at all in the other. Given the nature of bone, however, the sound waves are conducted at equal intensity to the sound-sensing cochlear nerves of both ears. As such, a person should hear the fork at equal volumes in both ears both times the fork is applied.

Hysteria is what Sigmund Freud would call a conversion or somatization disorder, wherein the patient suffers psychological stress which manifests itself as a physical problem. The term Hysteria is actually rather vague, and refers to a broad category of disorders called hysterical disorders. Sometimes, actual physical disorders are present, which must be ruled out before making the diagnosis of a hysterical disorder. Some of these disorders may be diagnosed if a patient reports numbness in only one limb or part of a limb. Similarly, various clinical inventories may be used to predict hysterical tendencies. For better or for worse, these illnesses tend to manifest in vastly different ways.
Keep in mind, therefore, that if you suspect somebody of having a Hysterical disorder, tapping them on the head with a tuning fork is not guaranteed to prove anything.
Pole to Pole

A creative cross-curriculum project linked to the theme of the North and South Poles. Ideal for Foundation Stage and Key Stages 1, 2 and beyond.

The North Pole

The Arctic is the area located around the North Pole. When referring to the Arctic, people usually mean the part of the Earth within the Arctic Circle. Although there is no land at the North Pole, the icy Arctic Ocean is teeming with life. There is also a lot of land within the Arctic Circle (northern parts of Asia, Europe and North America). Land within the Arctic Circle is called 'tundra', and it supports less life than most other biomes because of the cold temperatures, strong dry winds, and permafrost (permanently frozen soil). Long periods of darkness (in the winter) and light (in the summer) also affect Arctic life. When not otherwise qualified, the term 'North Pole' usually refers to the Geographic North Pole - the northernmost point on the surface of the Earth, where the Earth's axis of rotation intersects the Earth's surface.

North Pole Climate

The North Pole is significantly warmer than the South Pole because it lies at sea level in the middle of an ocean (which acts as a reservoir of heat), rather than at altitude in a continental land mass. During the winter (January) temperatures at the North Pole can range from about -43°C (-45°F) to -26°C (-15°F), perhaps averaging around -34°C (-30°F). Summer temperatures (June - August) average around freezing point. The sea ice at the North Pole is typically around two or three metres thick, though occasionally the movement of floes exposes clear water. Some studies have indicated that the average ice thickness has decreased in recent years due to global warming.

Day and Night at the North Pole

During the summer months, the North Pole experiences 24 hours of daylight each day, but during the winter months it experiences 24 hours of darkness each day. Sunrise and sunset do not occur in a 24-hour cycle.
At the North Pole, sunrise begins at the Vernal equinox; the Sun then takes three months to climb to its highest point at the summer solstice, when sunset begins, taking another three months until the Sun finally sets at the Autumnal equinox. A similar effect can be observed at the South Pole, with a six-month difference.

The South Pole

When not otherwise qualified, the term 'South Pole' normally refers to the Geographic South Pole - the southernmost point on the surface of the Earth, on the opposite side of the Earth from the North Pole. Other 'South Poles' described include the Ceremonial South Pole, the South Magnetic and Geomagnetic Poles, and the Southern Pole of inaccessibility. The Geographic South Pole is defined for most purposes as one of the two points where the Earth's axis of rotation intersects the Earth's surface (the axis is actually subject to very small 'wobbles'). The projection of the Geographic South Pole onto the celestial sphere gives the south celestial pole.

At present, Antarctica is located over the South Pole, although this has not been the case for all of Earth's history because of continental drift. The land (i.e. rock) at the South Pole lies near sea level, but the ice cap is 3,000 metres thick, so the surface is actually at high altitude. The polar ice sheet is moving at a rate of roughly 10 metres per year, so the exact position of the Pole, relative to the ice surface and the buildings constructed on it, gradually shifts over time. The South Pole marker is repositioned each year to reflect this.

South Pole Climate

During the southern winter, the South Pole receives no sunlight at all, and in summer the Sun, though continuously above the horizon, is always low in the sky. Much of the sunlight that does reach the surface is reflected by the white snow. This lack of warmth from the Sun, combined with the high altitude (about 3,200 metres), means that the South Pole has one of the coldest climates on Earth. Temperatures at the South Pole are much lower than at the North Pole.
In midsummer, as the Sun reaches its maximum elevation of about 23.5 degrees, temperatures at the South Pole average around -25°C (-13°F). As the year-long 'day' wears on and the Sun gets lower, temperatures drop - sunset (late March) and sunrise (late September) being around -45°C (-49°F). In winter, the temperature remains steady at around -65°C (-85°F). The South Pole has a desert climate, almost never receiving any precipitation.

- One of the earliest expeditions to reach the North Pole was that of British naval officer William Edward Parry, who in 1827 reached latitude 82° 45' North.
- The Polaris expedition, an 1871 attempt on the Pole led by Charles Francis Hall, ended in disaster.
- In April 1895 Norwegian Fridtjof Nansen reached latitude 86° 14' North.
- The conquest of the North Pole is traditionally credited to American Navy engineer Robert Edwin Peary, who claimed to have reached the Pole on 6 April 1909. However, Peary's claim remains controversial.
- The first undisputed sighting of the Pole was on 12 May 1926, by Norwegian explorer Roald Amundsen and his American sponsor Lincoln Ellsworth from the airship Norge.
- Sir Wally Herbert led the team that made the first surface crossing of the Arctic Ocean (1968-69) - and its longest axis - a feat that has never been repeated. In so doing the team became the first to reach the North Pole by surface travel. In addition, no one alive today has personally surveyed and mapped on the ground a larger area of Antarctica than Sir Wally. He has been awarded the Polar Medal and was knighted in 2000 for services to polar exploration.
- The first humans to reach the Geographic South Pole were Norwegian Roald Amundsen and his party on 14 December 1911. Amundsen's competitor, Robert Falcon Scott, reached the Pole a month later. On the return trip Scott and his four companions all died of hunger and extreme cold.
- In 1914 British explorer Ernest Shackleton's Imperial Trans-Antarctic Expedition set out with the goal of crossing Antarctica via the South Pole, but ended in failure.
- US Admiral Richard Byrd, with the assistance of his first pilot Bernt Balchen, became the first person to fly over the South Pole on 29 November 1929.
- After Amundsen and Scott, the next people to reach the South Pole overland (albeit with air support) were Edmund Hillary (4 January 1958) and Vivian Fuchs (19 January 1958).

Download the Pole to Pole project plans:

- Literacy Project 1: Letter writing (Foundation and Key Stage 1)
- Literacy Project 2: Exciting explorers (Key Stages 1 and 2)
- Literacy Project 3: Riotous reindeers (Key Stage 2)
- Literacy Project 4: Polar poetry (Key Stages 1 and 2)
- Literacy Project 5: Daring diaries (Key Stage 2)
- Numeracy Project 1: Freezing maths (Key Stages 2 and 3)
- Numeracy Project 2: Miraculous measurements (Key Stage 2+)
- Numeracy Project 3: Icy Shapes (Key Stage 1)
- Science Project 1: Amazing Arctic animals (Key Stages 1 and 2)
- Science Project 2: Amazing Antarctic animals (Key Stages 1 and 2)
- Science Project 3: Arctic medical kit (Key Stage 2)
- Science Project 4: There may be trouble ahead (Key Stages 2+)
- Science Project 5: How do penguins keep warm? (Key Stages 1 and 2)
- Geography Project 1: Let's get connected (Key Stages 1 and 2)
- Geography Project 2: Planning an expedition (Key Stages 1 and 2)
- History Project 1: A ship's history (Key Stage 2)
- History Project 2: An exploration timeline (Key Stage 2)
- ICT Project 1: Ice cold word banks (Key Stages 1 and 2)
- ICT Project 2: Labelling and classifying (Key Stages 1 and 2)
- Art and Design Project 1: Scrimshaw time (Key Stages 1 and 2)
- Art and Design Project 2: Polar carvings (Key Stage 2)
The topic of discussion for this Constitution Monday comes from the Twelfth Amendment to the United States Constitution: “The Electors shall meet in their respective states and vote by ballot for President and Vice President, one of whom, at least, shall not be an inhabitant of the same state with themselves; they shall name in their ballots the person voted for as President, and in distinct ballots the person voted for as Vice President, and they shall make distinct lists of all persons voted for as President, and of all persons voted for as Vice-President, and of the number of votes for each, which lists they shall sign and certify, and transmit sealed to the seat of the government of the United States, directed to the President of the Senate….” This provision sets out the procedure by which the electors cast their votes for President and Vice President.

W. Cleon Skousen explained, “The Twelfth Amendment was designed to correct the deficiencies in the electoral college system. Article II, section 1 provided that the electors were invited to vote for ‘two persons,’ without separately designating either of them for President or Vice President. The idea was that the one who received the most votes would automatically become the President and the second in line would be assigned the office of Vice President. If none of the candidates had a majority, then Congress would select these officers from among the top five candidates….” (See The Making of America – The Substance and Meaning of the Constitution, p. 714.)

Charles Fried at The Heritage Foundation explained, “The Twelfth Amendment sets out the procedures for the election of the President and Vice President: Electors cast one vote for each office in their respective states, and the candidate having the majority of votes cast for a particular office is elected. If no person has a majority for President, the House of Representatives votes from among the top three candidates, with each state delegation casting one vote.
In the case of a failure of any vice presidential candidate to gain a majority of electoral votes, the Senate chooses between the top two candidates. The procedure for choosing the President and Vice President is set out in Article II, Section 1, Clauses 2-6, of the Constitution. This amendment replaces the third clause of that section, which had called for only a single set of votes for President and Vice President, so that the vice presidency would go to the presidential runner-up. In the unamended Constitution, the choice in the case of a non-majority in the Electoral College fell to the House of Representatives, as it does under the amendment, and the runner-up there would be chosen as Vice President.” (See The Heritage Guide to the Constitution, pp. 377-378.)
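The basic rule described above - a majority of electoral votes elects a candidate outright, and a non-majority sends the choice to Congress - can be sketched as a small decision procedure. This is an illustrative sketch only: the function name and the vote totals below are hypothetical, and real electoral counting involves further constitutional rules not modeled here.

```python
def electoral_outcome(votes, total_electors):
    """Apply the Twelfth Amendment's basic rule for one office: a candidate
    with a majority of electoral votes is elected; otherwise the choice falls
    to Congress (the House picks the President from the top three candidates,
    the Senate picks the Vice President from the top two)."""
    majority = total_electors // 2 + 1
    leader, leader_votes = max(votes.items(), key=lambda kv: kv[1])
    if leader_votes >= majority:
        return leader
    return None  # no majority: contingent election in Congress

# Hypothetical counts for illustration only.
print(electoral_outcome({"A": 300, "B": 238}, 538))           # A wins outright
print(electoral_outcome({"A": 260, "B": 250, "C": 28}, 538))  # None: goes to Congress
```

The second call shows the scenario the amendment anticipates: a third candidate denying anyone a majority, triggering a contingent election.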
This is an excerpt from EERE Network News, a weekly electronic newsletter.

Engineers Develop Process to Make Hydrogen from Glucose

Chemical engineers at the University of Wisconsin-Madison have developed a new process to produce hydrogen from glucose, a sugar produced by many plants. The process shows particular promise because it occurs at low temperatures in the liquid phase, so it does not require the energy needed to heat and vaporize the glucose solution. The low temperature also yields very little carbon monoxide, which can damage fuel cells. In fact, the process produces fuel-cell-grade hydrogen in a single step. However, the researchers note that further work is needed to improve the hydrogen yields from the process and to reduce the cost of the catalyst. Glucose is manufactured in vast quantities from corn starch, but can also be derived from sugar beets or low-cost waste streams like paper mill sludge, cheese whey, corn stover, or wood waste. The research was published in last week's edition of the journal Nature. See the university's press release.
Cultural Diversity: The American Family - Past, Present and Future, by Lorna S. Dils

Guide Entry to 90.05.01: This unit looks at the American histories of six different ethnic groups: Black, Hispanic, White, Japanese, Chinese and Native American, and also examines the changing American family through a series of short reading selections, writing activities, and classroom projects. Its goal is to provide the student with historical information about these ethnic groups while also looking at societal pressures on all present-day families. It culminates in a final writing activity in which students use the information they have gained about the past and present family to predict what the family will be like in the future. This unit is specifically designed for seventh grade students in the Talented and Gifted Program but can be easily adapted to all students in grades six through eight by changing the reading selections. (Recommended for English and Interdisciplinary, grades 6-8)
OUR ROCK REPORTS

Grade 4 researched the three types of rocks, the rock cycle, and how the earth is changing due to natural earth forces. They created a show to demonstrate their knowledge. We integrated technology with an online tool called VoiceThread. This tool is fun and engaging and has the potential to enhance students' reading, writing, speaking, and listening skills, which are all key components of the Common Core State Standards. The children were able to listen to themselves read and re-record, which provided them with an opportunity for self-evaluation and growth. Children were highly motivated to use technology and self-publish for an authentic audience. Click on the links below to view their projects. We hope you enjoy them as much as we do.
As the Stone Age covers around 99% of our human technological history, it would seem there is a lot to talk about when looking at the development of tools in this period. Despite our reliance on the sometimes scarce archaeological record, this is definitely the case. The Stone Age indicates the large swathe of time during which stone was widely used to make implements. So far, the first stone tools have been dated to roughly 2.6 million years ago. The end is set at the first use of bronze, which did not come into play at the same time everywhere; the Near East was the first to enter the Bronze Age, around 3,300 BCE. It must be recognised that stone was by no means the only material used for tools throughout this time, yet it is the most stubborn one when it comes to decaying and thus survives a bit better than the alternatives.

It is important to realise that the ways chosen to divide up the Stone Age into bite-size chunks (see below) depend on technological development, and not on chronological boundaries. Because these developments did not occur at the same time in all areas, strict date ranges are out of the question. Of course, this method has some difficulties, as the characteristics defining each stone tool culture are determined by us. As with all such artificially constructed ways of classification, they oversimplify things and leave many grey areas, for instance when it comes to transition periods. However, as long as this is kept in mind it is still a useful way of adding some sort of structure to such a hugely long period of time.

The Stone Age is conceived to consist of:
- the Palaeolithic (or Old Stone Age)
- the Mesolithic (or Middle Stone Age)
- the Neolithic (or New Stone Age)

The Palaeolithic spans the time from the first known stone tools, dated to c. 2.6 million years ago, to the end of the last Ice Age around 12,000 years ago. It is further subdivided into the Early- or Lower Palaeolithic (c. 2.6 million years ago - c.
250,000 years ago); the Middle Palaeolithic (c. 250,000 years ago - c. 30,000 years ago); and the Late- or Upper Palaeolithic (c. 50,000/40,000 - c. 10,000 years ago; some of these cultures persisted into the time when the Northern Hemisphere began warming up again). Furthermore, within these frameworks, various stone cultures are identified, some of which you will find below.

The Mesolithic saw humans adapt to the warmer climate, from around 12,000 BCE until the transition to agriculture, which happened at different times in different regions, the earliest of which was around 9,000 BCE in the Near East (which due to its lightning speed sort of skipped the Mesolithic altogether). At the other extreme, farming took until around 4,000 BCE to spread all the way to Northern Europe. The Neolithic, then, has no clear chronological starting point either, but is defined by the move to a more settled way of life based on farming and herding. The introduction of bronze marks the end of the Neolithic, which gradually happened in various areas from around 3,300 BCE onward.

The earliest tools

A claim went out in 2010 CE that the earliest evidence for tool use should be pushed back to the astonishing age of 3.3 million years ago – well before the first Homo are known to have roamed the earth, the first appearance of which was recently pushed back to around 2.8 million years ago. Our supposed ancestors, the contemporary Australopithecus afarensis, are held responsible for producing marks on bovid bones at a site in Dikika, Ethiopia. Moreover, a discovery in West Turkana, Kenya of stone tools dated to 3.3 million years ago seems to further bolster the idea that humans might not have been the first tool users. However, a more critical evaluation of both sites has led researchers to reject these claims.
The Dikika marks could also have been made by crocodile teeth or by trampling, and the West Turkana site may have suffered from materials from younger layers sliding down into the deposit, resulting in an incorrect date. Until these possibilities have been ruled out, the evidence must be seen as insufficient. This does not mean, though, that humans were the only ones that can be conceived to have used tools. All of the hominins that were around at that early time may have used some sort of stone technology to a greater or lesser extent.

Hominins are the group that consists of modern humans, extinct human species, and our immediate ancestors – species that are more closely related to modern humans than to anything else. This includes not only members of the genus Homo, but also of Australopithecus (to which the famous Lucy belongs), Paranthropus, and Ardipithecus. Many anthropologists argue that Homo was likely the more habitual tool user and maker, as its brain size grew so quickly over the first million years after the earliest properly documented tool use at 2.6 million years ago, and its teeth size declined. This could only have happened if there were tools to compensate for the smaller teeth. It is possibly just a waiting game, though, until the first rock-solid documentation of non-Homo tool use comes to light.

Although some animals - like chimpanzees, which are known to use sticks to dig for termites - use some sort of tools, the manufacturing process of these early stone artefacts is unique to hominins. Despite the simplicity of early stone tools, they still showcase a deliberate and controlled way of fracturing rock by using percussive blows – something which highlights a definite behavioural innovation.

The Early- or Lower Palaeolithic

The Early Palaeolithic begins with the first evidence we have of stone (also known as lithic) technology, which has so far been dated to around 2.6 million years ago and stems from sites in Ethiopia.
Two industries are recognised in this period, namely the Oldowan and the Acheulean. It lasts up to roughly 250,000 years ago, until the onset of the Middle Palaeolithic.

The Oldowan industry is named after Olduvai Gorge in Tanzania and comprises the earliest stone industry visible in our archaeological record. It is characterised by simple cores and flaked pieces, found alongside some battered artefacts like hammerstones, as well as the occasional animal bones showing cut marks. Although there is no clear end point, and it coexisted for some time with the later Acheulean industry (which began around 1.7 million years ago), archaeologists usually draw the finish line around 1 million years ago when referring to the Oldowan. Oldowan sites are first and foremost known from Africa (in places like Ethiopia, Kenya, and South Africa), but are later seen to spread towards the Near East and eastern Asia, probably carried there by Homo erectus.

At these sites, simple technologies were used to turn materials such as volcanic lavas, quartz, and quartzite into tools via techniques known as hard hammer percussion and the bipolar technique, in which a stone anvil serves as a base to rest the core on while it is hit with a stone hammer. This way, cores were turned into choppers, heavy-duty scrapers and the like; battered percussors like hammerstones and spheroids; flakes and fragments struck from rotated and manipulated cores; and retouched pieces such as scrapers and awls. It is clear these early humans were skilled and knew how to get the most out of a piece, seeing that sites often show dozens of flaked cores accompanied by thousands of flake products, indicating that many flakes were hammered from the same core piece. These early tools were most likely used to help these humans butcher animals (not always ones they had hunted themselves but likely also scavenged when possible), cut up plants, and even do some woodworking.
Crawling into early human skins, researchers have done experiments that have shown that Oldowan flakes allow for a very successful butchering of carcasses ranging in size from small mammals to ones weighing hundreds of pounds, which reflects the range of bones that are typically found at these sites. The nutritious marrow inside the bones, and juicy brains inside strong skull cases, could be retrieved by cracking them open with a hammerstone. Stone is simply pretty good at standing the test of time, but it would not have been the only thing these people used in their daily lives. It is likely that a whole range of materials was also used: skin and bark to create containers; wood to create digging sticks, spears or clubs; and horn or bone to make digging tools.

While the Oldowan was still in full swing and had just about reached East Asia by the able hands of Homo erectus, Africa became the initial host to a second tool industry: the Acheulean (c. 1.7 million years ago to c. 250,000 years ago, and named after St. Acheul in France), which spread far and wide across Eurasia a bit later on. It saw the development of tools into new shapes: large bifaces like hand axes, picks, cleavers and knives enabled the contemporary Homo erectus, and later on Homo heidelbergensis, to literally get a better grip on the processing of their kills and gatherings. These bifaces - i.e. two-faced, with a working surface on two sides - represent a new element in stone toolmaking. They were made from large flakes that were struck from boulder cores or from larger cobbles and nodules. Tools were more extensively shaped than before, as seen in the large range of proficiently created retouched tools such as backed knives, awls and side scrapers.
It is the hand axes and cleavers in particular, though, that show the ability to create symmetrical objects from stone materials, something that indicates a higher cognitive ability, as well as better motor skills, than are visible in the Oldowan industry. More precisely shaped tools meant a more delicate technique was needed; and indeed, softer materials such as wood, bone, antler, ivory, or soft stones were now used as percussors in what is known as the soft hammer technique. Flint became a popular material, and by working it and the already familiar lavas and quartzites, this technique produced thinner flakes that were then refined.

The Acheulean industry was successful and very widespread. It is found not only throughout Africa and Eurasia, but all the way to the Near East, the Indian subcontinent, as well as through Western Europe. Here, for the later Acheulean, some impressive finds of sharpened wooden spears at Schöningen, Germany (dated to at least 300,000 years ago), and Clacton in England provide the earliest evidence of active hunting and proper, designated hunting weaponry. They have been attributed to Homo heidelbergensis. Ice Age Europe would have presented some challenges in the shape of sometimes rather frigid weather conditions, especially at certain latitudes, but usage patterns on Acheulean side scrapers suggest that they were used to scrape hides that could then be turned into simple clothing. I would not be surprised if the snuggie blanket turned out to be much older than we think it is.

Interestingly, although the shape of hand axes varies widely throughout time and space, certain Acheulean sites show recurrent shapes and sizes that make it seem as if their makers all had a subscription to the same toolmaking magazine, as it looks like they all stuck to similar stylistic norms of production.

The Middle Palaeolithic

The Middle Palaeolithic (c. 250,000 – c.
30,000 years ago, and sometimes called ‘Mousterian’ after the site of Le Moustier in France) marks a shift away from the boundless popularity of the hand axes and cleavers visible throughout the Acheulean. Instead, the focus came to lie on retouched forms made on flakes produced from carefully prepared cores using what is known as the Levallois technique – a technique which was also used to a small extent in the Early Palaeolithic and the Late Palaeolithic. The use of this technique involved careful preparation of the flint core: roughing it out first to give it a flattened face and shaping a specific striking platform. This way, toolmakers could control the shape of the flake that was to be struck off. From these flakes, retouched forms such as side scrapers, points, denticulates, and sometimes blades were made, which are well-represented in many of these assemblages. Both hard hammer and soft hammer techniques were in use to help the toolmakers achieve their desired shapes. Besides stone, the technology for making wooden spears that had its roots in the Acheulean continued into the Middle Palaeolithic, as seen at the site of Lehringen, Germany, where a spear with a fire-hardened tip has been found and connected to an elephant carcass. Bone points, although rare, are also found within this industry. Stone points with thinned bases have also been found, which might indicate that they were hafted onto spear shafts. A discovery of the oldest known tar-hafted stone tools in Europe also falls within the general timeframe of this industry, and along with the stone points mentioned above helps argue the case for the Middle Palaeolithic development of composite tools. The use of tar as an adhesive for hafting arrowheads and the like is otherwise known only from several European Mesolithic and Neolithic sites – so not until a much later point in time.
All of the above hints that these Middle Palaeolithic humans may have been quite advanced. It has been argued that the steps and the forethought needed to successfully use the prepared core technique, for instance, would have demanded a considerable amount of skill from the maker. The beginning of hafting would seem to strengthen this notion. It is, however, hard to say whether this advance would have been mostly limited to the technological sphere, or whether it can be taken to mean a more general advance in human capabilities, such as with regard to social and environmental intelligence. What is clear, though, is that humans spread across the globe into ever more challenging environments; most of the zones of Africa and Eurasia were conquered, ranging from tropical and temperate to periglacial climates, with the exception of harsh deserts, the denser of the tropical forests, and the very northernmost or arctic tundras. Late in this period (which overlaps with the Late Palaeolithic), humans even reached faraway Australia, which was connected to Papua New Guinea by the grace of lower sea levels at that time, by around 40,000 years ago. Hominins that match the timeframe of this industry are archaic Homo sapiens, including Neanderthals, and anatomically modern humans (Homo sapiens sapiens).

Late- or Upper Palaeolithic

There are areas in which the Middle Palaeolithic was retained for some time still, while others had since adopted the characteristics that push them into the Late Palaeolithic (c. 50,000/40,000 – c. 10,000 years ago), a good example of the typical dating muddle that results from this technological way of classification. This industry recedes together with the ice sheets of the last glaciation or Ice Age, after which the climate warmed up.
It is best known from sites occupied by anatomically modern humans, and is generally associated with them, but some of it also falls within the timeframe of the last populations of Neanderthals, who disappeared from the fossil record by approximately 30,000 years ago. The Late Palaeolithic saw a huge proliferation of tools. Blade tools made of stone were created, but the emphasis shifted away from stone to artefacts made from materials such as bone, antler and ivory. Needles and points were made out of this non-lithic stuff, which lent itself excellently to these fine shapes, and their presence indicates that sewn clothes must have been the norm from 20,000 years ago onward. Even such technological feats as spear throwers, shaft straighteners, harpoons, and bows and arrows began to appear. A spear thrower is basically a long shaft with a hook on its end to which a spear or dart could be fitted, which would increase both the distance and the speed of the projectile hurled by the capable hands of a keen-eyed hunter. Some of these were magnificently decorated with carvings, or were even carved into the actual shapes of animals; the Magdalenian culture of western Europe provides some stunning examples of this. Towards the end of the Late Palaeolithic, arrows (and thus, by implication, bows) were in use, as they have been found at a site in Stellmoor, Germany, and are implied by the small size of many of the points that occur in this industry. These mechanical devices represent a great leap in the advance of hunting technologies and weaponry. The blade technologies are typical of the stone side of the industry, and show elongated flakes being produced by soft hammer or indirect percussion: a percussor struck a punch that was placed on the edge of a blade core. The resulting blades could be made into a whole array of tool forms such as backed knives, burins and end scrapers.
Late Palaeolithic technologies were diverse: some, such as the Solutrean of Spain and France and the Clovis and Folsom traditions of the New World, focused on bifacial points that may have been produced by soft hammer technique or by pressure flaking. Other technologies, such as African and some central- and eastern Asian ones, emphasised small blades known as bladelets and geometric microliths (small flint blades or fractions of blades) that were turned into composite tools and projectiles through hafting. Falling within the timeframe of both the Middle- and Late Palaeolithic, modern humans managed to reach Australia by about 40,000 years ago. However, it was not until relatively late into the Late Palaeolithic that we see the first evidence of humans making it across the Bering Strait and into the Americas, where they arrived by at least 15,000 years ago. The most visible culture there is the Clovis culture (c. 13,500 – c. 13,000 years ago), which is famous for its fluted spear points and is often connected with the remains of mammoths. Humans had by now conquered all feasible continents (Antarctica not being a realistic option) and climates ranging all the way from tropical to desert and stone-cold arctic climates, using this new range of tools to effectively exploit their environment and help them adapt to all of these different temperatures. The way humans adapted to new terrains and a wider range of climates throughout the Late Palaeolithic is a good precursor to the kind of adaptability that was required when the last glaciation or Ice Age ended around 12,000 years ago. The climate warmed up, causing sea levels to rise, flooding low-lying coastal areas and creating, for instance, the English Channel, and denser woodlands began to appear.
Importantly, many giant prehistoric mammals such as woolly mammoths gradually went extinct, probably pushed by the climate and perhaps also by human hunters, impacting the sort of food sources that were available to contemporary hunter-gatherers. The Mesolithic, spanning from the end of the Ice Age to the transition to agriculture (which happened at different times in different regions), saw humans adapt to these changing environments. Whereas agriculture did not reach Northern Europe until around 4,000 BCE, in the Near East the Mesolithic barely began at all, since it was the first place where the leap to farming was made, around 9,000 BCE. The archetypal tool of the Mesolithic (although it also occurs outside of this industry) is the microlith – a small flint blade or fraction of a blade, often only around 5 mm long and 4 mm thick. Striking a small core could produce the desired results, as could a technique in which a larger blade was notched and then a small portion snapped off. A by-product of this technique is the tiny waste chips known as microburins, after which it was named. Microliths could be used as weapon- or arrow tips, or multiple microliths could be hafted together to create cutting edges on tools. In the Early Mesolithic, these microliths seem to be highly standardised relative to the same sort of items from the Later Mesolithic, which may hold clues to the different ways these people could have hunted. Although the rich, imaginative decorations seen in the Late Palaeolithic are largely absent from the Mesolithic, these microliths show a development towards a very sophisticated and versatile composite tool type that was moreover far more efficient in its use of flint resources than previous industries had been. The huge percentage of arrowheads in Mesolithic assemblages makes it highly likely that the meaty parts of these hunter-gatherers' meals had come to their unfortunate end at the hands of skilled bowmen.
The sorts of prey these arrows could take down ranged from small animals like birds and fish to larger game such as onager and gazelle – which could be brought down with chisel-ended arrows. Barbs could also be fixed to arrows, which – as experiments have shown – proved very effective indeed at causing wide, gaping wounds once the arrow tip had entered its target. The bigger the wound, the more internal damage, and the greater the blood loss. However, although these Mesolithic people's weapons were very much capable of bringing down huge beasts, the number of huge beasts declined during this time, and alternatives had to be found. Luckily, these hunter-gatherers successfully adapted to a more varied diet, using their arrows on many different animals, as well as developing sophisticated fishing gear, namely the first known nets and hooks. Mattocks and axes were even used to clear unwanted trees, and both canoes and skis have been found for this period. Bone adzes proved useful tools for uprooting tubers, while awls could be used in both plant processing and hide working. Scrapers, also used for defleshing, thinning and softening hides, were very popular in the Late Mesolithic, alongside similarly used bone and antler tools. Strikingly, it seems these people were able to get in touch with faraway societies in order to trade goods and tools, as seen in the spread of Mediterranean obsidian and of Polish chocolate-coloured flint. It must be emphasised that this age saw great regional variation. With the coming of agriculture, between around 9,000 BCE in the Near East and around 4,000 BCE by the time it had spread all the way to Northern Europe, the lifestyles of the societies in question obviously changed drastically. This is the only part of the Stone Age in which the societies in question are no longer hunter-gatherers.
However, as implied by the way we choose to let this age end with the start of the use of bronze (the first use of which was in the Near East around 3,300 BCE), the Neolithic still saw stone tools being used. Despite this huge change to a more sedentary lifestyle, it is clear that some Mesolithic traditions carried over far into the Neolithic. Examples are bone and antler technologies and the use of projectile points. Harvesting knives and sickles have been found in both the Palaeolithic and the Mesolithic, as they had uses before farming, too, but they became popular in this new context. Stone-working techniques such as grinding and drilling, which were not uncommon even in the later Palaeolithic, now took on a whole new dimension and were applied much more fervently than before. The biggest effect on technology seems to stem from the economic requirements of supporting a larger population than the hunter-gatherer bands, as in villages. Such a fully sedentary lifestyle would have meant less need for tools to be light and easy to lug across the terrain (it has been argued that there is a contrast here between even the most sedentary hunter-gatherers and sedentary agriculturalists). A good example of a piece of equipment that would have been slightly impractical to carry by manpower only is the loom, which is almost exclusively known from agriculturalists, and which facilitated textile production. It is conceivable that tools used within textile production were among the first ones that appeared in the early Neolithic. A Neolithic site in Syria shows implements such as drills and reamers that may have been used for the joinery of wood – that is, joining pieces of wood together by using pegs and the like. If this all seems rather peaceful so far, do not be alarmed. Humans would not be humans if they did not also show a glimpse of a violent side.
Axes are very visibly present in the Neolithic archaeological record; whole hoards of flint axes are known. However, materials other than flint were also used. These tools fall within the category of ground stone tools, were carefully polished, and could be hafted onto wooden handles. Rather than imagining nothing but rampaging hordes of axe warriors, however, we should note that many of these would have been work axes, used to fell trees rather than neighbouring people. Sadly, as time went on and people transitioned through the Bronze and Iron Ages, from prehistory into history, all the way to today, the use (and killing potential) of weapons only seems to keep growing exponentially. I, for my part, prefer the old stones and other Stone Age tools.
We see that aggression seems to be a way of living, a way of moving forward, a way of success, and a source of pride in children. This, in a very subtle way, encourages violence. When aggression and violence are promoted in society, human values diminish. We need to counteract this. A sense of shame has to be attached to anger and violence. We need to promote human values, especially love, compassion and a sense of belonging, loud and clear. In the teenage years there is considerable emotional turmoil and turbulence happening within. If teenagers do not find a way to express these emotions, they are stuck with them. These emotions ferment and become violence. Emotions are more in control of our life than our intellect, our thoughts, our concepts and ideas. An important way to calm negative emotions is to use the breath. Creating a sense of belonging is another method. Increasing self-esteem, smiling through criticism, developing a sense of humor, engaging in physical activity, and eating Sattvic food are further methods to control emotions. You can read the complete article on How to Tackle Negative Emotions here.
Ideas for Teaching ESL Students in Mainstreamed Classes

What is ESL Education?

ESL means English as a Second Language. This refers to people for whom English is not their mother tongue, and who are living in a society where English is the language of the mainstream community around them. They may be immigrants, children of immigrants whose families speak their native language at home, or international students who come to English-speaking countries to polish their business English or complete their education. The term is sometimes misleading, for in many cases, it's not a second language, but a third or fourth language, and that is why ESL may also be referred to as ESAL, English as a Second or Additional Language. In English-speaking countries like the USA, the UK, Australia, and Canada, with high rates of immigration, large numbers of ESL students in classrooms are having a significant impact on the public school system. In British Columbia, Canada, for example, in 2011, 57,991 B.C. students, or 10 per cent of the entire student population, were ESL students, according to a 2006 article from the Vancouver Sun newspaper. School districts receive an extra $1,100 from the B.C. Ministry of Education for each ESL student enrolled in their districts. As domestic birth rates remain stable or decline, most of Canada's population increase comes from immigration. ESL students around the world come from varied backgrounds. Some, especially Mandarin speakers from Taiwan and China, come from cultures with a tradition of valuing education and schooling. In some cases the families are affluent and the home life and family expectations create structure and support for the students to do well at school. Others are refugees and may be traumatized by war. Some have lost their families; some are boat people; some are orphans. Some have never learned to read and write in their own culture and language.
These students need a lot of help in the class, and the teacher cannot always give them the help they really need while still teaching the other 30-odd students in the class. Does this picture sound like what is happening in large urban centers in your country? In Vancouver, the largest city in western Canada, with a population of over 2 million, the Vancouver School Board website reports that:
- 25% of K-Grade 12 students are designated ESL
- 60% speak a language other than English at home
- 126 languages have been identified in Vancouver schools
In his new book English-only Instruction and Immigrant Students in Secondary Schools: A Critical Examination (2011), Dr. Lee Gunderson of the Language and Literacy Education Department of the University of British Columbia reports his findings from a longitudinal study in Vancouver high schools between 2001 and 2006. This research showed that 65% of ESL students registered in provincially examinable academic courses in Grade 8 were no longer registered by Grade 12. Although some of them took high school graduation credits in adult education programs later, Dr. Gunderson’s study concludes that 40% of these students drop out and never graduate from high school. This is a significant social problem that affects all of us, for without high school graduation, these people are more likely to work at low-paying jobs, pay fewer taxes and have less free time and resources to support their own children’s development and aspirations.

How Can Teachers Help ESL Students in Classrooms?

Here are some classroom management practices that can help ESL students cope in mainstreamed classes. Some are much larger tasks than one teacher can achieve alone, and require coordinating school-wide or community involvement.

1.
Prepare the environment as a learning center where students can work at their own level in a structured ladder of tasks. As much as possible, post class material and assignments online, in Moodle or Google documents, for example, so students can access and practice as much as they can at their own pace.

2. Use pair work and small mixed-culture group activities. Students are more comfortable interacting in small groups where they don't feel so conspicuous. They ask questions, share ideas, solve problems and teach each other. Assign a mix of languages to each group, so the functional language has to be English, and the students can informally find out about each other's world views, life experience, and cultural background.

3. Depending on the age and level of your class, assign Show and Tell projects often and regularly. Have students practice presenting in pairs or to small groups, or both, before presenting in front of the whole class.

4. Assign students to research a topic or teach a skill about what they know or where they are from, or what they know about their family tree or ancestor history.

5. Try some of these tips to strengthen your students' reading skills:
- Use Reading Buddies--native speakers listen to ESL students, or students listen to each other, while the teacher works with ESL groups and native-speaker groups at various levels on a rotating basis.
- Assign novel study projects, with ESL students reading Penguin Simplified Readers at appropriate levels, while native speakers read grade-appropriate selected novels. I have written more about this here.
- Establish an Extensive Reading Project for all students to read extracurricular books from a selected list, and respond in some way--summary, book report, paragraph, creative writing response, visual arts response, or dramatization.
- Introduce a Silent Sustained Reading period, also known as Drop Everything and Read (DEAR), when for a period of 15 to 30 minutes, depending on age and level, the whole class, including the teacher and any present parent volunteers, reads silently in whatever text they choose.

6. Include parent volunteers in the classroom. Plan specific activities for them to do so they feel useful.

7. Start a classroom blog or website where everyone contributes written, photographic and video content. Sometimes students may be weak in language skills but strong in practical skills that are valuable sources of peer teaching. Here is an example of Grasslands Press, which one of my colleagues recently started with his advanced academic writing class.

8. Use Information Gap Activities. As an example of this, half the class (native speakers) listen to a short video, then they pre-teach it to the ESL students, telling the ESL students what it's about, drawing sketches, drafting outlines, and introducing vocabulary. Then the whole class listens to the target video. Have it available in the learning center for all students to review and practice as required. Assign writing a summary, drawing a response, or preparing for a content test according to the level of the students and the subject of study.

9. Prepare a Class Performance--drama, choral reading, opera. According to your time line and the class's skills, students either write it themselves or prepare it from a story they know and like.

10. Read stories or a serial novel aloud to the class, and assign response activities and small group discussion tasks.

11. Set up a homework club, learning center, or writing center after school where advanced students, older students, or community volunteers can get work experience helping ESL students complete homework. Many ESL students come from family backgrounds where there may not be a quiet study area or time at home.
Many have jobs after school and on weekends, doing their best to support themselves or help their family pay bills. Many have no English-speaking adults at home who can give them guidance or structure if they need help with school work.

How Long Does It Take to Learn English as a Second Language?

- Students vary, and so do their backgrounds.
- Students who arrive in childhood and start learning English while young learn faster.
- Social students who take risks and talk to native speakers rather than staying mostly with their own language group learn faster.
- Students from families who value literacy and support school, homework, school rules and authority learn faster.
- Students who come from agricultural backgrounds, who are traumatized by war, who have lost parents and relatives, who have spent years in refugee or relocation camps, or who have spent little or no time in schools in their own country take longer.
Taking into account the factors above, usually outgoing students can speak and understand social English after about three years. However, academic English has more complex requirements for vocabulary, reading, writing and cultural competency, and takes 7 years or longer to approach mastery. I have observed from my own experience as an ESL and literacy teacher for nearly thirty years that even after 7 years of structured, academic study, there is usually evidence in an ESL writer's work that the author is not a native speaker.

What Can Students Do to Learn English as a Second Language?

High school and post-secondary students who are serious about completing their education in North America and graduating with competitive grades from high schools and universities need to do whatever they can to help themselves and accelerate their language acquisition.

1. Speak, listen, read and write English every day. Use the free public libraries, read newspapers, listen to TV and movies in English, talk to native speakers of English in the community.
There are many free sites on the internet to practice these skills, and I have listed some of them here.

2. Mix with the community outside the comfort zone of your own cultural group. Get to know people, get to understand the culture, the history, the literature, the popular culture as well as the language.

3. Recognize the length of time it takes to learn academic English, and keep working at it.

4. Understand that in North American educational culture, much of the learning is self-practice and comes through self-study, research, doing projects and completing homework. Your work is not done for the day when the last class ends. Put in the time, and be patient when it takes you longer than it may take domestic classmates.

5. Hire a tutor who can help you with high-level reading and writing, but do the work yourself. Teachers of a class of 30 students all with different needs cannot help each student individually the way a tutor can help you. Take responsibility for your own learning.

6. Keep a reading log, where you take notes on articles and books you have read, and review it.

7. Keep a vocabulary journal and add several new words to it every day. Keep reading, and learn new words from your reading. The English vocabulary is huge, and it keeps growing.

How Do Political Choices Affect ESL in Education?

Given the impact large numbers of ESL students are having on the education systems of the USA, Canada, the UK and Australia, the task of teaching ESL students in mainstreamed classes effectively is huge. It requires coordinated effort and resources from within and from outside the schools. What do the school boards and governments in your district need to do? The British Columbia Teachers' Federation recognizes that teachers alone cannot solve the challenges these students face. Here are four recommendations from the Federation:

1. Remove the funding cap. Currently, ESL students are allowed to study ESL for 5 years before language support is no longer funded for them.
Target the funding, so it is used for ESL programs and not spent in the general budget. 2. Simplify the paperwork and auditing so ESL teachers can spend more time preparing classes and working with students rather than meeting complex requirements of bureaucracy. 3. Reduce caseloads so teachers have fewer ESL students in their mainstream classes. When as many as one-third of the class are not native speakers of English, it is difficult to teach the curriculum to the standard domestic students need, because so many of the class fall further and further behind due to insufficient language skills. 4. Allow second language academic credit for ESL learners, as domestic students get credit for beginner German, Spanish, French, Japanese or other languages for which curriculum has been determined. As it stands, many ESL students face pressure from their families and from themselves to "get out of ESL" and into academic mainstream classes even if their academic language skills make success unlikely. How Can We Address the Challenges of Teaching ESL? In many English-speaking countries, birth rates are stable or declining, and population growth is fuelled by immigration. Immigrants bring families whose children join the public schools. In addition, many international students join high schools and universities to learn English and earn academic credits and degrees that will help build their careers in their home countries. They all have a backstory. They bring with them a wealth of personal history and cultural information that has the potential of enriching the host countries' societies, schools and citizens. As much as they are students of English, they are teachers of the multi-cultural richness of wisdom and values that can become a foundation for civic dialogue in a global village.
Drone Music is made from long, sustained tones which are added to or subtracted from over time. Listen to these sounds: Listen beyond the drums; can you hear the constant low tone of the bagpipes? Can you hear the constant tone which is improvised around? Behind the fast notes is a constant tone. This is an Indian instrument. Just as with the bagpipes, try to listen beyond the high sounds and hear the constant underlying tone in the background. Can you spot the similarities between these sounds? What do they all have in common?

Drones in Music

Drones are found in different types of music from across the world. They are sustained or repeated tones, which ring out for extended periods of time. They can evolve slowly and create rich and complex textures. Drones are found in Aboriginal music, Indian music, Japanese Gagaku, and even Western music. Famous examples of drone instruments are the bagpipes and the didgeridoo. In Canada, Drone Music is celebrated with a whole day dedicated to it: ‘Drone Day’. (Their website also contains a large selection of links to examples of Drone Music.)

Function in Music

Drones often serve as a strong foundation upon which other sounds can be built and added. They provide a fixed point of reference against which other sounds can relate, without needing to enforce a defined tonal structure. This gives musicians and composers flexibility and freedom to improvise and play along over the drone.

Drones and Electronic Music

Electronic instruments and oscillators are drone-making machines. The pitched tones that they create are perfect for making Drone Music. As synthesisers and drone-making electronics became more available in the 1970s, a whole wave of composers, such as Pauline Oliveros, began to create Drone Music from electronic sounds. We know that drones can be created using instruments like oscillators and other electronic instruments, but drones can also be created by editing sound files.
The more we stretch the sound, the more ‘drone-like’ it becomes. By stretching different, short sounds, we can create an array of drone textures that can be combined to make a drone piece. Filtering, delay and reverberation can also be used to sculpt and modify the qualities of these sounds, blending them together. Use the following transformations to make and edit drones:
- Time-Stretching: a manipulation in which the duration of a sound is altered. Time-stretching can be used to make sounds longer or shorter.
- Filter: a filter changes the frequency makeup of a sound by making parts of it weaker. Filters allow you to focus on parts of a sound that are of interest to you, or to take away parts that you don't like.
- Delay: a process in which an input signal is looped and repeated.
- Reverb: the multiple short reflections of sound that give humans an immediate impression of space. Reverb effects can be used to impart a sense of space onto recorded or generated sounds.
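To make these four transformations concrete, here is a minimal sketch in Python (standard library only) that turns a short synthetic tone into a drone. The function names and parameter values are our own illustrative choices, not taken from any particular audio tool; real editors would use proper resampling and convolution reverb, while this naive version just shows the idea of each process.

```python
import math

SR = 44100  # sample rate in Hz

def time_stretch(x, factor):
    """Naive stretch by linear interpolation; factor > 1 lengthens the sound
    (and, like slowed-down playback, lowers its pitch)."""
    n = int(len(x) * factor)
    out = []
    for i in range(n):
        pos = i * (len(x) - 1) / (n - 1)  # position in the original signal
        lo = int(pos)
        hi = min(lo + 1, len(x) - 1)
        frac = pos - lo
        out.append(x[lo] * (1 - frac) + x[hi] * frac)
    return out

def low_pass(x, alpha=0.05):
    """One-pole low-pass filter: weakens high frequencies, darkening the sound."""
    y, acc = [], 0.0
    for s in x:
        acc += alpha * (s - acc)
        y.append(acc)
    return y

def delay(x, seconds=0.25, feedback=0.5):
    """Mix the signal with delayed, decaying copies of itself."""
    d = int(SR * seconds)
    y = list(x)
    for i in range(d, len(y)):
        y[i] += feedback * y[i - d]
    return y

def reverb(x, taps=((0.029, 0.6), (0.037, 0.5), (0.041, 0.4))):
    """Crude reverb: a sum of several short echoes at slightly different offsets,
    giving a rough impression of reflections in a space."""
    y = list(x)
    for sec, gain in taps:
        d = int(SR * sec)
        for i in range(d, len(y)):
            y[i] += gain * x[i - d]
    return y

# A quarter-second 220 Hz tone, stretched 8x into a drone, then sculpted.
tone = [math.sin(2 * math.pi * 220 * i / SR) for i in range(SR // 4)]
drone = reverb(delay(low_pass(time_stretch(tone, 8.0))))
```

The resulting `drone` list could be written out as a WAV file (for example with Python's `wave` module) and layered with other stretched sounds to build up a piece.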
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
When dental emergencies and pain occur, our attention is often focused on diseases and injuries related to the teeth. However, it's important to remember that the soft tissues of the mouth — the gums, tongue, lips and cheek lining — may also be affected. While they are tough enough to stand up to the oral environment, these tissues can be damaged by accidental bites, falls, sports injuries, and scalding liquids. They may also suffer injury from foreign bodies that become lodged below the gum line, and they can develop painful and potentially serious abscesses. First Aid for Soft Tissues Soft tissue injuries in the mouth don't usually bleed excessively — although blood mixing with saliva may make any bleeding appear worse than it actually is. To assist someone with this type of injury, you should first try to rinse the mouth with a dilute salt water solution. If a wound is visible, it can be cleaned with mild soap and water; if that isn't possible, try to remove any foreign material by hand, and rinse again. Bleeding can usually be controlled by pressing damp gauze (or, if unavailable, another clean material) directly to the site of the injury, and keeping it there for 10-15 minutes. If the bleeding doesn't stop, immediate medical attention will be needed. Try to see a dentist within 6 hours of the injury for evaluation and treatment. This usually involves determining the extent of the damage, performing initial restorative procedures, and occasionally suturing (stitching) the wound. An antibiotic and/or tetanus shot may also be given. Occasionally, foreign objects may become lodged in the space between teeth and gums, causing irritation and the potential for infection. There are a few foods (such as popcorn husks) that seem especially prone to doing this, but other items placed in the mouth — like wood splinters from toothpicks or bits of fingernail, for example — can cause this problem as well. 
If you feel something stuck under the gum, you can try using dental floss to remove it: gently work the floss up and down below the gum line to try to dislodge the object. Light pressure from a toothpick may also help work it free — but avoid pressing too hard or pushing the object in deeper. If that doesn't work, see a dentist as soon as possible. Special tools may be needed to find and remove the object, and you may be given medication to prevent infection. Periodontal (Gum) Abscesses Sometimes called a gum boil, a periodontal abscess is a pus-filled sac that may form between teeth and gums. It is caused by an infection, which may have come from food or other objects trapped beneath the gum line, or from uncontrolled periodontal disease. Because pressure builds up quickly inside them, abscesses are generally quite painful. Symptoms may include a throbbing toothache which comes on suddenly, tenderness and swelling of the gums or face, and sometimes fever. Occasionally, pus draining into the mouth through an opening in the sac relieves the pressure and pain, but may cause a strange taste. If left untreated, abscesses can persist for months and cause serious health problems, including infections that spread to other parts of the body. That's why it is important to see a dentist right away if you experience symptoms. He or she will find the location of the abscess and treat it appropriately. Treatment usually involves draining the pus and fluid, thoroughly cleaning the affected area, and controlling the infection. The Field-Side Guide to Dental Injuries Accidents to the teeth, jaws and mouth can happen at any time during any sporting activity. Proper attention can prevent pain, alleviate anxiety and avoid costly dental treatment. A little knowledge, as they say, can go a long way. This field-side guide briefly explains some simple rules to follow when dealing with different dental injuries and when you need to see the dentist.
KUTZTOWN UNIVERSITY ELEMENTARY EDUCATION DEPARTMENT PROFESSIONAL SEMESTER PROGRAM LESSON PLAN FORMAT

Teacher Candidate: Paige Halligan
Cooperating Teacher: Claire Kempes
Group Size: 21 Students
Allotted Time: 45 minutes
Subject or Topic: Social Studies – Harriet Tubman and Freedom Quilt
Date: 02/24/15
Coop. Initials: ________________
Grade Level: 1st
Section: EEU 390-045

STANDARD (PA Common Core):
5.1.1.C: Define equality and the need to treat everyone equally.
8.3.1.A: Identify Americans who played a significant role in American history.
CC.2.3.1.A.1: Compose and distinguish between two- and three-dimensional shapes based on their attributes.
9.1.3.A: Know and use the elements and principles of each art form to create works in the arts and humanities.

I. Performance Objectives (Learning Outcomes)
Students will construct a patch map for the class quilt that encodes a message or direction to reflect their understanding of the hidden messages of the Underground Railroad.

II. Instructional Materials
o Overhead projector
o Transparencies
§ Harriet Tubman Facts transparency
§ Quilt code transparency
o The Patchwork Path: A Quilt Map to Freedom by Bettye Stroud
o Quilt squares (one per student; choice made by student)
• Bear Paw (15)
• Crossroads (15)
• Monkey Wrench
o Construction paper of a variety of colors
o Desk protectors (one per student)
o Toolkits (one per student)
§ Scissors
§ Color pencils
§ Crayons
§ Glue
§ Eraser
o Harriet Tubman facts exit slips (one per student)

III. Subject Matter/Content (prerequisite skills, key vocabulary, big idea)
Prerequisite Skills
• Engage in active listening during a read-aloud
• Ability to empathize with others
• Fine motor abilities to color, trace, and cut art materials
• Collaborate and discuss within small and large group instruction
• Basic knowledge and awareness about slavery
Key Vocabulary
• Underground Railroad – A group of people who helped slaves escape to the North away from slavery in the South.
• Safe House – Homes and businesses that housed escaping slaves who were headed North.
• Quilt Code – The idea that African American slaves used quilts to communicate information about how to escape to freedom.
Big Idea
• Conductors like Harriet Tubman and everyday objects like quilts had ways of helping enslaved people head North to freedom during the time of slavery.

IV. Implementation
A. Introduction –
1. The teacher will begin the lesson by asking if any students have ever heard of the Underground Railroad before.
2. After a few students' responses, the teacher will confirm with the students that the Underground Railroad was a group of people who helped slaves escape to the North away from slavery in the South. The teacher will further explain that one of the conductors within the Underground Railroad was a woman by the name of Harriet Tubman.
3. The teacher will project a few facts about Harriet Tubman on the overhead projector and, prior to showing the answers, ask students whether each is "true" or "false". The teacher will take a vote of students' responses before projecting the answers.
4. After an introduction to Harriet Tubman and how she dedicated her life to working on the Underground Railroad, students will be introduced to the book The Patchwork Path: A Quilt Map to Freedom and be led to the back area for a read-aloud.
B. Development –
1. As the teacher reads the book, the teacher will ask guided questions throughout the reading.
2. Upon completion of the book, the teacher will inform students that they will be creating their very own patches with hidden messages on them to create a class "Freedom Quilt". The teacher will instruct students to return to their desks for further instruction.
3. The teacher will explain to the class that they will be making a class Freedom Quilt by each of them independently coloring and decoding the patches.
Each table group will be called to go to the back table, where there will be three different patches to choose from (Bear Paw, Crossroads, Monkey Wrench) and a definition of what each stood for, taken from the book, displayed on a large piece of paper. Once students choose their patch, they will return to their desks, take out their desk protectors and toolkits, and color in the patterns on their patches. Upon completing the coloring of the patch, they will report to the teacher for a backing piece (yellow or blue) for their patch. On that larger backing piece, students will write the name of their coded patch, the secret code and the meaning of the code, and their name on the back.
4. The teacher will allow 15 minutes of working time for students to create.
5. After completing the paper patch, students who are finished will show their patches to other students who are also finished and see if they can figure out the hidden code on the patch.
6. After 15 minutes of working time, the students will be asked to put away their toolkits.
7. The teacher will distribute Harriet Tubman facts exit slips and instruct students to list at least two to three facts about Harriet Tubman, with their names at the top.
C. Closure –
1. The teacher will ask students how they might use an everyday object or action to give secret messages to others who are in trouble or need help.
2. As students respond, the teacher will collect the exit slips and finished quilt patches. Students who are still completing their patches can finish their work during a free time period, during indoor recess, or at home.
D. Accommodations / Differentiation –
• For students with visual impairments, an enlarged copy of all the transparencies will be provided for the student to see, and the teacher will have it previously filled out with the intended traits and evidence prior to teaching the lesson.
Also, preferential seating close to the screen will ensure optimal visibility for the student throughout the lesson.
• For students who have difficulty focusing during lessons, a guided sticky note with questions on it will be provided to keep them focused on an end result while listening to the story.
• For students with fine motor difficulties, tracing patterns will be provided so that students can color in the already spaced-out shapes, and they will be asked to identify the shapes prior to coloring.
E. Assessment/Evaluation Plan
1. Formative
Freedom Quilt Patches – The teacher will evaluate the completeness and accuracy of the hidden code to determine whether or not the student understood the purpose of the quilt codes that were used in the Underground Railroad. If the patch contained coloring of the pattern, the name of the patch, and the meaning of the code, the student met the objective of this formative assessment.
Exit Slips – The teacher will evaluate whether or not students could recall facts about Harriet Tubman that were taught in today's lesson by reviewing their written factual responses on the exit slips.

V. Reflective Response
A. Report of Students' Performance in Terms of Stated Objectives
All students met the objective of this lesson and grasped the concept of hidden messages in the quilt patches during the time of slavery of Black Americans.
B. Personal Reflection
Will this lesson's concept of hidden messages be developmentally appropriate or too difficult? I taught this lesson with careful consideration of the delivery of instruction (through the facts sheet, read-aloud, and assessment). Students at this age level can only learn and perform in certain ways. Originally, I was going to have students draw out and create their own patches with their own hidden messages. But, after considering what students can do at this point in first grade, I realized that would not have been developmentally appropriate.
At this grade, students are still developing their fine motor skills, so holding a ruler and drawing lines and shapes would be too difficult a task, and they would not have met the objective in one day. Also, thinking up a hidden message and drawing a corresponding picture would be a great idea, but perhaps for an older grade level; the idea is too abstract for most students at this grade level. Providing them with the information and tasks the way I did was much more achievable. This lesson was a great way to incorporate Mathematics and Art within Social Studies. Students really enjoyed learning about how Black Americans, during this time of slavery, had a secret code that gave them a way to communicate with other slaves who knew the code. The students thought that Harriet Tubman was like a "super hero" because she saved so many people and had many jobs in her lifetime. Students really grasped the concept behind the purpose of the codes in the patches and understood that slaves had to escape to the North in order to be recognized as free citizens. This integration of art also gave students who have difficulty expressing their thoughts and ideas in written language an opportunity to show what they know through art, and I allowed these students to verbally tell me the meaning of the patch prior to writing it on the back. The frustration level was almost nonexistent for these students, and two of them said that learning about the Underground Railroad was their favorite lesson so far (but they still "really like" the Ruby Bridges and Rosa Parks lessons).

VI. Resources
Stroud, B., & Bennett, E. (2005). The patchwork path: A quilt map to freedom. Cambridge, Mass.: Candlewick Press.
St. James United Church. (2009). Freedom quilts. Retrieved from http://www.stjamesunitedchurchmontreal.com/freedomquilts.php
Shift Students’ Roles from Passive Observers to Active Participants. Preparing students for a world that did not exist when they were students themselves can be challenging for many teachers. Engaging students, particularly disinterested ones, in the learning process is no easy task, especially when easy access to information is at an all-time high. How then do educators simultaneously ensure knowledge acquisition and engagement? Ron Nash encourages teachers to embrace an interactive classroom by rethinking their role as information givers. The Interactive Classroom provides a framework for how to influence the learning process and increase student participation by sharing • Proven strategies for improving presentation and facilitation skills • Kinesthetic, interpersonal, and classroom management methods • Brain-based teaching strategies that promote active learning • Project-based learning and formative assessment techniques that promote a robust learning environment Intended to cultivate an interactive classroom in which students take an active role in learning, this book provides a blueprint for educators seeking to amplify student engagement while imparting critical twenty-first century skills.
Oxidative stress occurs when the production of reactive oxygen species is greater than the body's ability to detoxify the reactive intermediates. This imbalance leads to oxidative damage to proteins, other molecules, and genes within the body. Since the body cannot keep up with the detoxification of the free radicals, the damage continues to spread.

What Is Oxidative Stress?

Free radicals occur naturally within the body, and for the most part, the body's natural antioxidants can manage their detoxification. But certain external factors can trigger the production of these damaging free radicals. These factors include:
• Excessive exposure to UV rays
• Eating an unhealthy diet
• Excessive exercise
• Certain medications and/or treatments

How Do Antioxidants Counteract Oxidative Stress and Free Radicals?

The body naturally produces antioxidants like superoxide dismutase, catalase, and an assortment of peroxidase enzymes as a means of defending itself against free radicals. These antioxidants neutralize free radicals, rendering them harmless to other cells. Unfortunately, the antioxidants produced naturally by the body are not enough to neutralize all of the free radicals in the body. Therefore, a constant supply of external sources of antioxidants should be part of one's daily diet in order to reduce oxidative stress and related damage. Antioxidants have the remarkable ability to repair damaged molecules by donating hydrogen atoms to them. Some antioxidants even have a chelating effect on free radical production catalyzed by heavy metals: the antioxidant binds the heavy metal so strongly that the chemical reaction necessary to create a free radical never occurs. When the chelating antioxidant is water-soluble, it also promotes the removal of the heavy metals from the body via the urine.
Flavonoid antioxidants actually attach themselves to one's DNA, forming a barrier of protection against free radical attacks, while some antioxidants even have the ability to cause some types of cancer cells to self-destruct in a process called apoptosis.

Which Antioxidant Works Best Against Oxidative Stress?

Astaxanthin is considered nature's most powerful antioxidant. It has an especially high propensity for absorbing the excess energy from singlet oxygen, releasing it as heat, and returning the oxygen (and itself) to its original state. This process is known as "quenching." Natural sources of astaxanthin are numerous, but nearly all sources of the pigment have very low concentrations. The red-colored alga Haematococcus pluvialis, however, provides the most concentrated natural source of astaxanthin known, from 10,000-40,000 ppm (mg/kg). This alga also offers a rich array of other important carotenoids such as beta-carotene, lutein, and canthaxanthin. Astaxanthin is therefore one of the most complete antioxidant sources available, and one of the most effective against oxidative stress and free radicals.

The information provided is for educational purposes only and does not constitute medical advice. Always seek the advice of your physician or qualified healthcare provider with any questions or concerns about your health. Check with your doctor before beginning any exercise program. Never disregard or delay seeking medical advice because of something you have heard or read in this article or on the internet.
The military situation during the Cold War period

Sweden was not occupied by German or other foreign forces during the Second World War. This was probably one of the main reasons for its exceptional prosperity after the war, backed up by American economic support (the Marshall Plan). The Swedish military situation during the Cold War period was characterized by:
- Political neutrality.
- A strategic location in the front line with the Soviet Union.
- A very strong coastal defence force and a strong air force. This contributed to a deterrent effect, an official Swedish defence policy which aimed at preventing a Soviet attack.
- Preparation for total war in case of an attack from the Soviet Union.
- Self-sufficiency in most military equipment, such as cannons and aircraft.

Neutrality, but close contacts with the United States and NATO

It would have been impossible for Sweden to remain neutral in the case of a war between NATO and the Warsaw Pact. In fact, the Swedish military forces were openly directed against the Soviet Union. The only realistic and successful possibility of defending Sweden in case of war rested on support from the United States (and hence also nuclear support). Military ties with NATO were therefore kept strictly secret by the Swedish government because of its official policy of neutrality.

Sweden's strategic location

Most Soviet attack plans throughout the Cold War period involved the Nordic countries because of the Soviet desire for military control of and access to the North Atlantic. Another reason for the Soviet Union's interest in controlling access to the Baltic Sea was the many shipyards that could repair the Warsaw Pact's ships in case of war. It should also be noted that half of the border between Western Europe and the Soviet countries was formed by Sweden.
Control of this border led to many confrontations between Swedish and Soviet aircraft over the Baltic Sea, including the downing of a Swedish surveillance aircraft in 1952 (the DC-3 affair). Other confrontations involved Soviet submarines. One of these submarines, the U137, ran aground in 1981 inside the restricted zone of the Karlskrona naval base, resulting in a political crisis between Sweden and the Soviet Union.

A strong coastal defence and a strong air force

The eastern coast of Sweden, along a length of more than 1500 kilometres, probably had the most powerful coastal defence system in the world. The system consisted of coastal artillery, submarines, battleships and aircraft. No fewer than 90 heavy cannons (typically 7.5 cm cannons) with large underground facilities were strategically located along the coast, together with a large number of bunkers and pillboxes. For a long time Sweden had the fourth largest air force in the world, with no fewer than 30 bases and a large number of smaller hangars, mainly connected to motorways that could be used as runways in case of war. One of the main tasks of the Swedish air force was to hinder attacks by Soviet antisubmarine aircraft against NATO submarines carrying nuclear missiles in the Baltic Sea.
Good morning everyone! Happy Friday! We hope you have enjoyed learning about plants this week and creating your own dream garden yesterday. To finish off the week you will need your persuasive skills! We would like to see how you are getting on, so remember you can send us your work on our class email addresses below:
Miss Harcourt: [email protected]
Mrs Walters: [email protected]
From Miss Harcourt and Mrs Walters 😊

English: Email - Persuasive Writing
Audience: Your boss
Format: An email
Topic: Describe why you need a holiday. Explain all the work you do.
Scientific vocabulary you could include: stem, water, minerals (nutrients), sugar (glucose), transport, upright.
First, watch the clip to remind you about stems: https://www.bbc.co.uk/bitesize/topics/zy66fg8/articles/zcxh4qt. Also read back over your plant parts work from earlier in the week.
Your task: Imagine you are the stem of a plant. Write an email to your boss describing why you need a holiday. Explain clearly all the work you do using scientific vocabulary. Watch the clip to recap persuasive techniques: https://www.bbc.co.uk/teach/class-clips-video/english-ks1-ks2-how-to-write-a-persuasive-text/zkcfbdm
Remember you are writing to persuade your boss to give you time off work, so try to use the persuasive sentence starter sheet to help you (see attached/below).

Maths: This week you are going to use your knowledge of the four operations to consider specific properties of numbers. Today you are going to investigate what effect brackets can have on a calculation.

Topic: Botanical hammer printing! (Adult supervision compulsory)
You must get permission from parents/carers to complete this task and you need adult supervision throughout. Instead of a hammer you can try the back of a metal spoon. Botanical hammer printing is a fun way to capture the colours of a season. It's super easy, oh, and did we mention noisy! It scores 10/10 with kids!
You will need:
To begin, arrange the petals and leaves onto a piece of paper/cloth.
If you want to turn the finished print into a greetings card, simply fold the paper in half first and then lay the leaves on the front. Now for the noisy bit: gently lay a piece of fabric, e.g. a kitchen towel or old tea towel (slightly bigger than the paper), over the top of the leaves. Slowly but firmly, hammer the surface until the specimens begin to release their stain. Try to keep the paper underneath steady, but be sure to keep little fingers well clear of the hammer. Adults must supervise this part, or, if they think it appropriate, complete this part themselves; if parents/carers are happy, they could just offer an extra helping hand to safely guide the hammer.

Reading: Please do your daily reading and record it in your reading record. You can now also go on https://www.myon.co.uk/login/index.html to do your online reading and quizzes too! We can see how many books you have read and how you are doing on your quizzes. So go on, let's get reading!

Good luck with your spelling test today! If you get any wrong, keep practising them!
Year 3 / 4
Year 5 / 6
According to this article by Jo Boaler — professor of mathematics education at Stanford and co-founder of www.youcubed.org — math memorizers scored poorly on the international PISA test, and the U.S. has more memorizers than most other countries in the world. The highest-achieving students internationally were those who thought of math as a set of connected, big ideas. Here's what we see:

1. A visual approach to fractions gives students better number sense, and better access to word problems.

When we require drawing, every problem becomes a word problem. In the problem below, all students recognized that 1/2 is 6 out of 12, visually. This is a "12-peak Toblerone", so a total of 17 twelfths (by simply counting!). Then this student imagined moving one twelfth from the top row to make the second row equal to one, leaving 5/12 on top. This shows number sense! Our students can do fraction addition and subtraction mentally. More importantly, visualization helps facilitate the transfer to word problems, as below.

Egyptian fractions: We spent a few days answering word problems by building fractions with Cuisenaire rods. Here, for example, is a TWELVE-WIDE wall: one fourth — the light green rod — is called one fourth because four of them fit in a whole. The purple rod is called one third because three of them fit, the red is 1/6, etc. This student had no trouble finding a way to make 11/12 with Egyptian fractions. After long exposure to physical representations, word problems become easier. This problem, for example, would be difficult to do with algorithms.

How about this problem: Erin and Kana went shopping for groceries. Each of them had an equal amount of money at first. Then Erin spent $80 and Kana spent $128. After that, Kana had 4/7 of what Erin had left. How much money did Erin have left after shopping? Solve by drawing a fraction model. This is very difficult to do without algebra. Try it yourself before looking at the answer here.
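An Egyptian-fraction decomposition can also be computed mechanically. The greedy method (often attributed to Fibonacci) repeatedly takes the largest unit fraction that still fits; it is not necessarily the rod-based approach the students used, but this small Python sketch reproduces the 11/12 example:

```python
from fractions import Fraction

def egyptian(frac):
    """Greedy (Fibonacci) method: repeatedly subtract the largest
    unit fraction that still fits, until nothing is left."""
    parts = []
    while frac > 0:
        # exact integer ceiling of denominator / numerator
        d = -(-frac.denominator // frac.numerator)
        parts.append(Fraction(1, d))
        frac -= Fraction(1, d)
    return parts

print(egyptian(Fraction(11, 12)))  # [Fraction(1, 2), Fraction(1, 3), Fraction(1, 12)]
```

So 11/12 = 1/2 + 1/3 + 1/12, exactly the wall a student can build with one half rod, one third rod, and one twelfth rod.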
Once you see the solution, it'll make sense, and all of this will transfer to stronger algebra students in three years.

2. A visual approach to math is the ONLY approach that works for some students.

In the past, visual learners struggled with the algorithmic manner in which math was taught. (Challenge: randomly survey a couple dozen adults – we predict almost 1/3 of them will say they were 'never very good at math'.) However, in the past, there were good middle-class jobs available to high school graduates – jobs that are now disappearing. It is our duty to make math accessible to ALL students. The good news is that requiring visualization of math also benefits the innately abstract math learners. Visualization skills help students in Chemistry, Physics, Trigonometry, and the other STEM subjects these students gravitate towards. Here's an article about visualization in physics.
In this tutorial, we'll study the meaning of the correlation coefficient in correlation analysis. We'll first start by discussing the idea of correlation in general between variables. This will help us understand why correlation analysis was developed in the first place. Then, we'll learn the two main techniques for correlation analysis, Spearman's and Pearson's correlation. In relation to them, we'll see both the mathematical formulation and their applications. Lastly, we'll summarize what we can infer about a bivariate distribution on the basis of the values of its correlation coefficients. At the end of this tutorial, we'll have an intuitive understanding of what the correlation coefficients represent. We'll also be able to choose between Spearman's and Pearson's correlation in order to solve the concrete tasks we're facing.

2. Correlation in General

2.1. The Idea of Correlation

Correlation analysis is a methodology for statistical analysis dedicated to the study of the relationship between random variables. It's also sometimes called dependence, because it relates to the idea that the value of one variable might depend on another. We're soon going to study the mathematical definition of correlation. But first, it's better to get a general idea of what correlation means intuitively. We can do so with a small example. We know that, in general, the weight and height of an individual tend to go together: the taller a person is, the higher their weight tends to be. This leads us to hypothesize that the relationship between weight and height might be characterized by dependence. In order to test this hypothesis, though, we need some kind of measure or index for the degree of dependence between two variables. That measure is what we call correlation.
We might, in fact, imagine that the relationship of dependence can either be strong or weak, or be completely absent. The index we need should let us distinguish at a glance between those cases, on the basis of the value that it possesses.

2.2. Correlation and Not Causation

Correlation is an important tool in exploratory data analysis because it allows the preliminary identification of features that we suspect not to be linearly independent. It's also important in the identification of causal relationships, because there are known methodologies for testing causality that use correlation as their core metric. There's a common saying among statisticians: correlation doesn't imply causation. The idea behind this is that, for two variables that are correlated, a relationship of causality can't be taken for granted. The implication to which we refer here is the so-called material implication in propositional logic. We can, therefore, use the rules for working with implications in order to formalize its expression. If C identifies correlation and K identifies causation, then this statement formally reads ¬(C → K). We studied in our article on Boolean logic that we can rewrite an expression of the form C → K as ¬C ∨ K. This means that we can convert the expression ¬(C → K) to ¬(¬C ∨ K). If we use De Morgan's law on the terms between brackets, we then obtain C ∧ ¬K, which is true if there is correlation but there isn't causation.

2.3. When Does Correlation Not Imply Causation?

There are some common logical mistakes that may lead us to think that there's causation between two variables as a consequence of the observed correlation. These mistakes take the name of fallacies and frequently lead to a wrong understanding of what correlation actually represents. The first fallacy corresponds to the incorrect identification of which variable causes the other.
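This propositional manipulation is easy to check mechanically. Writing C for correlation and K for causation, the following plain-Python sketch enumerates all truth assignments and confirms that ¬(C → K) is equivalent to C ∧ ¬K:

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

# check the equivalence for every truth assignment of C and K
for c, k in product([False, True], repeat=2):
    assert (not implies(c, k)) == (c and not k)

print("not(C -> K) is equivalent to C and not K for all truth assignments")
```

The exhaustive check works here because a two-variable proposition has only four possible truth assignments.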
If X and Y are variables whose causal relationship we test, and if the true causal relationship has the form X → Y while simultaneously implying a high correlation ρ(X, Y), then ρ(Y, X) will also necessarily be high, since correlation is symmetric. This means that we might incorrectly end up identifying Y as the cause and X as the consequence of the causal relationship, and not the other way around. An example of this might be the following. We note a strange behavior: immediately upon putting food into a bowl, a cat comes, meows, and eats it. We might be tempted to infer that the filling of the bowl causes a cat to appear, since the correlation between these events would be very high. In this case, though, we would be surprised if, not owning a cat, a cat fails to appear after we fill a bowl. In this example, we are incorrectly assigning antecedence in the relation of causality to the filling of the bowl. Instead, we should assign it to the presence of the cat, which causes us to fill the bowl in reaction to its presence.

2.4. A Third Factor

A second fallacy relates to the incorrect inference of a relationship of causality between two events A and B that are both consequences of a third unseen event Z. This corresponds, in formal notation, to the expression (Z → A) ∧ (Z → B). The classic example of this fallacy is the relationship between the arrival of passengers and that of a train at a platform in a train station. If we knew nothing about timetables and their operation, we might end up thinking that the arrival of passengers causes the train to appear. In reality, though, a third factor, the scheduling of the passenger train, causes both the passengers and the train to appear at the appointed time.

2.5. Causation and Not Correlation

A less frequently discussed question is whether the presence of a causal relationship between an independent and a dependent variable also implies correlation. In formal notation, writing K for causation and C for correlation, we can express this question with the proposition K → C.
There certainly are causally-dependent variables that are also correlated with one another. In the sector of pharmacology, for example, research on adverse drug reactions has developed extensive methodologies for assessing causality between correlated variables. In the sector of pedagogy and education, similar methods also exist to assess causality between correlated variables, such as family income and student performance. As a general rule, however, causality doesn't imply correlation. This is because correlation, especially Pearson's as we'll see shortly, measures only one type of functional relationship: linear relationships. While Spearman's correlation fares slightly better, as we'll see soon, it still fails to identify non-monotonic relationships between variables. This means that, as a general rule, we can infer correlation from causality only for linear or monotonic relationships. 3. Pearson's Correlation Coefficient 3.1. Introduction to Pearson Correlation We can now get into the study of the two main techniques for calculating the correlation between variables: Pearson's and Spearman's correlations. Pearson's correlation is the oldest method for calculating dependence between random variables, and it dates back to the end of the 19th century. It's based upon the idea that a linear regression model may fit a bivariate distribution with varying degrees of accuracy. Pearson's correlation thus provides a way to assess the fit of a linear regression model. It's also invariant under scaling and translation. This means that Pearson's correlation is particularly useful for studying the properties of hierarchical or fractal systems, which are scale-free by definition. 3.2. Mathematical Definition of Pearson's Correlation We can define Pearson's correlation coefficient between two random variables X and Y with components x_i and y_i as the covariance of X and Y, divided by the product of their respective standard deviations: r = cov(X, Y) / (σ_X σ_Y) = Σ_i (x_i − x̄)(y_i − ȳ) / (√(Σ_i (x_i − x̄)²) · √(Σ_i (y_i − ȳ)²)). Here, x̄ and ȳ indicate the averages of the two variables.
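The definition translates directly into code. Here is a minimal sketch (the function name is ours) that computes Pearson's r as the covariance of the two variables divided by the product of their standard deviations, using NumPy:

```python
import numpy as np

def pearson(x, y):
    # Covariance of x and y divided by the product of their standard deviations
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

# A perfect linear relationship with positive slope gives r = 1,
# regardless of the slope's magnitude:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
r_steep = pearson(x, 5.0 * x + 2.0)
r_flat = pearson(x, 0.1 * x - 7.0)
```

Both `r_steep` and `r_flat` evaluate to 1 up to floating-point error, illustrating the invariance under scaling and translation mentioned above.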
The correlation coefficient assumes a value in the closed interval [−1, 1], where 1 indicates maximum positive correlation, 0 corresponds to a lack of correlation, and −1 denotes maximum negative correlation. In our article on linear regression, we also studied the relationship between this formula and the regression coefficient, which provides another way to compute the same correlation coefficient. 3.3. Possible Values We're now going to study the possible values that the correlation coefficient can assume, and observe the shape of the distributions that are associated with each value. For r = 0, the two variables are uncorrelated. This means that the value assumed by one variable generally doesn't influence the value assumed by the other. Uncorrelated bivariate distributions generally, but not necessarily, assume their typical "cloud" shape. If we spot this shape when plotting a dataset, we should immediately suspect that the distribution isn't correlated. For r = 1, the variables are strongly positively correlated. Any bivariate distribution that can be fitted perfectly by a linear regression model with a positive slope always has a correlation coefficient of 1. Intuitively, if we see that the distribution assumes the shape of a line, we should know that the absolute value of the correlation is high. The sign of the slope, then, determines the sign of the correlation. For this reason, a correlation value of r = −1 implies a linear distribution with negative slope. Most distributions, however, don't have a perfect correlation value of −1, 0, or 1. They do, however, tend toward either a cloud shape or a line shape as their correlation approaches those values. Here are some other examples of distributions with their respective correlation coefficients: 3.4. Interpretation of Pearson's Correlation Coefficient There's a typical mistake that many make in interpreting the Pearson correlation coefficient.
This mistake consists of reading it as the slope of the linear regression model that best fits the distribution. The pictures above show that, for variables that span a line perfectly, the correlation coefficient is always 1 regardless of that line's slope. This means that the correlation coefficient isn't the slope of a line. It is, however, a good predictor of how well a linear regression model would fit the distribution. In the extreme case of |r| = 1, a linear regression model would fit the data perfectly, with an error of 0. In the extreme case of r = 0, no linear regression model will fit the distribution well. 4. Spearman's Rank Correlation 4.1. Introduction to Spearman A more refined measure for determining correlation is the so-called Spearman rank correlation. This correlation is normally indicated with the symbol ρ and can assume any value in the interval [−1, 1], like Pearson's. This correlation coefficient was developed to obviate a problem that Pearson's correlation possesses: when considering distributions that are strongly monotonic but non-linear, Pearson's coefficient doesn't necessarily correspond to ±1. Spearman's coefficient solves this problem, allowing us to identify monotonicity in general, and not only the specific case of linearity in the bivariate distribution. 4.2. Mathematical Definition of Spearman's Correlation Unlike Pearson's coefficient, ρ is non-parametric and is calculated on ranks rather than on the variables themselves. The rank of a variable consists of the replacement of its values with the position that each value occupies in the sorted variable. If, for example, we want to calculate the rank of a variable, we should first sort it. The rank is then computed by replacing each original value of the variable with its position in the sorted one; for instance, the rank of (10, 30, 20) is (1, 3, 2). The Spearman correlation coefficient of X and Y is then calculated as: ρ = cov(rank(X), rank(Y)) / (σ_rank(X) σ_rank(Y)), where cov(rank(X), rank(Y)) is the covariance of the two rank variables and σ_rank(X) σ_rank(Y) is the product of their respective standard deviations. 4.3.
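Following this definition, a minimal sketch (the function names are ours, and it assumes no tied values, which would otherwise require averaged ranks) computes Spearman's ρ as Pearson's correlation applied to the ranks:

```python
import numpy as np

def rank(v):
    # Replace each value with its 1-based position in the sorted variable
    v = np.asarray(v)
    ranks = np.empty(v.size)
    ranks[np.argsort(v)] = np.arange(1, v.size + 1)
    return ranks

def spearman(x, y):
    # Pearson's correlation applied to the ranks of x and y
    rx, ry = rank(x), rank(y)
    cov = np.mean((rx - rx.mean()) * (ry - ry.mean()))
    return cov / (rx.std() * ry.std())

# A monotonic but strongly non-linear relationship still yields rho = 1:
x = np.linspace(1.0, 5.0, 20)
rho = spearman(x, np.exp(x))
```

Because the exponential is strictly increasing, the ranks of x and exp(x) coincide and `rho` comes out as exactly 1, even though Pearson's r for the same data would be below 1.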
Possible Values As anticipated earlier, the Spearman coefficient varies between −1 and 1. For ρ = 1, the bivariate distribution is monotonically increasing. Similarly, a distribution with a Spearman coefficient ρ = −1 is monotonically decreasing. And lastly, a value of ρ = 0 indicates that the function isn't monotonic. Notice that all distributions sampled symmetrically around the origin from even functions, of the form f(x) = f(−x), have a Spearman correlation coefficient ρ = 0. 4.4. Interpretation of Spearman's Correlation Coefficient We can finally discuss the interpretation of the value assumed by the Spearman correlation coefficient in a more formal manner. We saw earlier that it relates to monotonicity. Therefore, if we have a distribution sampled from a function of the form y = f(x), the sign of ρ points, in general, at the monotonicity of that function. Spearman's coefficient doesn't, however, only apply to functionally dependent variables, since we can use it for all random variables. As a consequence, we need a definition that doesn't rely on concepts taken from the analysis of continuous functions. Rather, we can interpret it as the degree to which two variables tend to change in the same direction. That is to say, variables with high positive correlation increase and decrease simultaneously, while variables with strongly negative correlation move in opposite directions, one increasing as the other decreases. In other words, the correlation coefficient tells us, for any difference between two components of X, the sign of the corresponding difference between the components of Y. In this sense, a correlation ρ = 1 tells us that sign(x_i − x_j) = sign(y_i − y_j) for all pairs of components. Similarly, a coefficient ρ = −1 implies sign(x_i − x_j) = −sign(y_i − y_j). And lastly, a correlation coefficient ρ = 0 means that neither of the previous expressions holds in general. 5. Interpretation of the Two Coefficients 5.1. The Values of the Correlation Coefficients We can now sum up the considerations made above and create a table containing the theoretical predictions that we can make about a bivariate distribution according to the values of its correlation coefficients: 5.2.
Guessing Correlation Values We're now going to use this table to conduct the inverse process: that is, to guess the correlation values of distributions on the basis of their shape. This can be helpful to formulate hypotheses on their values, which we can then test computationally. We do so by observing the shape of the distribution, comparing it with the table we drafted above, and then inferring the probable values of the correlation coefficients. The first distribution is this: We can note that it looks vaguely shaped like a linear function, and that it's generally decreasing. For this reason, we expect its r and ρ values to lie in the interval (−1, 0). The real values for this distribution fall in that interval, which means that our guess was correct. The second distribution has a shape that resembles the logistic function σ(x) = 1 / (1 + e^(−x)): Because it's monotonically increasing, its ρ value must be +1. It also seems that it doesn't fit perfectly into a line, but is generally approximable with a linear model. From this we derive that its r value must be in the interval (0, 1). The true values of the coefficients for this distribution match these predictions, which means that we guessed correctly. The third distribution has the shape of a sinusoid: The function can't be monotonic, and it doesn't seem to be increasing or decreasing overall. This means that the Spearman coefficient is approximately 0. It also doesn't seem approximable by a linear model, which means that its Pearson coefficient should be close to 0, too. The real values for this distribution are indeed close to 0, as we expected. 6. Conclusion In this article, we studied the concept of correlation for bivariate distributions. We first approached the issue of the relationship between correlation and causality. Then, we studied Pearson's correlation and its interpretation, and similarly Spearman's correlation. In doing so, we learned that Pearson's correlation relates to the suitability of the distribution for linear regression.
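The guess-and-check procedure can be reproduced computationally. The sketch below uses synthetic data of our own making, shaped like the three cases just described (a noisy decreasing line, a logistic curve, and a sinusoid), and computes both coefficients with SciPy:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
x = np.linspace(-5.0, 5.0, 200)

# 1. A noisy, generally decreasing, roughly linear distribution
y_line = -0.8 * x + rng.normal(0.0, 1.0, x.size)
# 2. A logistic curve: monotonically increasing but not a line
y_logistic = 1.0 / (1.0 + np.exp(-x))
# 3. A sinusoid over several full periods: neither monotonic nor linear
y_sine = np.sin(2.0 * np.pi * x)

r1, rho1 = pearsonr(x, y_line)[0], spearmanr(x, y_line)[0]
r2, rho2 = pearsonr(x, y_logistic)[0], spearmanr(x, y_logistic)[0]
r3, rho3 = pearsonr(x, y_sine)[0], spearmanr(x, y_sine)[0]
# r1 and rho1 land in (-1, 0); rho2 is exactly 1 with r2 in (0, 1);
# r3 and rho3 are both close to 0
```

The computed values match the table-based predictions: negative coefficients for the decreasing line, ρ = 1 with 0 < r < 1 for the logistic curve, and near-zero coefficients for the sinusoid.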
Spearman’s coefficient, instead, relates to the monotonicity of a continuous function that would approximate the distribution.
21 CFR § 101.79 - Health claims: Folate and neural tube defects. (a) Relationship between folate and neural tube defects - (1) Definition. Neural tube defects are serious birth defects of the brain or spinal cord that can result in infant mortality or serious disability. The birth defects anencephaly and spina bifida are the most common forms of neural tube defects and account for about 90 percent of these defects. These defects result from failure of closure of the covering of the brain or spinal cord during early embryonic development. Because the neural tube forms and closes during early pregnancy, the defect may occur before a woman realizes that she is pregnant. (2) Relationship. The available data show that diets adequate in folate may reduce the risk of neural tube defects. The strongest evidence for this relationship comes from an intervention study by the Medical Research Council of the United Kingdom that showed that women at risk of recurrence of a neural tube defect pregnancy who consumed a supplement containing 4 milligrams (mg)(4,000 micrograms (mcg)) folic acid daily before conception and continuing into early pregnancy had a reduced risk of having a child with a neural tube defect. (Products containing this level of folic acid are drugs). In addition, based on its review of a Hungarian intervention trial that reported periconceptional use of a multivitamin and multimineral preparation containing 800 mcg (0.8 mg) of folic acid, and its review of the observational studies that reported periconceptional use of multivitamins containing 0 to 1,000 mcg of folic acid, the Food and Drug Administration concluded that most of these studies had results consistent with the conclusion that folate, at levels attainable in usual diets, may reduce the risk of neural tube defects. (b) Significance of folate - (1) Public health concern. 
Neural tube defects occur in approximately 0.6 of 1,000 live births in the United States (i.e., approximately 6 of 10,000 live births; about 2,500 cases among 4 million live births annually). Neural tube defects are believed to be caused by many factors. The single greatest risk factor for a neural tube defect-affected pregnancy is a personal or family history of a pregnancy affected with such a defect. However, about 90 percent of infants with a neural tube defect are born to women who do not have a family history of these defects. The available evidence shows that diets adequate in folate may reduce the risk of neural tube defects but not of other birth defects. (2) Populations at risk. Prevalence rates for neural tube defects have been reported to vary with a wide range of factors including genetics, geography, socioeconomic status, maternal birth cohort, month of conception, race, nutrition, and maternal health, including maternal age and reproductive history. Women with a close relative (i.e., sibling, niece, nephew) with a neural tube defect, those with insulin-dependent diabetes mellitus, and women with seizure disorders who are being treated with valproic acid or carbamazepine are at significantly increased risk compared with women without these characteristics. Rates for neural tube defects vary within the United States, with lower rates observed on the west coast than on the east coast. (3) Those who may benefit. Based on a synthesis of information from several studies, including those which used multivitamins containing folic acid at a daily dose level of ≥400 mcg (≥0.4 mg), the Public Health Service has inferred that folate alone at levels of 400 mcg (0.4 mg) per day may reduce the risk of neural tube defects. The protective effect found in studies of lower dose folate, measured by the reduction in neural tube defect incidence, ranges from none to substantial; a reasonable estimate of the expected reduction in the United States is 50 percent.
It is expected that consumption of adequate folate will avert some, but not all, neural tube defects. The underlying causes of neural tube defects are not known. Thus, it is not known what proportion of neural tube defects will be averted by adequate folate consumption. From the available evidence, the Public Health Service estimates that there is the potential for averting 50 percent of cases that now occur (i.e., about 1,250 cases annually). However, until further research is done, no firm estimate of this proportion will be available. (c) Requirements. The label or labeling of food may contain a folate/neural tube defect health claim provided that: (1) General requirements. The health claim for a food meets all of the general requirements of § 101.14 for health claims, except that a food may qualify to bear the health claim if it meets the definition of the term “good source.” (2) Specific requirements - (i) Nature of the claim - (A) Relationship. A health claim that women who are capable of becoming pregnant and who consume adequate amounts of folate daily during their childbearing years may reduce their risk of having a pregnancy affected by spina bifida or other neural tube defects may be made on the label or labeling of food provided that: (C) Specifying the condition. In specifying the health- related condition, the claim shall identify the birth defects as “neural tube defects,” “birth defects spina bifida or anencephaly,” “birth defects of the brain or spinal cord anencephaly or spina bifida,” “spina bifida and anencephaly, birth defects of the brain or spinal cord,” “birth defects of the brain or spinal cord;” or “brain or spinal cord birth defects.” (D) Multifactorial nature. The claim shall not imply that folate intake is the only recognized risk factor for neural tube defects. (E) Reduction in risk. 
The claim shall not attribute any specific degree of reduction in risk of neural tube defects from maintaining an adequate folate intake throughout the childbearing years. The claim shall state that some women may reduce their risk of a neural tube defect pregnancy by maintaining adequate intakes of folate during their childbearing years. Optional statements about population-based estimates of risk reduction may be made in accordance with paragraph (c)(3)(vi) of this section. (F) Safe upper limit of daily intake. Claims on foods that contain more than 100 percent of the Daily Value (DV) (400 mcg when labeled for use by adults and children 4 or more years of age, or 800 mcg when labeled for use by pregnant or lactating women) shall identify the safe upper limit of daily intake with respect to the DV. The safe upper limit of daily intake value of 1,000 mcg (1 mg) may be included in parentheses. (G) The claim shall state that folate needs to be consumed as part of a healthful diet. (ii) Nature of the food - (B) Dietary supplements. Dietary supplements shall meet the United States Pharmacopeia (USP) standards for disintegration and dissolution, except that if there are no applicable USP standards, the folate in the dietary supplement shall be shown to be bioavailable under the conditions of use stated on the product label. (iv) Nutrition labeling. The nutrition label shall include information about the amount of folate in the food. This information shall be declared after the declaration for iron if only the levels of vitamin A, vitamin C, calcium, and iron are provided, or in accordance with § 101.9 (c)(8) and (c)(9) if other optional vitamins or minerals are declared. (3) Optional information - (i) Risk factors. The claim may specifically identify risk factors for neural tube defects.
Where such information is provided, it may consist of statements from § 101.79(b)(1) or (b)(2) (e.g., Women at increased risk include those with a personal history of a neural tube defect-affected pregnancy, those with a close relative (i.e., sibling, niece, nephew) with a neural tube defect; those with insulin-dependent diabetes mellitus; those with seizure disorders who are being treated with valproic acid or carbamazepine) or from other parts of this paragraph (c)(3)(i). (ii) Relationship between folate and neural tube defects. The claim may include statements from paragraphs (a) and (b) of this section that summarize the relationship between folate and neural tube defects and the significance of the relationship except for information specifically prohibited from the claim. (iii) Personal history of a neural tube defect-affected pregnancy. The claim may state that women with a history of a neural tube defect pregnancy should consult their physicians or health care providers before becoming pregnant. If such a statement is provided, the claim shall also state that all women should consult a health care provider when planning a pregnancy. (iv) Daily value. The claim may identify 100 percent of the DV (100% DV; 400 mcg) for folate as the target intake goal. (v) Prevalence. The claim may provide estimates, expressed on an annual basis, of the number of neural tube defect-affected births among live births in the United States. Current estimates are provided in § 101.79(b)(1), and are approximately 6 of 10,000 live births annually (i.e., about 2,500 cases among 4 million live births annually). Data provided in § 101.79(b)(1) shall be used, unless more current estimates from the U.S. Public Health Service are available, in which case the latter may be cited. (vi) Reduction in risk. 
An estimate of the reduction in the number of neural tube defect-affected births that might occur in the United States if all women consumed adequate folate throughout their childbearing years may be included in the claim. Information contained in paragraph (b)(3) of this section may be used. If such an estimate (i.e., 50 percent) is provided, the estimate shall be accompanied by additional information that states that the estimate is population-based and that it does not reflect risk reduction that may be experienced by individual women. (vii) Diets adequate in folate. The claim may identify diets adequate in folate by using phrases such as “Sources of folate include fruits, vegetables, whole grain products, fortified cereals, and dietary supplements.” or “Adequate amounts of folate can be obtained from diets rich in fruits, dark green leafy vegetables, legumes, whole grain products, fortified cereals, or dietary supplements.” or “Adequate amounts of folate can be obtained from diets rich in fruits, including citrus fruits and juices, vegetables, including dark green leafy vegetables, legumes, whole grain products, including breads, rice, and pasta, fortified cereals, or a dietary supplement.” (d) Model health claims. The following are examples of model health claims that may be used in food labeling to describe the relationship between folate and neural tube defects: (1) Examples 1 and 2. Model health claims appropriate for foods containing 100 percent or less of the DV for folate per serving or per unit (general population). The examples contain only the required elements: (i) Healthful diets with adequate folate may reduce a woman's risk of having a child with a brain or spinal cord birth defect. (ii) Adequate folate in healthful diets may reduce a woman's risk of having a child with a brain or spinal cord birth defect. (2) Example 3. Model health claim appropriate for foods containing 100 percent or less of the DV for folate per serving or per unit. 
The example contains all required elements plus optional information: Women who consume healthful diets with adequate folate throughout their childbearing years may reduce their risk of having a child with a birth defect of the brain or spinal cord. Sources of folate include fruits, vegetables, whole grain products, fortified cereals, and dietary supplements. (3) Example 4. Model health claim appropriate for foods intended for use by the general population and containing more than 100 percent of the DV of folate per serving or per unit: Women who consume healthful diets with adequate folate may reduce their risk of having a child with birth defects of the brain or spinal cord. Folate intake should not exceed 250% of the DV (1,000 mcg).
The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. A standard reference used for comparisons is the 35 mm format, which is a sensor of size 36×24 mm. A standard wide-angle lens would have a focal length of around 28 to 35 millimeters in the 35 mm format. The smaller the number, the wider the lens is. The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. The native focal length of the sensor cannot be used for comparisons between different cameras unless they have the same sensor size. Therefore, the focal length in 35 mm terms is a better reference. For the same sensor, the smaller the number, the wider the lens is. Indicates the type of image stabilization this lens has: The horizontal field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping). The vertical field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping). Shows the magnification factor of this lens compared to the primary lens of the device (calculated by dividing the focal length of the current lens by the focal length of the primary lens). A magnification factor of 1 is shown for the primary camera, ultra-wide cameras have magnification factors less than 1, and telephoto cameras have magnification factors greater than 1. Physical size of the sensor behind the lens in millimeters. All other factors being equal (especially resolution), the larger the sensor, the more light it can capture, as each physical pixel is bigger. The size (side) of an individual physical pixel of the sensor in micrometers.
All other factors being equal, the larger the pixel size, the better the image quality: each photoreceptor can capture more light and can potentially better differentiate the signal from the noise, yielding better image quality, especially in low light. The maximum picture resolution at which this sensor outputs images in JPEG format. Sometimes, if the sensor can also provide images in RAW (DNG) format, they can be slightly larger because of an additional area used for calibration purposes (among others). Unfortunately, firmware restrictions for third-party apps also mean that the maximum picture resolution exposed to third-party apps might be considerably lower than the actual resolution of the sensor; therefore, the resolution shown here is the maximum resolution third-party apps can access from this sensor. The available output picture formats this camera is able to deliver: The focusing capabilities of this camera: It displays whether this lens can be set to focus at infinity or not. Even if the camera supports autofocus and manual focus, it might happen that the focus range the lens is able to adjust to does not include the infinity position. This property is important for astrophotography, as in such low-light scenarios the automatic focus does not work reliably. The distance from which objects that are further away from the camera always appear in focus. Therefore, if the camera is set to focus at infinity, any object further away than this distance will appear in focus. The range of supported manual exposure in seconds (minimum or shortest to maximum or longest). This camera might support exposures outside this range, but only in automatic mode and not in manual exposure mode.
Also, note that this range is the one third-party apps have access to, as often the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer longer or shorter exposure times. The range of supported manual sensitivity (ISO). This camera might support ISO sensitivities outside this range in automatic mode. Also, note that this range is the one third-party apps have access to, as often the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer an extended manual sensitivity range. The maximum ISO sensitivity possible in manual mode is usually reached by using digital amplification of the signal from the maximum supported analog sensitivity. This information, if available, will let you know what is the maximum analog sensitivity of the sensor. The data on this database is provided "as is", and FGAE assumes no responsibility for errors or omissions. The User assumes the entire risk associated with its use of these data. FGAE shall not be held liable for any use or misuse of the data described and/or contained herein. The User bears all responsibility in determining whether these data are fit for the User's intended use.
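The focal-length, angle-of-view, and magnification relationships described in the entries above follow from standard rectilinear-lens geometry. Below is a small Python sketch (the function names are ours) showing how the three quantities relate: the angle of view across one sensor dimension, the 35 mm-equivalent focal length via the crop factor, and the magnification factor relative to the primary lens.

```python
import math

def angle_of_view(sensor_dim_mm, focal_length_mm):
    # Angle of view (degrees) across one sensor dimension for a rectilinear lens
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

def equivalent_35mm(focal_length_mm, sensor_diag_mm):
    # Scale the native focal length by the crop factor:
    # the 35 mm frame (36x24 mm) has a ~43.27 mm diagonal
    full_frame_diag = math.hypot(36, 24)
    return focal_length_mm * full_frame_diag / sensor_diag_mm

def magnification(focal_35mm, primary_focal_35mm):
    # <1 for ultra-wide lenses, 1 for the primary lens, >1 for telephoto
    return focal_35mm / primary_focal_35mm

# Horizontal angle of view of a 28 mm-equivalent lens on the 35 mm format:
h_fov = angle_of_view(36, 28)   # about 65 degrees
```

For example, a 56 mm-equivalent telephoto lens on a device whose primary lens is 28 mm-equivalent has a magnification factor of 56 / 28 = 2.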
What Does Biodegradable Mean? This is a term used to refer to a substance that can undergo decomposition through natural agents. Once the substance has been broken down, it becomes harmless to humans and the environment. This property has led to the development of biodegradable metals as biomaterials for making bioactive materials in the medicinal industry. They are applicable in implants, where materials science is required to provide temporary aid to diseased tissues through their healing process while the implant continuously degrades. Corrosionpedia Explains Biodegradable Biodegradable metals are specifically obtained from alloys of magnesium and iron. In the science of biomaterials, there is always a need to come up with a corrosion-resistant metal alloy as a temporary implant material. The context for defining the corrosion-resistance and mechanical properties of these alloys is the surrounding conditions inside a human body. This means that the alloy will be placed in a human body, which contains oxygen and moisture (factors that initiate corrosion). Therefore, the aim of bioactive biomaterials and biodegradable metal alloys is to prevent the formation of oxides and hydroxides, which might have a negative impact on any temporary implant. Materials science engineering has been at the forefront of providing tangible information for biomaterialists to decide on the best material for making metallic implants. Apart from metals, polymers are also used as biomaterials, but metals have superior mechanical and physical properties (a high strength-to-bulk ratio), which makes them a preferable selection.
Uveitis is a general term for an inflammatory response in the eye that can be caused by a broad range of diseases or conditions. It is called uveitis because the area that is inflamed is the uvea, although the condition can also affect other areas of the eye, such as the lens, the optic nerve, the retina and the vitreous. Uveitis can cause swelling and tissue damage and lead to reduced vision or, in more serious cases, even blindness. What is the Uvea? The uvea is a layer in the middle of the eye containing three main elements: the choroid, which is a network of small blood vessels that provides nutrients to the retina; the iris, which is the colored layer around the pupil; and the ciliary body, which produces fluid to shape the lens and provide nutrients to keep it healthy. Types of Uveitis Uveitis is classified into four types, depending on the location of the inflammation within the eye. Anterior uveitis, the most common form, is when the iris is inflamed, sometimes in combination with the ciliary body. Intermediate uveitis is inflammation of the ciliary body, and posterior uveitis is when the choroid is inflamed. When the entire uvea is inflamed, this is called diffuse or pan-uveitis. Symptoms of Uveitis Uveitis generally affects individuals between the ages of 20 and 50 and can present a variety of symptoms depending on the cause. The condition can affect one or both eyes, and sometimes the symptoms can come on very rapidly. They include:
- Blurred vision
- Eye pain
- Red eyes
- Light sensitivity
- Seeing floaters in the field of view
If you experience these symptoms, seek medical attention immediately. Uveitis is usually a chronic disease which can lead to vision loss as well as other eye problems such as glaucoma, retinal detachment and cataracts. Causes of Uveitis The cause of uveitis is still somewhat of a mystery.
It is often found in connection with eye injuries, viral infections, toxins or tumors in the eye, or with systemic autoimmune disorders (such as AIDS, rheumatoid arthritis or psoriasis) or inflammatory disorders (such as Crohn's disease, colitis or multiple sclerosis). Treatment for Uveitis Uveitis treatment is designed to reduce and eliminate inflammation and pain, to prevent damage to the tissues within the eye, and to restore vision and prevent further vision loss. The inflammation is typically treated with anti-inflammatory steroid eye drops, pills, dissolving capsules or injections, depending on where the condition presents in the eye. Additional medications or treatments may be prescribed depending on the cause of the condition. For example, when the cause is an autoimmune disease, immunosuppressant medications may also be used. If there is a viral infection or elevated intraocular pressure, appropriate medications will be given to treat those issues. Often uveitis is a chronic disease, so it's important to see your eye doctor any time the symptoms appear.
To prevent the spread of the COVID-19 pandemic, most governments across the world have taken measures to close schools, resulting in distance learning. During this time, it has become clear that different school realities can impact students' education and widen existing differences between students even more, which has brought issues of educational justice and equal opportunity to the fore. While many schools have risen to the challenge and developed effective forms of distance learning, others have fallen behind, leaving some learners without access to quality education. Studies have shown that the COVID-19 crisis has exacerbated the exclusion of students with disabilities from education, showing they are the least likely to benefit from distance learning (UN, 2020). Teachers had to adapt to new pedagogical concepts and modes of teaching delivery for which they either have not been trained or have not had sufficient guidance or resources on how to include students with disabilities in distance learning. Responding to this problem, the project "Digitalisation and inclusive education: Leaving no one behind in the digital era" (DigIn) will increase the participation of students with various disabilities in digital education and respond to the "Innovative practices in a digital era" priority by strengthening the profiles of teachers and hence fostering social inclusion. The goal is to empower and professionalise teachers working with various age groups and in different school types, not only in the field of digital education but also in inclusive education. Recognising this need, the project sets out to fill this gap. The project objectives are to:
• Design and implement a teacher training for in-service teachers that will foster teachers' digital competence and increase their capacity to support students with disabilities in the inclusive classroom, in a blended learning format or during distance learning.
• Offer the teacher training after the project ends as an online resource for all interested pre-service and in-service teachers.
• Design new techniques and materials for including students with disabilities in digital education, including first-person accounts from teachers, teaching videos, lesson plans and best-practice examples.
• Develop tools that will offer advice and guidance for teachers about the accessibility of existing tools (e.g. communication platforms, learning apps, etc.) based on the Universal Design for Learning criteria and their pedagogical strategies (Hall, Meyer, & Rose, 2012).
• Evaluate the digital potential and inclusive practices of schools.

To achieve these objectives, the DigIn project is a collaboration between six partners in four countries: Austria, Bosnia and Herzegovina, Italy and North Macedonia. The project partnership consists of three higher education institutions, a school and two non-governmental organisations working in the field of teacher education. The countries have different educational systems and resources, which allows different ways of including students with disabilities in digital education under various circumstances to be uncovered. The substantial differences between the participating countries—in terms of the stage of development of inclusive education and experiences with digital education—and the actor constellations—between universities, schools and non-governmental organisations—are essential for the strategic partnership. These differences represent a significant resource for DigIn due to the possibility of comparing, discussing, exchanging and reflecting upon examples and experiences in the field of digital education and inclusive education. Transnational research will be conducted throughout the project. The intended outcomes are: 1. Teacher training with a total of five modules 2. Best-Practice-Examples Toolkit 3. To(ol)-Check instrument 4. In(novation)-Check instrument 5.
One cross-country comparison of the digital potential of inclusive schools using the SELFIE tool 6. Case studies (four in total) 7. Five academic publications that will report the findings internationally. The project has been funded with support from the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
Scientists have known about the shape of the DNA molecule for several decades, but for the first time a photograph of the molecule has been taken, spiral and all. Watson and Crick first modeled the composition of the DNA molecule in 1953, identifying it as a double helix structure composed of guanine, adenine, thymine and cytosine. Previously, a technique called X-ray crystallography had been used to convert diffraction dots into an overarching image of the molecule. However, it wasn’t until now that it has been directly photographed. Using an electron microscope, a picture was snapped of a DNA strand that had been stretched out and suspended between two nanoscopic silicon pillars. The photographer of the image is Enzo di Fabrizio from the University of Genoa, Italy. He separated a single DNA strand in a solution by introducing the aforementioned pillars, which absorbed the water in the solution, leaving the DNA molecule behind, strung up like a clothesline. By drilling holes in the base plate for the pillars, he could fire beams of electrons at the DNA, illuminating it, in a sense. In the future, di Fabrizio’s technique will allow scientists to study DNA in more detail, including how it interacts with RNA and proteins.
- What does inclusion mean? What about exclusion?
- If you aren’t sure what inclusion and exclusion mean, here are some definitions: inclusion means being part of a group with other people and being a welcomed member who belongs, without having to change things about yourself like your race, religion or abilities. Exclusion means being shut out of a group or not being welcomed to a group, often because of things about yourself that you cannot change. People may be excluded because of their race, religion, gender or anything else that makes them different.
- How does it feel to be included? How does it feel to be excluded?
- Is everyone included in Scouting? Is everyone included in Canada? Has this always been true?
- Give everyone a piece of paper and some drawing supplies.
- Take five minutes to draw a picture of a time that you felt included in a group – this could be something that happened in Scouting, at school or in your community. As you draw, think about how it felt to be included.
- Then, take five minutes to draw a picture of a time that you felt excluded or ignored. Like your picture of inclusion, this can be something that happened to you anywhere. Think about how it felt when you were excluded.
- If you’re comfortable, share one of your drawings with the group. As people share, think about things that you have in common, like shared experiences or feelings. Think about things that are unique.
- Keep in mind that even when you have things in common, everyone’s experiences will be unique in some ways.
- After you’ve talked about how it felt to be included and excluded, watch the videos in the resources section. Were the people in the videos included or excluded? Does this change your answer to “is everyone included in Scouting” or “is everyone included in Canada?”
- How easy was it to remember a time that you felt included or excluded?
- Now that you’ve thought more about times that you were included and excluded, did your answers to the questions “how does it feel to be included?” and “how does it feel to be excluded?” change at all?
- What are some of the different ways someone might feel excluded in your group? Do you know of any examples of people being excluded somewhere in Canada?
- What actions can we take to make sure that no one feels excluded in our Group?
- As Scouts, we have a lot of power – sometimes we exclude people or hurt their feelings by accident. This is why it’s so important to think about how we feel and learn about how others feel so that we can work to make sure that we are including everyone, rather than excluding anyone.

Remember to submit your activities on our Scouts for Sustainability Take Action Map.

Keep it Simple

- Brainstorm some words that describe how it feels to be included and to be excluded. Write these ideas down on a piece of paper that everyone can see. There is no right or wrong answer, since everyone might feel differently. Have everyone think of a time that they felt included or excluded from something – if people are comfortable, have them share. What can you do as a Section to make sure that everyone feels included?

Take it Further

- Watch this video on the reclamation of nature by people of colour – think about why the outdoors might be inaccessible to some communities. How does this apply to Scouting? What can you take away from this and apply to your own Scouting experience and how you work to include others?
- Throughout Canada’s history, there are many groups of people who have been excluded for many different reasons, like their race, religion, cultural background or their beliefs. February is Black History Month, which celebrates the achievements and contributions of Black Canadians. Don’t wait for February to come around - learn more about Black history in Canada and explore different contributions by Black Canadians throughout history.
What does Black History Month have to do with inclusion or exclusion in Canadian history?
If you have ever spent any time with preschoolers, you know that their curious minds are incessant. “Why is that squirrel eating a nut? Where do you think it lives? Why are there nuts on that tree? Where are the other squirrels? Why? Where? How come?” For us parents, these endless questions can be exasperating. At the end of a long day, “Because I said so!” is about all you can muster in response. However, stepping away from the tediousness of explaining a squirrel’s complex relationship with the forest, we realize that curiosity is an essential ingredient in life. This intrinsic motivation to learn is certainly a core driver of achievement in the classroom. Inside curious minds A team of neuroscientists at the University of California, Davis, has begun peering into the inner workings of the curious mind. Participants in their lab were asked to rank a list of trivia questions based on how curious they were to know the answer. Next, while in an MRI machine, they reviewed a selection of both the interesting and uninteresting questions along with their answers. Not surprisingly, when participants reviewed questions that made them most curious, their brains’ pleasure and reward systems lit up. In other words, curiosity made them feel good. Activity also increased in the hippocampus, which is central to the formation of memories. They were more likely to remember the answers to quiz questions that they were curious about. The most surprising part of this experiment was that curiosity seemed to help participants remember unrelated information as well. When participants were shown images of faces in between the questions and their answers, they were more likely to remember the expression on the face that followed an intriguing question (in addition to the answer to the question itself). Faces that followed uninteresting questions faded from memory. It turns out that curiosity helps information stick. Curious minds are fuel for learning and retention.
Your child’s learning edge Before we could glimpse the inner workings of the brain, George Loewenstein described curiosity as an “information gap” that produces a feeling of deprivation, a feeling that we are driven to fill. For young children, information gaps abound. Thus the incessant questioning about the squirrel, the nuts, the trees and on and on. As our children grow up, however, information gaps persist. This is what I describe to my undergraduate students as their “learning edge,” the space just beyond their competency (and comfort) level. While the incessant questioning of early childhood may fade, we would be wise to encourage our children to keep probing the far edges of their knowledge. We would be wise to encourage them to approach the world with curious minds and questions – questions that, as they grow, become broader, deeper and more complicated. Kids and teens who are hungry for knowledge learn about and retain more than just the answers to their original questions.
- Model curiosity. Instead of showing off that you have all the answers, model that you are curious about the world too. Express wonder, ask questions big and small, and delight in investigating their answers.
- Encourage investigation, not just answers. It can be tempting to simply answer all of your child’s questions. It is fine to share knowledge with your child, but don’t forget to encourage them to develop their questions and explore them as well.
- Let your child’s interests be your guide. You can’t very well force your child to be intrigued by something. Instead, let them take the lead.
- Embed “boring” work inside an interesting framework. The lesson of the faces inside these experiments teaches us that even unrelated or “boring” information sticks with us when we are in a curious state. Take advantage of this and see if you can give information a more intriguing context.
- Tell a story. How is this story going to end? What happens to the central characters?
Storytelling ignites the curious mind.
Zika virus is a virus spread to people through the bites of Aedes species mosquitoes; sexual transmission of Zika virus is also known to occur. Aedes mosquitoes also spread dengue and chikungunya viruses. Outbreaks of Zika virus disease have occurred in Africa, Southeast Asia, and the Pacific Islands. There have been reports of limited, local mosquito-borne transmission of Zika virus in certain areas of the United States; however, no local mosquito-borne transmission of Zika virus has been identified in Maryland at this time. The species of Aedes mosquito responsible for most Zika virus transmission, Aedes aegypti, is not commonly found in Maryland. The most common symptoms of Zika virus disease are fever, rash, joint pain, and conjunctivitis (red eyes). The illness is usually mild, with symptoms lasting from several days to a week. Severe disease requiring hospitalization is uncommon. More information about Zika can be found on the MDH Zika Virus Fact Sheet.
Chief Crazy Horse said, “We did not ask you white men to come here” (DiBacco 305). They fought hard; however, the Native Americans were not able to stop the white settlers from removing the Indians from their homeland, killing thousands of them, or forcing them to assimilate into the American culture. First, the U.S. government created policies to remove and concentrate the Native Americans somewhere else. With the transcontinental railroad finished in 1869, more white settlers had the opportunity to get land on the frontier (DiBacco 306). There was a problem because “they asked how two cultures so different from each other could live side by side” (DiBacco 306). The Indians knew that if they did not fight, they would lose their land. One plan was concentration. The government attempted to keep the Indians in one specific area in the West. The Native Americans could live as normal, but within those borders. The hope was that this would decrease the fighting between the Native Americans and the whites. After the Civil War, the government policy was modified. They now moved the Indians onto reservations. “Most reservations were too small to support the hunting way of life. Therefore, the Indians were suppose to get food though… farm[ing], although reservations were located on the poorest land” (Todd 491). The Indians were swindled by the whites. The Americans did things like mix flour with sawdust, or steal goods and then sell them instead of handing them out to the Indians as they were supposed to do (Jordan, Americans 415). The Indians were just trying to cooperate with the treaties they signed; however, they were being cheated. Most of the Native Americans “were nomadic and nonagricultural, and all depended for survival on hunting the …buffalo” (Jordan, United 420). Their everyday lives “revolved around the buffalo hunt” (Jordan, United 420). The settlers had realized that the buffalo hide could be made into leather.
They also saw buffalo hunting as a fun pastime. The whites killed an estimated three million buffalo each year over a three-year period, and it hurt the Indians because they were forced to change much of their daily lives (Jordan, United 425). Additionally, most of the Native Americans either starved while living on the reservations or were killed in fighting. The government’s plan collapsed for two reasons. First, the Indians needed the buffalo to survive, so they had to leave the reservation to get buffalo. Second, because of the gold found in Colorado in 1858, many people traveled westward and did not care for the Indians’ rights (DiBacco 306). Unhappy with the land they received, the Indians had no choice but to revolt. They would have died from starvation otherwise. The Indians were also agitated by the Americans because the Americans were not holding up their part of the deal. Also, some groups refused to leave their homeland. The government tried to move the Indians out of the way, but it was not effective. Second, the government then tried simply to exterminate the Native Americans. There were many battles between Indians and Americans. The leaders were the Sioux and the Cheyenne (Jordan, Americans 415). At one point, Chief Black Kettle of the Cheyenne had agreed to a ceasefire. He hung the American flag and the white flag of surrender. However, Colonel Chivington did not know about the armistice and attacked the Cheyenne, killing 450 Native Americans (DiBacco 306). It was called the Sand Creek Massacre. The Sioux Indians also had many battles with white settlers. After invading a white settlement in 1862-1863, the Sioux Indians lost their leader, Little Crow (DiBacco 306). The Sioux War finally came to an end in 1868 (DiBacco 306). Although the Indians were technologically at a disadvantage, they had “resistance [that] was remarkable” (Todd 493). The Sioux were finally guaranteed land in the Black Hills of South Dakota.
However, in 1876 gold was discovered there, and the Sioux were ordered to move again. The removal was under the control of General George Custer, a well-known Indian fighter. In June of 1876, he struck a Sioux and Cheyenne camp. This group of “warriors had two outstanding leaders. One was Sitting Bull, able, honest, and idealistic. The other was Crazy Horse, uncompromising, reckless, a military genius, and the most honored hero of the Sioux” (Todd 493). Custer and all 264 of his troops were killed. This was the last big loss for the Americans, and it created quite a discomfort for the United States government (Todd 493). “In 1889 the Sioux made one more attempt to keep their way of life” (Jordan, Americans 418). The troops engaged in one more “battle” even though it was truly a massacre. The Battle of Wounded Knee took place in 1890 (Jordan, United 425). This battle was the final fight of the Indians against the United States military. In the end, approximately 200 Native American men, women, and children had been killed (DiBacco 308). This extermination policy set up by the government was successful; however, thousands of Native Americans died. Last, the government thought that the Indians needed to be assimilated into the American culture. The Indians’ way of life was completely destroyed. Most of the Americans did not accept or respect the Indian cultures. Most people believed that if the Indians were to survive any longer in the United States, they would have to adopt the same habits and traditions as the Americans. The Native Americans had to be absorbed into the white culture. The government funded churches and schools for the Native Americans. They wanted to teach the Indian children how to talk, dress, work, and think like whites. The American government passed the Dawes Act in 1887 (Jordan, United 425). The act basically divided up the reservations, and each family was given its own land to cultivate.
After 25 years, the family would own the land and have citizenship in the United States. The Dawes Act really did not help the Indians at all, because the quality of the land was very poor and they were untrained and did not have any tools. Disease and malnutrition were very common, and many people died. The badly trained and uncharitable teachers taught the Indian children that being an Indian was a bad thing and that they were worthless. Despite the fact that it sounded like a good deal, assimilation failed.
After the last coal generator came off the National Grid electricity system at 1.24pm on 1 May, Britain completed its first week without using coal to generate electricity since 1882. The pressure to increase renewable energy sources and high international coal prices have led to the decline in coal usage. Director of National Grid ESO, Fintan Slye, believes that the UK’s electricity system could run with zero carbon by 2025, far surpassing the current target of net-zero emissions by 2050. Although coal-fired power will still act as backup energy at times of high demand, there is hope that the increasing introduction of renewable energy sources will soon make coal redundant. This is a significant step towards tackling climate breakdown. CO2 emissions are the leading cause of climate change, and their reduction is essential to regaining the health of our planet. The UK’s success towards achieving its targets could pave the way for energy systems in other countries and inspire more action toward a zero-carbon-emissions planet. In order for this to become a reality, investment in renewable energy systems must be a priority. Challenges facing infrastructure such as offshore wind farms and domestic solar panels include increasing consumer participation. This can be overcome by using smart digital systems that allow remote control.
At Key Stage 3, History is taught in mixed ability groups with three lessons a fortnight. We follow a chronological approach from 1066 to post-1945 with a focus on one or two key enquiry questions each term. There are opportunities for special projects in each year with the Castle Challenge in Year 7, the Great Exhibition in Year 8 and a research task on local history and the First World War in Year 9. History is a popular option at Key Stage 4, where it is also taught in mixed ability groups. We have five lessons a fortnight. From September 2016 we have studied the new Edexcel History (9-1) course. We have chosen units on Elizabethan England, the History of Medicine (with a focus on medicine in the First World War trenches), the American West and Weimar and Nazi Germany. Students have the opportunity to visit the First World War battlefields near Ypres. History is also popular at A level, where students have nine lessons a fortnight. All groups have two teachers. We began our new A level course in September 2015 and have chosen to study modern American History, 1865-1975 and Tudor History, 1529-1570. A Level students also complete a piece of coursework based on Russian History 1855-1956. Members of the department are also involved with teaching government and politics A level, sociology A level and the EPQ (Extended Project Qualification). For more information about each key stage, please click on each of the sections below. Key Stage 3 (Year 7, 8, 9) History is taught in mixed ability classes at Key Stage 3. Pupils study the period 1066 to the present day. In Years 7 and 8 the emphasis is on British History and topics include the Middle Ages, the Tudors and Stuarts and the Industrial Revolution. In Year 9 the pupils study the Modern World since 1914 including the First and Second World Wars, Nazi Germany and American Civil Rights.
As well as building up their factual knowledge, they learn how to explain causes and consequences in History and how to understand why there are different interpretations of the past. They develop skills in analysing historical sources and write their own descriptions and explanations. They are encouraged to plan their own historical investigations and further develop their presentational skills on paper, orally and using ICT. These skills all contribute to further study at GCSE level.

Key Stage 4 (Years 10, 11)

The history course has five components:

A British depth study - Early Elizabethan England, 1558-1588
A thematic study - Medicine Through Time
Historical environment - Medicine in the trenches
Period study - The American West, 1840-1895
Modern depth study - Weimar and Nazi Germany, 1918-1939

In Year 10, students learn about Early Elizabethan England and Medicine Through Time. They also complete a study of medicine on the First World War battlefields and have an opportunity to visit Ypres and the surrounding area. In Year 11, students follow an in-depth study of a crucial period in the development of the USA, The American West, 1840-1895. This is followed by a unit on Weimar and Nazi Germany, 1918-1939. The course develops important analytical skills. Students learn how to interpret a variety of sources and how to weigh up different factors when constructing an argument. Students will build on these skills when moving onto our A level course in Year 12, which includes America 1865-1975, Tudor History and coursework on Modern Russian History. Our new A level course builds on the knowledge students have already acquired and extends it. It is designed to provide the experience of studying two periods of history in depth, with a focus on British, American and Russian history. Students are well supported by a very well qualified and experienced department.
For more information about this subject at KS5, please click here to view the relevant subject leaflet at the bottom of the page.
An Insight into Weather Forecasting using Machine Learning and Artificial Intelligence

Weather forecasting is the task of predicting the state of the atmosphere at a future time and a specified location. Traditionally, this has been done through physical simulations in which the atmosphere is modeled as a fluid. The present state of the atmosphere is sampled, and the future state is computed by numerically solving the equations of fluid dynamics and thermodynamics. However, the system of differential equations that governs this physical model is unstable under perturbations, and uncertainties in the initial measurements of the atmospheric conditions and an incomplete understanding of complex atmospheric processes restrict accurate weather forecasting to roughly a 10-day period, beyond which forecasts are significantly unreliable.

What is Machine Learning?

Machine learning is relatively robust to perturbations and doesn’t require a complete understanding of the physical processes that govern the atmosphere. Therefore, machine learning may represent a viable alternative to physical models in weather forecasting. Before these technological advances, weather forecasting was a hard nut to crack: forecasters relied on satellites and data models of atmospheric conditions with limited accuracy. Over the last 40 years, weather prediction and analysis have vastly improved in accuracy and predictability, aided by the Internet of Things. With the advancement of data science and artificial intelligence, scientists now forecast the weather with high accuracy and predictability.

How does Machine Learning help in the prediction of weather-related events?

There are many types of machine learning algorithms, of which two are most important for predicting the weather: linear regression and a variation of functional regression.
These models are trained on historical data for a given location. Inputs such as minimum temperature, maximum temperature, mean atmospheric pressure, mean humidity, and the weather classification for the previous two days are provided; from these, the model predicts the minimum and maximum temperatures for the next seven days.

What is Classification?

When collecting datasets for the models, certain parameters constitute classified (categorical) data: snow, thunderstorm, rain, fog, overcast, mostly cloudy, partly cloudy, scattered clouds, and clear. These can be grouped into four classes:
- 1. Rain, thunderstorm, and snow into precipitation
- 2. Mostly cloudy, foggy, and overcast into very cloudy
- 3. Scattered clouds and partly cloudy into moderately cloudy
- 4. Clear as clear

How are Algorithms used in Predicting Weather?

There are various techniques for predicting weather using linear regression and a variation of functional regression, in which datasets are used to perform the calculations and analysis. Three-quarters of the data is used to train the algorithms, and the remaining quarter is held out as the test set. For example, to predict the weather of Austin, Texas using these machine learning algorithms, we would use six years of data to train the algorithms and two years of data as the test dataset. In contrast to traditional weather forecasting, which is based primarily on physical simulation and differential equations, artificial intelligence models are also used for prediction, including neural networks, the probabilistic Bayesian network, and support vector machines. Among these, neural networks are widely used due to their ability to capture non-linear dependencies between past weather trends and future weather conditions.
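As a minimal sketch of the regression approach described above (not the actual pipeline of any particular forecasting system), the example below fits an ordinary least-squares linear regression to synthetic daily weather records, using a three-quarters/one-quarter train–test split. The feature set, coefficients, and data are assumptions for illustration only:

```python
# Illustrative linear-regression weather sketch using synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily records: [min_temp (°C), mean_pressure (hPa), mean_humidity (%)]
n_days = 400
X = np.column_stack([
    rng.normal(15, 5, n_days),
    rng.normal(1013, 8, n_days),
    rng.normal(60, 15, n_days),
])
# Target: next-day maximum temperature, a noisy linear function of the features
y = (0.9 * X[:, 0] - 0.05 * (X[:, 1] - 1013) - 0.02 * X[:, 2]
     + 12 + rng.normal(0, 1.0, n_days))

# Train on three-quarters of the data, hold out one quarter as the test set
split = (3 * n_days) // 4
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Fit weights (with an intercept column) by least squares
A_train = np.column_stack([X_train, np.ones(len(X_train))])
w, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# Evaluate mean absolute error on the held-out quarter
A_test = np.column_stack([X_test, np.ones(len(X_test))])
pred = A_test @ w
mae = np.mean(np.abs(pred - y_test))
print(f"Test MAE: {mae:.2f} °C")
```

A real system would use multi-year historical observations (e.g. six years for training, two for testing, as in the Austin example) and would predict several days ahead rather than one.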
However, certain machine learning algorithms and artificial intelligence models are computationally expensive, such as running a Bayesian network and a machine learning algorithm in parallel. To conclude, machine learning and artificial intelligence have greatly changed the paradigm of weather forecasting, delivering high accuracy and predictive power. Within the next few years, further advances will be made using these technologies to predict the weather accurately and help prevent disasters such as hurricanes, tornadoes, and thunderstorms.
fool’s literature, allegorical satires popular throughout Europe from the 15th to the 17th century, featuring the fool, or jester, who represented the weaknesses, vices, and grotesqueries of contemporary society. The first outstanding example of fool’s literature was Das Narrenschiff (1494; “The Ship of Fools”), a long poem by the German satirist Sebastian Brant, in which more than 100 fools are gathered on a ship bound for Narragonia, the fools’ paradise. An unsparing, bitter, and sweeping satire, especially of the corruption in the Roman Catholic church, Das Narrenschiff was translated into Latin, Low German, Dutch, and French and adapted in English by Alexander Barclay (The Shyp of Folys of the Worlde, 1509). It stimulated the development of biting moral satires such as Thomas Murner’s poem Narrenbeschwörung (1512; “Exorcism of Fools”) and Erasmus’ Encomium moriae (1509; In Praise of Folly). The American writer Katherine Anne Porter used Brant’s title for her Ship of Fools (1962), an allegorical novel in which the German ship Vera is a microcosm of life.
Pygmies, the most well-known group of diminutive humans, whose men on average grow to a maximum of five feet tall and whose women are about half a foot shorter, were long thought to owe their characteristic small body sizes to poor nutrition and environmental conditions. But the theories did not hold up, given that these populations—primarily hunter–gatherers—are found mostly in Africa but also in Southeast Asia and central South America, and are thereby exposed to varying climates and diets. Further, other populations who live under conditions of low sustenance, such as Kenya's Masai tribes, are among the world's tallest people. So what could account for these pockets of people who grow so small? According to University of Cambridge researchers, the key is the pygmies' life expectancy. "After going to the Philippines and interviewing the pygmies, I noticed this very distinctive feature of the population: very high mortality rates," says Andrea Migliano, a research fellow at Cambridge's Leverhulme Center for Human Evolutionary Studies and co-author of a new study published in Proceedings of the National Academy of Sciences USA. "Then, going back to life history theory, we noticed that their small body size was really linked to high mortality." Migliano and her colleagues began their study by comparing the growth rates of two Filipino pygmy groups (the Aeta and the Batak) with data from African pygmies as well as from East African pastoralist (livestock-raising) tribes like the Masai and the lower echelon of the U.S. growth distribution (in essence, malnourished Americans). All these groups have low nutritional status but reach significantly different average height levels. The U.S. population showed the greatest growth rate, whereas both the pygmies and African pastoralists lagged behind. Although the pygmies plateaued around 13 years of age, the pastoralists kept growing, reaching their cessation point into their early twenties.
Because the pygmy growth rate approximated that of the taller pastoralists, but had an earlier end point, the researchers concluded that their growth was not nutritionally stunted. The group next examined the incredibly low life expectancy of different pygmy populations, ranging from roughly 16 to 24 years of age. (Pastoralists and other hunter–gatherer populations experience expectancies that are nearly one to two decades longer—a number that is still low, especially when compared with the 75- to 80-year life span expected of Americans.) Pygmies also reach their age of last reproduction a few years earlier than their taller counterparts, although there are many more pastoralist women than pygmies who reach this age at all. Looking at fertility curves, the researchers noted the Aeta appeared to reproduce on average when they were around 15 or 16 years old, which is about three years earlier than other hunter–gatherers. The tallest of these populations actually appeared to reproduce the latest. By having an early onset of reproductive abilities, the scientists say, the pygmies appear to trade off time spent growing, allowing them to continue on in the face of low life expectancy. "Although the challenges posed by thermoregulation, locomotion in dense forests, exposure to tropical diseases, and poor nutrition do not account for the characteristics of all pygmy populations," the authors wrote, "they may jointly or partially contribute to the similarly high mortality rates in unrelated pygmy populations." This research centered on women, but Migliano expects an analysis of males to mirror that of females, partly because the fertility of one would affect the other. Further, life history theory is anchored to the female because of the importance of reproduction as a variable. She adds that this paradigm could be used to help better understand the evolution of Homo floresiensis, the so-called "hobbit" found on the Indonesian island Flores in 2003. 
"I think there is a great potential to use the theory to understand changes in body size during hominid evolution, such as the size of the hobbits and the relatively larger size of erectus," Migliano says. "But my main objective is to apply the theory to the understanding of the current human diversity."
When you approach a problem by first stripping it down to its most elemental parts, that's a “first principles” approach to problem-solving. The authors of the Declaration of Independence demonstrate this approach. The relationship between Great Britain and the thirteen colonies was fraught with debate and complexity in 1776, and to explain their solution to that knotty mess, the founders laid out their first principles: “We hold these truths to be self-evident, - “That all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. - “That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed. - “That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.” From here, the authors move on to justify their declaration of independence. Hindsight gives us a painfully clear picture of how inadequately they applied these principles in constructing their own government (consider the degree to which those “inalienable rights” would remain alien to enslaved peoples, women, and Amerindian groups in this newly independent nation, for example), but their first-principles thinking inarguably put them at the front end of a revolutionary trend — within decades, revolutions would spring up throughout most of the Atlantic world. Here's the point: First principles make it possible for us to see opportunities where others see an impasse. They let us think clearly in the face of interminable complexity. So: What is the first principle of education? 
Here's my take: In light of the perpetual onslaught of initiatives and fads and research and “research” and tasks and minutiae we face daily, we must be clear on the first principle of education. Schools exist to promote the long-term flourishing of students. This is the core of our work, beneath the philosophical debates and stylistic differences and policy wars and standards and so on. Long-term flourishing (or LTF, as my colleagues and I sometimes refer to it) — that's what we do. May it be emblazoned on our hearts as a charm against the overwhelm.
The Polonaise was a peasant dance from Poland that gained popularity in the early 18th century among composers and high society. Bach and Handel both wrote movements marked ‘Polonaise’, and in the early 19th century examples can be found in the finales of Beethoven’s Triple Concerto, and Field’s Third Piano Concerto. Chopin grew up with the polonaise and other forms of traditional Polish music. His teacher, Elsner, was himself a composer of polonaises, and the business of writing and publishing sets of polonaises was a lucrative one. There was another reason why such a traditional dance was so popular in Poland at this time; the country had once again been robbed of its independence, and the nationalistic pulling power of such music helped keep the national identity and spirit alive. Chopin’s genius transformed the simple tunes of the polonaise into large-scale, complex and dramatic works with myriad emotions. In many ways they captured the Polish spirit which remained defiantly unbroken. - Folke Nauta was born in 1973 and has won many prizes in key competitions around Europe.
Leonardo’s famous portrait has long been thought to be of Lisa Gherardini, the wife of a Florentine silk merchant. It is thought the Renaissance master painted it between 1503 and 1517 while he worked in Florence, and then later in France. However, Cotte has used the Layer Amplification Method (LAM), a process of projecting a series of intense lights on to the painting and measuring their reflections to see how it was created. Cotte, a LAM pioneer, explained in the BBC Two documentary The Secrets of the Mona Lisa: “We can now analyse exactly what is happening inside the layers of the paint and we can peel like an onion all the layers of the painting. We can reconstruct all the chronology of the creation of the painting.” After reconstructing the layers found underneath the surface of the Mona Lisa, Cotte said: “I was in front of the portrait and she is totally different to Mona Lisa today. This is not the same woman.” Instead of the front-on gaze of the Mona Lisa, this hidden portrait shows a woman looking off to the side. Cotte also claims the secret sitter has a larger head and nose, bigger hands and, importantly, smaller lips than those used for the famous Mona Lisa smile. The change in sitter could be the key to a totally different history behind the portrait. Andrew Graham-Dixon, the art historian presenting the documentary, claims that “if this computer image represents the original portrait of Mona Lisa, it was a portrait her husband never received. Instead, Leonardo went on to paint the world’s most famous picture over the top.” Some historians, however, are reluctant to believe the hidden portrait is of another woman. Martin Kemp, Emeritus Professor of the History of Art at the University of Oxford, says Cotte’s claims are “untenable”. He told the BBC: “[Cotte's images] are ingenious in showing what Leonardo may have been thinking about. But the idea that there is that picture as it were hiding underneath the surface is untenable. 
“I do not think there are these discrete stages which represent different portraits. I see it as more or less a continuous process of evolution. I am absolutely convinced that the Mona Lisa is Lisa.” The history of the world’s most famous painting has been riddled with identity crises, however. Within decades of the Mona Lisa being painted, speculation began that there was a pair of similar portraits. In 1584, an artist named Giovanni Paolo Lomazzo wrote that “the two most beautiful and important portraits by Leonardo are the ‘Mona Lisa’ and the ‘Gioconda.’”
Trying to figure out how the first supermassive black holes of the universe were born is not a simple task, and it is something that scientists have been trying to discover for years. This time it appears that they might have cracked the mystery. A new study indicates that the “halos” of dark matter might have played a big part. “In this study, we have uncovered a totally new mechanism that sparks the formation of massive black holes in particular dark-matter halos,” explained study lead author John Wise, an associate professor in the Center for Relativistic Astrophysics at the Georgia Institute of Technology. What did the scientists analyze? Until now it was believed that radiation from other galaxies led to the birth of black holes. The previous theories indicated that radiation from nearby galaxies kept gas from forming normal stars, leaving that material free to collapse into black holes. “Instead of just considering radiation, we need to look at how quickly the halos grow,” Wise explained. “We don’t need that much physics to understand it — just how the dark matter is distributed and how gravity will affect that. Forming a massive black hole requires being in a rare region with an intense convergence of matter.” The team examined simulations that modeled the early stages of the universe’s evolution. They soon discovered dark matter halos that contained gas clouds but no stars. The team used more simulations to analyze two of those halos. “It was only in these overly dense regions of the universe that we saw these black holes forming,” Wise said. “The dark matter creates most of the gravity, and then the gas falls into that gravitational potential, where it can form stars or a massive black hole.” Karen and her husband live on a plot of land in British Columbia. They aim to grow and raise a significant part of their food by maintaining a vegetable garden, keeping a flock of backyard chickens and foraging. They are also currently planning a move to a small cabin they hand built. 
Karen’s academic background in nutrition made her care deeply about real food and seek ways to obtain it. Thus sprang her interest in backyard gardening, chicken and goat keeping, recycling and self-sufficiency.
February 9, 2006 Astronomers using the 10-meter Keck II Telescope on Hawaii's Mauna Kea have refined the mutual orbit of asteroid 617 Patroclus and its companion. The pair is the only known binary object among the 1,900 asteroids the giant planet Jupiter shepherds around the Sun. Once astronomer Franck Marchis at the University of California, Berkeley, and his colleagues modeled the orbit, they could determine the density of the two asteroids. These asteroids, they say, have densities closer to those of comets than rocks. Patroclus and its companion, provisionally named Menoetius, are less dense than water, which means they're probably made of water ice and coated with a patina of dirt. "It's our suspicion that the Trojans are small Kuiper Belt objects," says Marchis. Trojan asteroids are those that lead or follow Jupiter by 60° in its orbit around the Sun. The gravity of Jupiter and the Sun balance at these locations, allowing objects to accumulate there. According to the Minor Planet Center in Cambridge, Massachusetts, astronomers have cataloged about 1,900 of these space rocks. They are relatively small and faint, which makes them difficult to study even with the world's largest ground-based telescopes. Astronomers found Patroclus in 1906. In October 2001, William Merline at the Southwest Research Institute in Boulder, Colorado, and his colleagues found its companion, but their observations weren't detailed enough to determine the components' orbit. Marchis observed the pair using the Keck II Telescope's Near-Infrared Camera (NIRC2) in November 2004 and May 2005. At the same time, Keck Observatory astronomers were commissioning a new adaptive-optics system that uses a sodium laser to create an artificial star. Monitoring the laser's star helps the telescope adapt to the blurring effects of Earth's atmosphere. 
"Before, we could only look at objects near a bright reference star, limiting the use of adaptive optics to a small percentage of the heavens," Marchis says. "Now, we can use adaptive optics to view almost any point on the sky." The system produced images clear enough to estimate the mutual orbit of Patroclus and its kin. Patroclus is about 76 miles (122 km) wide, while Menoetius is slightly smaller (70 miles, or 112 km). The two objects are separated by 423 miles (680 km) and circle their common center of mass every 4.3 days. The team calculates both objects have densities as low as 0.8 gram per cubic centimeter, light enough to float in water. Both asteroids are named for heroes of Homer's Iliad, a tale of the Trojan War. Patroclus was named for Achilles' best friend; Menoetius was Patroclus' father. "This is the first time anyone has determined directly the density of a Trojan asteroid," says team member Daniel Hestroffer, an astronomer at the Paris Observatory's Institute of Celestial Mechanics and Ephemerides Calculation. The measurement appears to validate an idea put forward last year by Côte d'Azur Observatory's Alessandro Morbidelli and his coworkers. Morbidelli's team suspects Trojan asteroids formed in the Kuiper Belt beyond Neptune. Computer simulations suggest the giant gas planets Jupiter through Neptune migrated outward less than 1 billion years after they formed. A dense disk of icy planetesimals orbited at the edge of the planetary system; its remnant is today's Kuiper Belt. The presence of this massive disk forced the gas giants to move outward. When Saturn and Jupiter moved far enough outward that Jupiter orbited twice for every revolution of Saturn, drastic changes ensued. Uranus and Neptune careened into the disk, where they stirred up planetesimal orbits. Some of these small objects shot toward the Sun, perhaps creating the devastating round of impacts astronomers call the Late Heavy Bombardment. 
Others were ejected from the solar system altogether, or formed the Kuiper Belt we see today. Jupiter couldn't retain its Trojan asteroids during that event, says Morbidelli. But once its 2:1 relationship with Saturn ended and things settled down, any icy objects in Jupiter's vicinity were trapped. Later, Patroclus may have split in two after a too-close pass with the giant planet. "We need to discover more binary Trojans and observe them to see if low density is a characteristic of all Trojans," Marchis cautions. If it is, objects from the Kuiper Belt may be more accessible than scientists ever suspected.
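The quoted density can be sanity-checked with a few lines of arithmetic: Kepler's third law gives the pair's combined mass from the separation and orbital period reported above, and modeling both bodies as spheres gives the combined volume. This is an illustrative back-of-the-envelope sketch, not the team's actual method, which fit a full orbital model to the Keck observations.

```python
import math

# Figures taken from the article; G is the gravitational constant.
G = 6.674e-11            # m^3 kg^-1 s^-2
a = 680e3                # separation in meters (423 miles / 680 km)
P = 4.3 * 86400          # orbital period in seconds (4.3 days)
r1 = 122e3 / 2           # Patroclus radius in meters (122 km diameter)
r2 = 112e3 / 2           # Menoetius radius in meters (112 km diameter)

# Kepler's third law: combined mass of the pair from the mutual orbit.
total_mass = 4 * math.pi**2 * a**3 / (G * P**2)

# Model both bodies as spheres to estimate the combined volume.
volume = (4 / 3) * math.pi * (r1**3 + r2**3)

density = total_mass / volume  # kg per cubic meter
print(f"combined mass: {total_mass:.2e} kg")
print(f"bulk density: {density / 1000:.2f} g/cm^3")  # ~0.8, below water's 1.0
```

The result lands right at the 0.8 g/cm³ the team reports, below the 1.0 g/cm³ of water, which is what motivates the ice-rich interpretation.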
Washing groundstone for pollen, starch, and phytoliths is a much more direct measure of plants that might have been processed than collecting sediments under or next to groundstone. We have developed methods of washing groundstone to minimize recovery of post-depositional sediment, thus reducing the added background signature. If it is possible to wash groundstone or ceramic sherds or vessels to recover pollen, phytoliths, or starches that might represent plants processed, this should be done. Sediment samples can be collected as controls. Examples of pollen recovery from groundstone include maize/corn, beeweed, Cheno-ams, mustard family, cattail, and many other plants. Washing groundstone for evidence of food processing provides evidence not only that a particular plant was processed, but that grinding was part of the processing. To date we have examined relatively few groundstone for phytolith evidence of plant grinding. Calcium oxalate raphides produced by cattail roots also were recovered from groundstone. Other than maize/corn phytoliths and cattail raphides, no other examples of foods have been recovered yet. Starches should be "food for bacteria and other soil micro-organisms", but as with all things in nature, it is an imperfect system. Some of the starches simply survive. Starches provide a particularly good record of grinding roots/tubers because these foods do not leave seeds or pollen. When roots/tubers are collected while the plants are in flower, the flowers transport pollen to the processing area, which allows portions of the pollen record to represent collection and processing of roots/tubers. However, when roots/tubers are not collected while the plants are in flower, there is no transport mechanism. Many starches survive our pollen extraction process, meaning that we can identify them when we see them in pollen samples. 
As a general rule, starches from roots/tubers have eccentric hila, meaning the hilum (which often appears as a dark spot under the microscope) is off-center. Seeds, on the other hand, usually produce starches with centric hila. We have not observed Cheno-am starch in the record, although we have seen many maize/corn-type starches in groundstone wash samples. A cross-polar illuminator (or crossed nicols) is necessary to examine starches well enough to identify them. Some starches have a rather generic form, while others are specific to either genus or species. Many plants produce several different types of starches in a single organ, meaning that one must learn to identify populations of starches, rather than relying on single starches. We have noted starches in human tooth calculus, groundstone washes, ceramic washes, washes of Poverty Point Objects, floor samples, other sediment samples, and in nearly every type of provenience that we have examined for evidence of food processing.
Drilling in English Language Teaching This article is about drilling in English language teaching, a technique that is still being used by many teachers although it has been discredited in modern methods. The article will discuss the following points: - definition of drilling, - types of drills, - the reasons why drilling is now discredited, - drilling and fluency. What is drilling? Drilling refers to a type of audio-lingual technique based on students repeating a model provided by the teacher. The focus is on accuracy rather than fluency. Drills are typically used to practice pronunciation, vocabulary items, and grammatical structures. This technique is still used by many teachers in many parts of the world although the theory behind it -behaviorism- was discredited a long time ago. Types of drilling There are different types of drills: Repetition or imitation drills Basically, the teacher says a model and the students repeat it. Prompt: I didn’t like the TV program, so I went to sleep. Response: I didn’t like the TV program, so I went to sleep. Substitution drills Teachers use substitution drills to practice structures or vocabulary items. The idea is that one or more words change during the drill. Prompt: Leila is a very beautiful girl (intelligent). Response: Leila is a very intelligent girl. Prompt: John is helpful (modest). Response: John is modest. Question and answer drills Question and answer drills refer to the use of questions as prompts. Students provide the answer in a very controlled way. Prompt: Is there a teacher in the classroom? Response: Yes, there is. Prompt: Are there any desks in the classroom? Response: Yes, there are. Prompt: What’s the matter? Response: I have a (backache). Prompt: What’s the matter? Response: I have a (toothache). Transformation drills Students are given a structure to be transformed. Prompt: Nancy made tea. Response: Tea was made by Nancy. Prompt: I like orange juice. (She) Response: She likes orange juice. Prompt: New York is the capital of the USA. 
(not) Response: New York is not the capital of the USA. Choral drills The teacher asks the whole class to repeat the model all together. What’s the problem with drills? Drills are not appreciated in modern methods because: - They are not meaningful. - Focus is on accuracy. - They are mechanical. - They don’t convey much meaning. - They are decontextualized. - Drills help fix structures in memory only for a short period of time. Drills and fluency As can be seen from the examples above, drills focus on accuracy and are mechanical. However, many teachers think that drills may have some advantages in ELT, especially if the focus is shifted from accuracy to fluency. Drills may be exploited in learner-centered activities to help students gain fluency. In such fluency-based drills, students may have a chance to try to say things without hesitation, at the right speed, and without undue pauses. To reach that objective, teachers provide short formulaic language (or chunks) for students to practice. But instead of repeating these chunks meaninglessly, students have to be given a context and enough time to process and internalize these chunks at their own pace and using their own strategies. One option is a kind of “mumble drill” or “mutter drill”, whereby learners repeat under their breath (i.e. sub-vocalise) the targeted segment, in their own time, so as to get some kind of ownership of it. Yes, drills can be made more meaningful. For instance, giving students choices in their replies to prompts may provide more freedom and creativity. If you allow students to choose from different options, this means that they have to think before they answer. Drills mustn’t provide more control than is necessary (although they are by definition techniques that exert some control over students’ production to minimize errors). This is an example of a meaningful drill to practice the modal should: Student 1: I’ve got a bad toothache. Student 2: You should see a dentist. 
Student 3: You should brush your teeth regularly. Student 4: You shouldn’t eat candies. Here is another example to practice could: Prompt: I’m so bored. Response 1: You could watch a movie. Response 2: You could go jogging. Response 3: You could hang out with your friends. Response 4: You could go to the theater. Response 5: You could listen to your favorite music. Response 6: You could read a book. The above exchange is more meaningful because the responses are unpredictable and they give students an opportunity for some creativity in spite of the controlled aspect of the drill. Chain drills can also be made more meaningful by personalizing them: Student 1: My name is Ann, and I am mad about watching TV. What about you? Student 2: My name is Clara, and I love surfing. And you? Student 3: My name is John, and I like reading. What about you? Student 4: My name is Lisa, and I am crazy about playing the guitar. And you? Student 5: My name is Alan, and I am fond of …… And you? Of course these drills may be made more challenging according to the level of the students. In a nutshell, over-drilling structures and vocabulary items may not be helpful in language teaching. Drills must be integrated into meaningful activities if they are to be of any use. Accuracy-based drills that focus on meaningless repetition have been discredited since the advent of communicative language teaching. Nowadays, the role of controlled oral practice is being reconsidered. The idea is to make such practice more communicative; the aim is to reach fluency and natural communication.
The square or rectangular shaped diode(s), used to collect the light striking the image during the exposure, is referred to as the image sensor. The CCD (Charge-Coupled Device) is one of the most popular types of digital image sensors (imaging chips) used in digital cameras. Another type is the CMOS (Complementary Metal Oxide Semiconductor), a newer technology than the CCD that is also becoming popular. CCD (Charge-Coupled Device) The square or rectangular shaped diodes are the most often used CCD types. Newer types use octagonal-shaped diodes, which can be configured into more diodes per inch, resulting in a more detailed image. The image sensor changes the light it senses into numbers, or data, that represent different levels of brightness. The sensor measures the level of red, green, and blue and makes a color interpolation, assigning values to each image pixel. The CCD may produce 4 MB of color data which, when interpolated (values for the missing color channels are added), increases to 12 MB of data, becoming a 12 MB image file. To capture an image, digital cameras use the CCD technologies single-pass (one shot), 3-pass, 4-pass, or scanning processes, several of which may be selectable options on one camera. - Single pass captures an image with one exposure and is best used for action shots or any images in which movement occurs. The resolution is most often lower than that of the multi-pass modes. - 3- and 4-shot exposures provide higher resolution and are best used for still subjects. - A standard 3- or 4-pass exposure scans the image for red, green, and blue colors (RGB processing). The 4-pass process will scan green twice in order to separate the component colors correctly. Since the images are shot 3 or 4 times (one each for red and blue, and one or two for green) there must be no movement or the image will be blurred. - Scanning exposure, most often found on camera backs, creates the largest file size with the highest resolution. 
Images of products or any non-moving subjects that will be enlarged are best produced with scanning technology. - Cameras using scanning technology do not interpolate color information, since they contain rows of sensors (one red, one green, and one blue) which collect the color information on the entire image as it is scanned line by line. CMOS (Complementary Metal Oxide Semiconductor) The CMOS is a widely used type of semiconductor that also serves as an imaging sensor in digital cameras. It uses both negative and positive polarity circuits, with only one of the circuit types on at any time. This configuration allows the CMOS to use less power than CCD technology. The chips are well suited for battery-powered devices such as digital cameras and portable computers because the lower power consumption provides more operational time. Battery-powered CMOS memory is also used in personal computers to maintain the date, the time, and the system setup commands after the main power source has been switched off. The CMOS chip is known as a "camera on a chip" because of the advantages it has over CCD technology. CCDs require several support chips to function, are more expensive to produce, and require more power than CMOS. CCDs are still the best choice for applications requiring the highest level of quality because of the high resolution and high definition that they provide, but CMOS technology is improving and is becoming much more common in lower-cost cameras.
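The single-pass color interpolation described above can be sketched in code. This is a minimal, hypothetical illustration, not any camera's actual firmware: each photosite records only one of red, green, or blue (here a Bayer-style RGGB mosaic), and the two missing channels at each pixel are filled in from neighboring photosites, which is why interpolation roughly triples the data (e.g. 4 MB of single-channel samples becoming a 12 MB three-channel image). Real cameras use more sophisticated bilinear or edge-aware filters than the simple nearest-neighbor lookup shown here.

```python
def demosaic_rggb(raw):
    """raw: 2D list of sensor values laid out in an RGGB Bayer pattern.
    Returns a 2D list of (r, g, b) tuples of the same dimensions."""
    h, w = len(raw), len(raw[0])

    def channel_at(y, x):
        # Which color the photosite at (y, x) actually measured.
        if y % 2 == 0:
            return "r" if x % 2 == 0 else "g"
        return "g" if x % 2 == 0 else "b"

    def nearest(y, x, want):
        # Nearest-neighbor fill: search the 2x2 RGGB cell containing (y, x).
        cy, cx = (y // 2) * 2, (x // 2) * 2
        for dy in (0, 1):
            for dx in (0, 1):
                ny, nx = min(cy + dy, h - 1), min(cx + dx, w - 1)
                if channel_at(ny, nx) == want:
                    return raw[ny][nx]
        return 0

    return [[(nearest(y, x, "r"), nearest(y, x, "g"), nearest(y, x, "b"))
             for x in range(w)] for y in range(h)]

# A 2x2 mosaic: one red, two green, and one blue photosite.
raw = [[200, 120],
       [130,  90]]
print(demosaic_rggb(raw))
```

Every input value covers one channel of one photosite, while every output pixel carries three channel values, illustrating the threefold growth in data.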
This page contains information to support educators and families in teaching K-3 students about geography, continents, and oceans. The information is designed to complement the BrainPOP Jr. movie Continents and Oceans. It explains the type of content covered in the movie, provides ideas for how teachers and parents can develop related understandings, and suggests how other BrainPOP Jr. resources can be used to scaffold and extend student learning. The world is a big place! Help children develop a better understanding of geography and learn about the world around them. In this movie, children will learn about Earth’s continents: North America, South America, Europe, Asia, Africa, Australia, and Antarctica. Children will learn a few key details about each continent, including their location on a map. They will also explore our planet’s five oceans: the Atlantic, Pacific, Indian, Southern, and Arctic Oceans. We recommend looking at maps together and helping children find their own location and other places of interest around the world. You may want to screen the Reading Maps movie as a review. Remind children that a continent is one of Earth’s large landmasses. It is important to note that some cultures divide the continents differently. For example, some people group North and South America as one continent because the two landmasses are joined by an isthmus; others consider Europe and Asia as one continent since it is one land mass divided by the Ural Mountain range. But many people feel that the enormous historical and cultural differences between the Americas, and between Europe and Asia, justify their separation into distinct continents. Most people agree that there are five main oceans in the world. The Arctic Ocean is in the far north, and the Southern Ocean surrounds Antarctica. The Pacific Ocean is to the west of North and South America, while the Atlantic Ocean is to the east. The Indian Ocean is bordered by Africa, Asia, and Australia. 
Remind children that there are other smaller bodies of water, such as the Caribbean Sea and the Mediterranean Sea. Look at a map together and point out other bodies of water. Are any bodies of water near your school visible on the map? Find North America on a map. Remind children that North America includes the United States of America, Canada, and Mexico. It also includes Central America, which is the long, narrow part of the continent that connects to South America. Discuss different landforms in North America together. You may want to review the Landforms movie together. Remind children that plains are wide, flat areas of land that often have rich soil. The Great Plains cover over 500,000 square miles of the central United States and Canada. The Rocky Mountains also cover parts of the United States and Canada—they stretch over 3,000 miles from the southern part of British Columbia in Canada to New Mexico in the United States. What are some other noteworthy landmarks in North America? Discuss with children and look at their location on a map. Remind children that the Equator is an imaginary line that goes around the middle of the Earth. Most of South America lies in the southern hemisphere, the area below the Equator. The Amazon rainforest is in South America and it is the largest rainforest in the world. You may want to screen the Rainforests movie as an extension, and highlight differences between North and South American climates and rainforests. South America is also home to the longest mountain range in the world, the Andes Mountains. This mountain range is over 4,000 miles long and extends across seven countries. Show a map of Africa and point out that parts of Africa lie in the northern hemisphere and other parts lie in the southern hemisphere. The largest desert in the world is the Sahara Desert and it is in Africa. This desert covers nearly 3,700,000 square miles and is almost as large as the entire United States. 
Africa is also home to the longest river in the world, the Nile River. Help children understand that people have been relying on the river for thousands of years, not only for drinking water but for food and transport. You may want to view the Ancient Egypt movie as an extension. Many children are familiar with animals such as giraffes, elephants, zebras, lions, cheetahs, and hippos. These animals are native to Africa, which in some cases is the only place where they are found in the wild. There are about fifty countries in Europe, but twenty-seven of them have come together to form the European Union to share resources and engage in commerce more easily. The Alps are a mountain range that stretches across parts of Europe. In northern Europe there are fjords, which are long, narrow inlets with steep sides. Fjords are created by glaciers, or large, slow-moving bodies of ice that cut large valleys. Asia is the world’s largest continent and the most populated. About 60% of the world’s population lives in Asia. The world’s tallest mountain, Mount Everest, is in Asia, on the border between Nepal and Tibet. Mount Everest is nearly 30,000 feet high. Asia is also home to the lowest place on Earth, the Dead Sea, which is a salt lake on the border between Israel and Jordan. The Dead Sea is one of the saltiest bodies of water—over 8 times saltier than the oceans—and is about 1,385 feet (422 meters) below sea level. Australia is the smallest continent. Help children understand that Australia is not only a continent, but also a country! Australia is entirely in the southern hemisphere, which is why people call it the land “down under.” The Outback is the remote, arid region of Australia that is far from urban areas. But Australia is also home to rainforests and the Great Barrier Reef, which is the largest reef system in the world and the largest structure made by living organisms, the coral polyps. It can even be seen from space! 
You may want to view our Ocean Habitats movie to learn more. Antarctica is the southernmost continent and it is where the South Pole is located. Help children understand that the continent is cold and windy and frozen in ice all year long—even in the summer. Although it is not hot like the Sahara, Antarctica is still considered a desert because its maximum precipitation is approximately eight inches along the coasts, with even less inland. There are no permanent residents in Antarctica, but scientists do visit there for research. Understanding the different continents and oceans helps children build a better understanding of the world around them. Introduce them to places and cultures beyond their everyday experiences, and teach them their role as responsible global citizens.