What You Need:
- White watercolor paper, 8.5" x 11"
- Variety of colored construction paper
- Glue Stick
- Oil pastels, color pencils, or crayons
What You Do:
- Look at tons of photos on the Internet of buildings in cities from around the world with your child. Have her tell you all the shapes she sees and whether the buildings stand alone or overlap other buildings. Also show examples of how buildings appear smaller in the distance and how some buildings are so big that you only see parts of them behind other architecture. Cities with distinctive architecture include Denver, Frankfurt, Seattle, and Dubai.
- Have her draw out different sizes of rectangles and squares on the colored papers using a ruler and then cut them out. These will be the basis of all buildings.
- Using a ruler or working freehand, she can draw a variety of shapes that will sit on top of the buildings, such as domes, triangles, arches, and circles. These should also be cut out.
- She can now lay all the shapes out on the white paper figuring out how she will plan her city. Encourage her to overlap buildings and alter the shapes of the rooftops. The buildings should span the entire width of the paper.
- With everything in place, she can now carefully glue all the structures in their places.
- Once glued, she can draw in the sky, windows and any other details she would like on the buildings.
- Hang up her finished city skyline.
Encourage her to create different cities or fantasy-scapes, including ones that may exist underwater or in outer space!
Charges in Electromagnetic Fields includes the following learning objectives:
Understand that a moving charge will feel a force within a magnetic field;
Recall and use F = qvB sin θ for a moving charge in a magnetic field (a worked numerical sketch follows this list);
Predict the direction and relative sizes of circular paths of moving charges;
Describe the deflection of a beam of charged particles as they pass through both an electric and magnetic field;
Explain how electric and magnetic fields can be used in velocity selection;
Describe one method for the determination of the mass of an electron.
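As a quick illustration of the F = qvB sin θ relationship listed above, here is a minimal sketch; the charge, speed, and field values are invented for the example and are not taken from the course materials.

```python
import math

# Illustrative values only: an electron-sized charge moving perpendicular to a 0.5 T field.
q = 1.6e-19               # charge in coulombs
v = 2.0e6                 # speed in m/s
B = 0.5                   # magnetic flux density in tesla
theta = math.radians(90)  # angle between velocity and field

F = q * v * B * math.sin(theta)  # force in newtons
print(f"F = {F:.2e} N")          # -> F = 1.60e-13 N
```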
Theory of mind
Theory of Mind (often abbreviated ToM) is the ability to attribute mental states — beliefs, intents, desires, pretending, knowledge, etc. — to oneself and others, and to understand that others have beliefs, desires, intentions, and perspectives that are different from one's own. Deficits can occur in people with autism spectrum disorders, schizophrenia, or attention deficit hyperactivity disorder, as well as in alcoholics who have suffered brain damage due to alcohol's neurotoxicity. Although philosophical approaches to this exist, the theory of mind as such is distinct from the philosophy of mind.
Theory of mind is a theory insofar as the mind is not directly observable. The presumption that others have a mind is termed a theory of mind because each human can only intuit the existence of their own mind through introspection, and no one has direct access to the mind of another. It is typically assumed that others have minds by analogy with one's own, and this assumption is based on reciprocal social interaction, as observed in joint attention, the functional use of language, and the understanding of others' emotions and actions. Having a theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. As originally defined, it enables one to understand that mental states can be the cause of—and thus be used to explain and predict—the behavior of others. Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, that one must be able to conceive of the mind as a "generator of representations". If a person does not have a complete theory of mind, it may be a sign of cognitive or developmental impairment.
Theory of mind appears to be an innate potential ability in humans: one requiring social and other experience over many years for its full development. Different people may develop more, or less, effective theories of mind. Empathy is a related concept, meaning the recognition and understanding of the states of mind of others, including their beliefs, desires and particularly emotions. This is often characterized as the ability to "put oneself into another's shoes". Recent neuroethological studies of animal behaviour suggest that even rodents may exhibit ethical or empathetic abilities. Neo-Piagetian theories of cognitive development maintain that theory of mind is a byproduct of a broader hypercognitive ability of the human mind to register, monitor, and represent its own functioning.
Research on theory of mind, in humans and animals, adults and children, normally and atypically developing, has grown rapidly in the 35 years since Premack and Woodruff's paper, "Does the chimpanzee have a theory of mind?". The emerging field of social neuroscience has also begun to address this debate, by imaging the brains of humans while they perform tasks demanding the understanding of an intention, belief or other mental state in others.
An alternative account of theory of mind is given within operant psychology and provides significant empirical evidence for a functional account of both perspective taking and empathy. The most developed operant approach is founded on research on derived relational responding and is subsumed within what is called "Relational Frame Theory". According to this view, empathy and perspective taking comprise a complex set of derived relational abilities based on learning to discriminate and respond verbally to ever more complex relations between self, others, place, and time, and the transformation of function through established relations.
Philosophical and psychological roots
Contemporary discussions of ToM have their roots in philosophical debate—most broadly, from the time of Descartes' Second Meditation, which set the groundwork for considering the science of the mind. Most prominent recently are two contrasting approaches to theory of mind in the philosophical literature: theory-theory and simulation theory. The theory-theorist imagines a veritable theory—"folk psychology"—used to reason about others' minds. The theory is developed automatically and innately, though instantiated through social interactions. It is also closely related to person perception and attribution theory from social psychology.
The intuitive assumption that others are minded is an apparent tendency we all share. We anthropomorphize non-human animals, inanimate objects, and even natural phenomena. Daniel Dennett referred to this tendency as taking an "intentional stance" toward things: we assume they have intentions, to help predict future behavior. However, there is an important distinction between taking an "intentional stance" toward something and entering a "shared world" with it. The intentional stance is a detached and functional theory we resort to during interpersonal interactions. A shared world is directly perceived, and its existence structures reality itself for the perceiver. It is not just automatically applied to perception; it in many ways constitutes perception.
The philosophical roots of the Relational Frame Theory account of ToM arise from contextual psychology and refer to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. It is an approach based on contextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in which a radically functional approach to truth and meaning is adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology.
The study of which animals are capable of attributing knowledge and mental states to others, as well as the development of this ability in human ontogeny and phylogeny, has identified several behavioral precursors to a theory of mind. Understanding attention, understanding of others' intentions, and imitative experience with other people are hallmarks of a theory of mind that may be observed early in the development of what later becomes a full-fledged theory. In studies with non-human animals and pre-verbal humans, in particular, researchers look to these behaviors preferentially in making inferences about mind.
Simon Baron-Cohen identified the infant's understanding of attention in others, a social skill found by 7 to 9 months of age, as a "critical precursor" to the development of theory of mind. Understanding attention involves understanding that seeing can be directed selectively as attention, that the looker assesses the seen object as "of interest", and that seeing can induce beliefs. Attention can be directed and shared by the act of pointing, a joint attention behavior that requires taking into account another person's mental state, particularly whether the person notices an object or finds it of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the world as of interest ("protodeclarative pointing") and to likewise appreciate the directed attention and interests of another may be the underlying motive behind all human communication.
Understanding of others' intentions is another critical precursor to understanding other minds because intentionality, or "aboutness", is a fundamental feature of mental states and events. The "intentional stance" has been defined by Daniel Dennett as an understanding that others' actions are goal-directed and arise from particular beliefs or desires. Both 2- and 3-year-old children could discriminate when an experimenter intentionally versus accidentally marked a box as baited with stickers. Even earlier in ontogeny, Andrew N. Meltzoff found that 18-month-old infants could perform target manipulations that adult experimenters attempted and failed, suggesting the infants could represent the object-manipulating behavior of adults as involving goals and intentions. While attribution of intention (the box-marking) and knowledge (false-belief tasks) is investigated in young humans and nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that even adult humans do not always act in a way consistent with an attributional perspective. In the experiment, adult human subjects made choices about baited containers when guided by confederates who could not see (and therefore could not know) which container was baited.
Recent research in developmental psychology suggests that the infant's ability to imitate others lies at the origins of both a theory of mind and other social-cognitive achievements like perspective-taking and empathy. According to Meltzoff, the infant's innate understanding that others are "like me" allows it to recognize the equivalence between the physical and mental states apparent in others and those felt by the self. For example, the infant uses his own experience of orienting his head and eyes toward an object of interest to understand the movements of others who turn toward an object, that is, that they will generally attend to objects of interest or significance. Some researchers in comparative disciplines have hesitated to place too much weight on imitation as a critical precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if true imitation is no longer employed by adults. A test of imitation by Alexandra Horowitz found that adult subjects imitated an experimenter demonstrating a novel task far less closely than children did. Horowitz points out that the precise psychological state underlying imitation is unclear and cannot, by itself, be used to draw conclusions about the mental states of humans.
Whether children younger than 3 or 4 years old may have a theory of mind is a topic of debate among researchers. It is a challenging question, due to the difficulty of assessing what pre-linguistic children understand about others and the world. Tasks used in research into the development of ToM must take into account the umwelt (German for "environment" or "surrounding world") of the pre-verbal child.
One of the most important milestones in theory of mind development is gaining the ability to attribute false belief: that is, to recognize that others can have beliefs about the world that diverge from reality. To do this, it is suggested, one must understand how knowledge is formed, that people's beliefs are based on their knowledge, that mental states can differ from reality, and that people's behavior can be predicted by their mental states. Numerous versions of the false-belief task have been developed, based on the initial task done by Wimmer and Perner (1983).
In the most common version of the false-belief task (often called the "'Sally-Anne' test" or "'Sally-Anne' task"), children are told or shown a story involving two characters. For example, the child is shown two dolls, Sally and Anne, who have a basket and a box, respectively. Sally also has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. Sally returns, and the child is then asked where Sally will look for the marble. The child passes the task if she answers that Sally will look in the basket, where she put the marble; the child fails the task if she answers that Sally will look in the box, where the child knows the marble is hidden, even though Sally cannot know this, since she did not see it hidden there. To pass the task, the child must be able to understand that another’s mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding.
Another example is when a boy leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy upon returning holds the false belief that his chocolate is still on the shelf.
The results of research using false-belief tasks have been fairly consistent: most normally developing children are unable to pass the tasks until around age four. Notably, while most children, including those with Down syndrome, are able to pass this test, in one study, 80% of children diagnosed with autism were unable to do so.
Adults can also experience problems with false beliefs, for instance when they show hindsight bias, defined as "the inclination to see events that have already happened as being more predictable than they were before they took place." For instance, in an experiment by Fischhoff in 1975, adult subjects who were asked for an independent assessment were unable to disregard information about the actual outcome. In experiments with complicated situations, too, adults assessing others' thinking can be unable to disregard certain information that they have been given.
Other tasks have been developed to try to solve the problems inherent in the false-belief task. In the "Unexpected contents", or "Smarties", task, experimenters ask children what they believe to be the contents of a box that looks as though it holds a candy called "Smarties". After the child guesses (usually) "Smarties", it is shown that the box in fact contains pencils. The experimenter then re-closes the box and asks the child what she thinks another person, who has not been shown the true contents of the box, will think is inside. The child passes the task if she responds that another person will think there are Smarties in the box, but fails the task if she responds that another person will think the box contains pencils. Gopnik & Astington (1988) found that children pass this test at age four or five years.
The "false-photograph" task is another task that serves as a measure of theory of mind development. In this task, children must reason about what is represented in a photograph that differs from the current state of affairs. Within the false-photograph task, either a location or identity change exists. In the location-change task, the examiner puts an object in one location (e.g., chocolate in an open green cupboard), whereupon the child takes a Polaroid photograph of the scene. While the photograph is developing, the examiner moves the object to a different location (e.g., a blue cupboard), allowing the child to view the examiner's action. The examiner asks the child two control questions: "When we first took the picture, where was the object?" and "Where is the object now?". The subject is also asked a "false-photograph" question: "Where is the object in the picture?" The child passes the task if he/she correctly identifies the location of the object in the picture and the actual location of the object at the time of the question. However, the last question might be misinterpreted as: "Where in this room is the object that the picture depicts?" and therefore some examiners use an alternative phrasing.
To make it easier for animals, young children, and individuals with classical (Kanner-type) autism to understand and perform theory-of-mind tasks, researchers have developed tests in which verbal communication is de-emphasized: some whose administration does not involve verbal communication on the part of the examiner, some whose successful completion does not require verbal communication on the part of the subject, and some that meet both of the foregoing standards. One category of tasks uses a preferential looking paradigm, with looking time as the dependent variable. For instance, 9-month-old infants prefer looking at behaviors performed by a human hand over those made by an inanimate hand-like object. Other paradigms look at rates of imitative behavior, the ability to replicate and complete unfinished goal-directed acts, and rates of pretend play.
Recent research on the early precursors of theory of mind has looked at innovative ways of capturing prelinguistic infants' understanding of other people's mental states, including perception and beliefs. Using a variety of experimental procedures, studies have shown that infants in their second year of life have an implicit understanding of what other people see and what they know. A popular paradigm used to study infants' theory of mind is the violation-of-expectation procedure, which relies on infants' tendency to look longer at unexpected and surprising events than at familiar and expected events. Their looking-time measures therefore give researchers an indication of what infants might be inferring, or of their implicit understanding of events. One recent study using this paradigm found that 16-month-olds tend to attribute beliefs to a person whose visual perception was previously witnessed as being "reliable" compared to someone whose visual perception was "unreliable". Specifically, 16-month-olds were trained to expect a person's excited vocalization and gaze into a container to be associated with finding a toy in the reliable-looker condition or an absence of a toy in the unreliable-looker condition. Following this training phase, infants witnessed, in an object-search task, the same person searching for a toy in either the correct or the incorrect location after both had witnessed where the toy was hidden. Infants who experienced the reliable looker were surprised, and therefore looked longer, when the person searched for the toy in the incorrect location compared to the correct location. In contrast, the looking times of infants who experienced the unreliable looker did not differ between the two search locations. These findings suggest that 16-month-old infants can differentially attribute beliefs about a toy's location based on the person's prior record of visual perception.
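A minimal sketch of how looking times in a violation-of-expectation design of this general kind might be compared is shown below; all numbers are invented for illustration and are not the procedure or data from the study cited above.

```python
import numpy as np
from scipy import stats

# Hypothetical looking times (seconds) for infants in the reliable-looker condition.
incorrect_search = np.array([14.2, 12.8, 15.1, 13.5, 16.0, 12.9])  # unexpected event
correct_search = np.array([9.8, 10.5, 8.9, 11.2, 9.4, 10.1])       # expected event

# Longer looking at the unexpected (incorrect) search is read as surprise,
# i.e., as evidence that the infants attributed a belief about the toy's location.
t, p = stats.ttest_rel(incorrect_search, correct_search)
print(f"mean difference = {np.mean(incorrect_search - correct_search):.2f} s, "
      f"t = {t:.2f}, p = {p:.4f}")
```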
The theory of mind (ToM) impairment describes a difficulty someone would have with perspective taking. This is also sometimes referred to as mind-blindness. This means that individuals with a ToM impairment would have a difficult time seeing phenomena from any other perspective than their own. Individuals who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behavior affects others, and have a difficult time with social reciprocity. ToM deficits have been observed in people with autism spectrum disorders, people with schizophrenia, people with Non-Verbal Learning Disorder, people with attention deficit disorder, persons under the influence of alcohol and narcotics, sleep-deprived persons, and persons who are experiencing severe emotional or physical pain.
In 1985, Simon Baron-Cohen, Alan M. Leslie and Uta Frith proposed that children with autism do not employ a theory of mind, and suggested that they have particular difficulties with tasks requiring the child to understand another person's beliefs. These difficulties persist when children are matched for verbal skills and have been taken as a key feature of autism.
Many individuals classified as having autism have severe difficulty assigning mental states to others, and they seem to lack theory of mind capabilities. Researchers who study the relationship between autism and theory of mind attempt to explain the connection in a variety of ways. One account assumes that theory of mind plays a role in the attribution of mental states to others and in childhood pretend play. According to Leslie, theory of mind is the capacity to mentally represent thoughts, beliefs, and desires, regardless of whether or not the circumstances involved are real. This might explain why individuals with autism show extreme deficits in both theory of mind and pretend play. However, Hobson proposes a social-affective justification, which suggests that the theory of mind deficits of a person with autism result from a distortion in understanding and responding to emotions. He suggests that typically developing human beings, unlike individuals with autism, are born with a set of skills (such as social referencing ability) that later lets them comprehend and react to other people's feelings. Other scholars emphasize that autism involves a specific developmental delay, so that children with the impairment vary in their deficiencies, because they experience difficulty in different stages of growth. Very early setbacks can alter proper advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind.
It has been speculated that ToM exists on a continuum as opposed to the traditional view of a discrete presence or absence. While some research has suggested that some autistic populations are unable to attribute mental states to others, recent evidence points to the possibility of coping mechanisms that facilitate a spectrum of mindful behavior. Tine et al. suggest that children with autism score substantially lower on measures of social theory of mind in comparison to children with Asperger syndrome.
Individuals with the diagnosis of schizophrenia can show deficits in theory of mind. Mirjam Sprong and colleagues investigated the impairment by examining 29 different studies, with a total of over 1,500 participants, all of whom were taking medications that affect the mind. This meta-analysis showed a significant and stable deficit of theory of mind in people with schizophrenia. They performed poorly on false-belief tasks, which test the ability to understand that others can hold false beliefs about events in the world, and also on intention-inference tasks, which assess the ability to infer a character's intention from reading a short story. Schizophrenia patients with negative symptoms, such as lack of emotion, motivation, or speech, have the most impairment in theory of mind and are unable to represent the mental states of themselves and of others. Paranoid schizophrenic patients also perform poorly because they have difficulty accurately interpreting others' intentions. The meta-analysis additionally showed that IQ, gender, and age of the participants do not significantly affect performance on theory of mind tasks; whether the medications themselves contribute to the measured deficits is not questioned in these studies.
Current research suggests that impairment in theory of mind negatively affects clinical insight, the patient's awareness of their mental illness. Insight requires theory of mind—a patient must be able to adopt a third-person perspective and see the self as others do. A patient with good insight is able to self-represent accurately, by comparing themselves with others and by viewing themselves from the perspective of others. Insight allows a patient to recognize and react appropriately to their symptoms; a patient who lacks insight, however, does not realize that they have a mental illness, because of their inability to self-represent accurately. Therapies that teach patients perspective-taking and self-reflection skills can improve abilities in reading social cues and taking the perspective of another person.
The majority of the current literature supports the argument that the theory of mind deficit is a stable trait-characteristic rather than a state-characteristic of schizophrenia. The meta-analysis conducted by Sprong et al. showed that patients in remission still had impairment in theory of mind. The results indicate that the deficit is not merely a consequence of the active phase of schizophrenia.
Schizophrenic patients' deficit in theory of mind impairs their daily interactions with others. An example of a disrupted interaction is one between a schizophrenic parent and a child. Theory of mind is particularly important for parents, who must understand the thoughts and behaviors of their children and react accordingly. Dysfunctional parenting is associated with deficits in first-order theory of mind, the ability to understand another person's thoughts, and in second-order theory of mind, the ability to infer what one person thinks about another person's thoughts. Compared with healthy mothers, mothers with schizophrenia are found to be more remote, quiet, self-absorbed, insensitive, and unresponsive, and to have fewer satisfying interactions with their children. They also tend to misinterpret their children's emotional cues, and often misread neutral faces as negative. Activities such as role-playing and individual or group-based sessions are effective interventions that help parents improve perspective-taking and theory of mind. Although there is a strong association between theory of mind deficit and parental role dysfunction, future studies could strengthen the evidence by establishing whether theory of mind plays a causal role in parenting abilities.
The possible neurotoxic effects of neuroleptic medications on the brains of schizophrenic patients are largely ignored in this literature.
Alcohol use disorders
Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people suffering from alcoholism, due to the neurotoxic effects of alcohol on the brain, particularly on the prefrontal cortex.
Depression and dysphoria
Individuals in a current episode of major depressive disorder (MDD), a disorder characterized by social impairment, show deficits in theory of mind decoding. Theory of mind decoding is the ability to use information available in the immediate environment (e.g., facial expression, tone of voice, body posture) to accurately label the mental states of others. The opposite pattern, enhanced theory of mind, is observed in individuals vulnerable to depression, including individuals with past MDD, dysphoric individuals, and individuals with a maternal history of MDD.
In typically developing humans
Research on theory of mind in autism led to the view that mentalizing abilities are subserved by dedicated mechanisms that can (in some cases) be impaired while general cognitive function remains largely intact. Neuroimaging research has supported this view, demonstrating specific brain regions consistently engaged during theory of mind tasks. Early PET research on theory of mind, using verbal and pictorial story comprehension tasks, identified a set of regions including the medial prefrontal cortex (mPFC), an area around the posterior superior temporal sulcus (pSTS), and sometimes the precuneus and amygdala/temporopolar cortex. Subsequently, research on the neural basis of theory of mind has diversified, with separate lines of research focused on the understanding of beliefs, intentions, and more complex properties of minds such as psychological traits.
Studies from Rebecca Saxe's lab at MIT, using a false-belief versus false-photograph task contrast aimed at isolating the mentalizing component of the false-belief task, have very consistently found activation in mPFC, precuneus, and the temporo-parietal junction (TPJ), right-lateralized. In particular, it has been proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of others. However, some debate exists, as some scientists have noted that the same rTPJ region has been consistently activated during spatial reorienting of visual attention; Jean Decety from the University of Chicago and Jason Mitchell from Harvard have thus proposed that the rTPJ subserves a more general function involved in both false-belief understanding and attentional reorienting, rather than a mechanism specialized for social cognition. However, it is possible that the observation of overlapping regions for representing beliefs and attentional reorienting may simply be due to adjacent but distinct neuronal populations that code for each. The resolution of typical fMRI studies may not be good enough to show that distinct, adjacent neuronal populations code for each of these processes. In a study following Decety and Mitchell, Saxe and colleagues used higher-resolution fMRI and showed that the peak of activation for attentional reorienting is approximately 6–10 mm above the peak for representing beliefs. Further corroborating that differing populations of neurons may code for each process, they found no similarity in the patterning of fMRI response across space.
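The two comparisons described above (peak separation in millimeters and spatial similarity of voxel-wise response patterns) can be sketched as follows; the coordinates and response patterns below are invented for illustration and are not taken from the studies cited.

```python
import numpy as np

# Hypothetical activation peaks (mm coordinates) for the two contrasts near rTPJ.
peak_belief = np.array([54.0, -54.0, 24.0])
peak_attention = np.array([54.0, -50.0, 32.0])
print(f"peak separation: {np.linalg.norm(peak_belief - peak_attention):.1f} mm")

# Toy voxel-wise response patterns for each contrast; a near-zero correlation
# would indicate dissimilar spatial patterning across the region.
rng = np.random.default_rng(0)
pattern_belief = rng.standard_normal(200)
pattern_attention = rng.standard_normal(200)
r = np.corrcoef(pattern_belief, pattern_attention)[0, 1]
print(f"spatial pattern correlation: r = {r:.2f}")
```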
Functional imaging has also been used to study the detection of mental state information in Heider-Simmel-esque animations of moving geometric shapes, which typical humans automatically perceive as social interactions laden with intention and emotion. Three studies found remarkably similar patterns of activation during the perception of such animations versus a random or deterministic motion control: mPFC, pSTS, fusiform face area (FFA), and amygdala were selectively engaged during the ToM condition. Another study presented subjects with an animation of two dots moving with a parameterized degree of intentionality (quantifying the extent to which the dots chased each other), and found that pSTS activation correlated with this parameter.
A separate body of research has implicated the posterior superior temporal sulcus in the perception of intentionality in human action; this area is also involved in perceiving biological motion, including body, eye, mouth, and point-light display motion. One study found increased pSTS activation while watching a human lift his hand versus having his hand pushed up by a piston (intentional versus unintentional action). Several studies have found increased pSTS activation when subjects perceive a human action that is incongruent with the action expected from the actor’s context and inferred intention: for instance, a human performing a reach-to-grasp motion on empty space next to an object, versus grasping the object; a human shifting eye gaze toward empty space next to a checkerboard target versus shifting gaze toward the target; an unladen human turning on a light with his knee, versus turning on a light with his knee while carrying a pile of books; and a walking human pausing as he passes behind a bookshelf, versus walking at a constant speed. In these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in terms of the actor’s intention; the incongruent actions, on the other hand, require further explanation (why would someone twist empty space next to a gear?), and apparently demand more processing in the STS. Note that this region is distinct from the temporo-parietal area activated during false belief tasks. Also note that pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception: also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological motion, and the FFA response to faces.
Neuropsychological evidence has provided support for neuroimaging results on the neural basis of theory of mind. Studies of patients with lesions of the frontal lobes or the temporoparietal junction (the boundary between the temporal lobe and parietal lobe) report difficulty with some theory of mind tasks. This shows that theory of mind abilities are associated with specific parts of the human brain. However, the fact that the medial prefrontal cortex and temporoparietal junction are necessary for theory of mind tasks does not imply that these regions are specific to that function. TPJ and mPFC may subserve more general functions necessary for ToM.
Research by Vittorio Gallese, Luciano Fadiga and Giacomo Rizzolatti has shown that some sensorimotor neurons, referred to as mirror neurons and first discovered in the premotor cortex of rhesus monkeys, may be involved in action understanding. Single-electrode recording revealed that these neurons fired when a monkey performed an action and when the monkey viewed another agent carrying out the same task. Similarly, fMRI studies with human participants have shown that brain regions assumed to contain mirror neurons are active when one person sees another person's goal-directed action. These data have led some authors to suggest that mirror neurons may provide the basis for theory of mind in the brain, and to support simulation theory of mind reading (see above).
However, there is also evidence against the link between mirror neurons and theory of mind. First, macaque monkeys have mirror neurons but do not seem to have a 'human-like' capacity to understand theory of mind and belief. Second, fMRI studies of theory of mind typically report activation in the mPFC, temporal poles and TPJ or STS, but these brain areas are not part of the mirror neuron system. Some investigators, like developmental psychologist Andrew Meltzoff and neuroscientist Jean Decety, believe that mirror neurons merely facilitate learning through imitation and may provide a precursor to the development of ToM. Others, like philosopher Shaun Gallagher, suggest that mirror-neuron activation, on a number of counts, fails to meet the definition of simulation as proposed by the simulation theory of mindreading.
That being said, in a recent paper, Keren Haroush and Ziv Williams outline the case for a group of neurons in the primate brain that uniquely predicted the choice selection of their interacting partner. These neurons, located in the anterior cingulate cortex of rhesus monkeys, were observed using single-unit recording while the monkeys played a variant of the iterative prisoner's dilemma game. By identifying cells that represent the yet unknown intentions of a game partner, this study supports the idea that Theory of Mind may be a fundamental and generalized process, and suggests that anterior cingulate cortex neurons may potentially act to complement the function of mirror neurons during social interchange.
Several neuroimaging studies have looked at the neural basis of theory of mind impairment in subjects with Asperger syndrome and high-functioning autism (HFA). The first PET study of theory of mind in autism (also the first neuroimaging study using a task-induced activation paradigm in autism) employed a story comprehension task, replicating a prior study in normal individuals. This study found displaced and diminished mPFC activation in subjects with autism. However, because the study used only six subjects with autism, and because the spatial resolution of PET imaging is relatively poor, these results should be considered preliminary.
A subsequent fMRI study scanned normally developing adults and adults with HFA while performing a "reading the mind in the eyes" task—viewing a photo of a human’s eyes and choosing which of two adjectives better describes the person’s mental state, versus a gender discrimination control. The authors found activity in orbitofrontal cortex, STS, and amygdala in normal subjects, and found no amygdala activation and abnormal STS activation in subjects with autism.
A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while viewing Heider-Simmel animations (see above) versus a random motion control. In contrast to normally developing subjects, those with autism showed no STS or FFA activation, and significantly less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also reported significantly less functional connectivity between STS and V3 in the autism group. Note, however, that decreased temporal correlation between activity in STS and V3 would be expected simply from the lack of an evoked response in STS to intent-laden animations in subjects with autism; a more informative analysis would be to compute functional connectivity after regressing out evoked responses from all time series.
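The analysis suggested above, computing functional connectivity after regressing the evoked response out of each region's time series, could look roughly like the following minimal sketch; the two simulated ROI signals and the toy task regressor are assumptions for illustration, not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols = 200
# Toy task-evoked regressor shared by both regions (stands in for the modeled evoked response).
evoked = np.convolve((rng.random(n_vols) > 0.9).astype(float), np.hanning(10))[:n_vols]
sts = 2.0 * evoked + rng.standard_normal(n_vols)  # simulated STS time series
v3 = 1.5 * evoked + rng.standard_normal(n_vols)   # simulated V3 time series

def residualize(y, x):
    """Remove the best-fitting linear contribution of x (plus an intercept) from y."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw_r = np.corrcoef(sts, v3)[0, 1]
resid_r = np.corrcoef(residualize(sts, evoked), residualize(v3, evoked))[0, 1]
print(f"connectivity before regression: {raw_r:.2f}; after: {resid_r:.2f}")
```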
A subsequent study, using the incongruent/congruent gaze shift paradigm described above, found that in high-functioning adults with autism, posterior STS (pSTS) activation was undifferentiated while watching a human shift gaze toward a target and toward adjacent empty space. The lack of additional STS processing in the incongruent state may suggest that these subjects fail to form an expectation of what the actor should do given contextual information, or that information about the violation of this expectation doesn’t reach STS; both explanations involve an impairment in the ability to link eye gaze shifts with intentional explanations. This study also found a significant anticorrelation between STS activation in the incongruent-congruent contrast and social subscale score on the Autism Diagnostic Interview-Revised, but not scores on the other subscales.
In 2011, an fMRI study demonstrated that right temporoparietal junction (rTPJ) of higher-functioning adults with autism was not selectively activated more for mentalizing judgments when compared to physical judgments about self and other. rTPJ selectivity for mentalizing was also related to individual variation on clinical measures of social impairment; individuals whose rTPJ was increasingly more active for mentalizing compared to physical judgments were less socially impaired, while those who showed little to no difference in response to mentalizing or physical judgments were the most socially impaired. This evidence builds on work in typical development that suggests rTPJ is critical for representing mental state information, irrespective of whether it is about oneself or others. It also points to an explanation at the neural level for the pervasive mind-blindness difficulties in autism that are evident throughout the lifespan.
The brain regions associated with theory of mind include the superior temporal sulcus (STS), the temporoparietal junction (TPJ), the medial prefrontal cortex (MPFC), the precuneus, and the amygdala. Reduced activity in the MPFC of individuals with schizophrenia is associated with the theory of mind deficit and may explain impairments in social function among people with schizophrenia, though some attribute the reduced activity to psychiatric medication rather than to the disorder itself. Increased neural activity in MPFC is related to better perspective-taking, emotion management, and increased social functioning. Disrupted brain activity in areas related to theory of mind, whether caused by the disorder or by psychiatric medication, may increase social stress or disinterest in social interaction, and contribute to the social dysfunction of schizophrenia.
An open question is whether other animals besides humans have a genetic endowment and social environment that allow them to acquire a theory of mind in the same way that human children do. This is a contentious issue because of the problem of inferring from animal behavior the existence of thinking, of a concept of self or self-awareness, or of particular thoughts. One difficulty with non-human studies of ToM is the lack of sufficient numbers of naturalistic observations that would give insight into what the evolutionary pressures might be on a species' development of theory of mind.
Non-human research still has a major place in this field, however, and is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of what many claim to be a uniquely human aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species of whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (or rather, what another being has seen). Call and Tomasello's study of understanding of intention in orangutans, chimpanzees and children showed that all three species understood the difference between accidental and intentional acts. While it is notable that some chimpanzees have been taught to communicate with humans through sign language, no chimpanzee has ever asked a human a question. Part of the difficulty in this line of research is that observed phenomena can often be explained as simple stimulus-response learning, as it is in the nature of any theorizer of mind to have to extrapolate internal mental states from observable behavior. Recently, most non-human theory of mind research has focused on monkeys and great apes, which are of most interest in the study of the evolution of human social cognition. Other studies relevant to attributions of theory of mind have been conducted using plovers and dogs, and have shown preliminary evidence of understanding attention—one precursor of theory of mind—in others.
There has been some controversy over the interpretation of evidence purporting to show theory of mind ability—or inability—in animals. Two examples serve as demonstration. First, Povinelli et al. (1990) presented chimpanzees with the choice of two experimenters from which to request food: one who had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting), did not know and could only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello (2001) found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached. William Field and Sue Savage-Rumbaugh have no doubt that bonobos have developed ToM, and cite their communications with a well-known captive bonobo, Kanzi, as evidence.
- Premack, D. G.; Woodruff, G. (1978). "Does the chimpanzee have a theory of mind?". Behavioral and Brain Sciences 1 (4): 515–526. doi:10.1017/S0140525X00076512.
- Korkmaz B (May 2011). "Theory of mind and neurodevelopmental disorders of childhood". Pediatr. Res. 69 (5 Pt 2): 101R–8R. doi:10.1203/PDR.0b013e318212c177. PMID 21289541.
- Uekermann J, Daum I (May 2008). "Social cognition in alcoholism: a link to prefrontal cortex dysfunction?". Addiction 103 (5): 726–35. doi:10.1111/j.1360-0443.2008.02157.x. PMID 18412750.
- Baron-Cohen, S. (1991). Precursors to a theory of mind: Understanding attention in others. In A. Whiten (Ed.), Natural theories of mind: Evolution, development and simulation of everyday mindreading (pp. 233-251). Oxford: Basil Blackwell.
- Bruner, J. S. (1981). Intention in the structure of action and interaction. In L. P. Lipsitt & C. K. Rovee-Collier (Eds.), Advances in infancy research. Vol. 1 (pp. 41-56). Norwood, NJ: Ablex Publishing Corporation.
- Gordon, R. M. (1996).'Radical' simulationism. In P. Carruthers & P. K. Smith, Eds. Theories of theories of mind. Cambridge: Cambridge University Press.
- Courtin, C. (2000). "The impact of sign language on the cognitive development of deaf children: The case of theories of mind". Journal of Deaf Studies and Deaf Education 5 (3): 266–276. doi:10.1093/deafed/5.3.266.
- Courtin, C.; Melot, A.-M. (2005). "Metacognitive development of deaf children: Lessons from the appearance-reality and false belief tasks". Developmental Science 8 (1): 16–25. doi:10.1111/j.1467-7687.2005.00389.x. PMID 15454505.
- de Waal, Frans B. M. (2007). "Commiserating Mice". Scientific American, 24 June 2007.
- Demetriou, A., Mouyi, A., & Spanoudis, G. (2010). The development of mental processing. Nesselroade, J. R. (2010). Methods in the study of life-span human development: Issues and answers. In W. F. Overton (Ed.), Biology, cognition and methods across the life-span. Volume 1 of the Handbook of life-span development (pp. 36-55), Editor-in-chief: R. M. Lerner. Hoboken, NJ: Wiley.
- Hayes, S. C., Barnes-Holmes, D., & Roche, B. (2001). Relational frame theory: A post-Skinnerian account of human language and cognition. New York: Kluwer Academic/Plenum.
- Rehfeldt, R. A., and Barnes-Holmes, Y., (2009). Derived Relational Responding: Applications for learners with autism and other developmental disabilities. Oakland, CA: New Harbinger.
- McHugh, L. & Stewart, I. (2012). The self and perspective-taking: Contributions and applications from modern behavioral science. Oakland, CA: New Harbinger.
- Carruthers, P. (1996). Simulation and self-knowledge: a defence of the theory-theory. In P. Carruthers & P.K. Smith, Eds. Theories of theories of mind. Cambridge: Cambridge University Press.
- Dennett, D. (1987). The Intentional Stance. Cambridge: MIT Press.
- Fox, Eric. "Functional Contextualism". Association for Contextual Behavioral Science. Retrieved March 29, 2014.
- Dennett, D. C. (1987). "Reprint of Intentional systems in cognitive ethology: The Panglossian paradigm defended (to p. 260)". Behavioral and Brain Sciences 6: 343–390.
- Call, J.; Tomasello, M. (1998). "Distinguishing intentional from accidental actions in orangutans (Pongo pygmaeus), chimpanzees (Pan troglodytes), and human children (Homo sapiens)". Journal of Comparative Psychology 112 (2): 192–206. doi:10.1037/0735-7036.112.2.192. PMID 9642787.
- Meltzoff, A. (1995). "Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children". Developmental Psychology 31 (5): 838–850.
- Gagliardi JL, et al. (1995). "Seeing and knowing: Knowledge attribution versus stimulus control in adult humans (Homo sapiens)". Journal of Comparative Psychology 109 (2): 107–114. doi:10.1037/0735-7036.109.2.107. PMID 7758287.
- Meltzoff, A. N. (2002). Imitation as a mechanism of social cognition: Origins of empathy, theory of mind, and the representation of action. In U. Goswami (Ed.), Handbook of childhood cognitive development (pp. 6-25). Oxford: Blackwell Publishers.
- Horowitz, A. (2003). "Do humans ape? or Do apes human? Imitation and intention in humans and other animals". Journal of Comparative Psychology 117 (3): 325–336. doi:10.1037/0735-7036.117.3.325.
- Wimmer, H.; Perner, J. (1983). "Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception". Cognition 13 (1): 103–128. doi:10.1016/0010-0277(83)90004-5. PMID 6681741.
- Mitchell, P. (2011). Acquiring a Theory of Mind. In Alan Slater, & Gavin Bremner (eds.) An Introduction to Developmental Psychology: Second Edition, BPS Blackwell.
- Roessler, Johannes (2013). "When the Wrong Answer Makes Perfect Sense - How the Beliefs of Children Interact With Their Understanding of Competition, Goals and the Intention of Others". University of Warwick Knowledge Centre. August 2014. Retrieved 2013-08-15.
- Baron-Cohen, Simon; Leslie, Alan M.; Frith, Uta (1985). "Does the autistic child have a "theory of mind" ?". Cognition 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210.
- Mitchell, P. (2011). Acquiring a Theory of Mind. In Alan Slater, & Gavin Bremner (eds.) An Introduction to Developmental Psychology: Second Edition, BPS Blackwell. page 371
- Gopnik, A.; Astington, J. W. (1988). "Children's understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction". Child Development 59 (1): 26–37. doi:10.2307/1130386. PMID 3342716.
- Zaitchik, D. (1990). "When representations conflict with reality: the preschooler's problem with false beliefs and "false" photographs". Cognition 35 (1): 41–68. doi:10.1016/0010-0277(90)90036-J. PMID 2340712.
- Leslie, A.; Thaiss, L. (1992). "Domain specificity in conceptual development". Cognition 43 (3): 225–51. doi:10.1016/0010-0277(92)90013-8. PMID 1643814.
- Sabbagh, M.A.; Moses, L.J.; Shiverick, S (2006). "Executive functioning and preschoolers' understanding of false beliefs, false photographs, and false signs". Child Development 77 (4): 1034–1049. doi:10.1111/j.1467-8624.2006.00917.x. PMID 16942504.
- Woodward, Infants selectively encode the goal object of an actor's reach, Cognition (1998)
- Leslie, A. M. (1991). Theory of mind impairment in autism. In A. Whiten (Ed.), Natural theories of mind: Evolution, development and simulation of everyday mindreading (pp. 63-77). Oxford: Basil Blackwell.
- Poulin-Dubois, Diane; Sodian, Beate; Metz, Ulrike; Tilden, Joanne; Schoeppner, Barbara (2007). "Out of Sight is Not Out of Mind: Developmental Changes in Infants' Understanding of Visual Perception During the Second Year". Journal of Cognition and Development 8 (4): 401–425. doi:10.1080/15248370701612951.
- Onishi, K. H.; Baillargeon, R (2005). "Do 15-Month-Old Infants Understand False Beliefs?". Science 308 (5719): 255–8. doi:10.1126/science.1107621. PMC 3357322. PMID 15821091.
- Poulin-Dubois, Diane; Chow, Virginia (2009). "The effect of a looker's past reliability on infants' reasoning about beliefs". Developmental Psychology 45 (6): 1576–82. doi:10.1037/a0016715. PMID 19899915.
- Moore, S. (2002). Asperger Syndrome and the Elementary School Experience. Shawnee Mission, KS: Autism Asperger Publishing Company.
- Baker, J. (2003). Social Skills Training: for children and adolescents with Asperger Syndrome and Social-Communication Problems. Mission, KS: Autism Asperger Publishing Company.
- Happe, FG (1995). "The role of age and verbal ability in the theory of mind task performance of subjects with autism". Child Development 66 (3): 843–55. doi:10.2307/1131954. PMID 7789204.
- Baron-Cohen, S. (1991). Precursors to a theory of mind: Understanding attention in others. In A. Whiten, Ed., Natural theories of mind: Evolution, development, and simulation of everyday mindreading (233-251). Cambridge, MA: Basil Blackwell.
- Leslie, A. M. (1991). Theory of mind impairment in autism. In A. Whiten, Ed., Natural theories of mind: Evolution, development, and simulation of everyday mindreading. Cambridge, MA: Basil Blackwell.
- Hobson, R.P. (1995). Autism and the development of mind. Hillsdale, N.J.: Lawrence Erlbaum Associates Ltd.
- Dapretto, M.; et al. (2006). "Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders". Nature Neuroscience 9 (1): 28–30. doi:10.1038/nn1611. PMC 3713227. PMID 16327784.
- Tine, Michele; Lucariello, Joan (2012). "Unique Theory of Mind Differentiation in Children with Autism and Asperger Syndrome". Autism Research and Treatment 2012: 1–11. doi:10.1155/2012/505393.
- Sprong, M.; Schothorst, P.; Vos, E.; Hox, J.; Van Engeland, H. (2007). "Theory of mind in schizophrenia". British Journal of Psychiatry 191 (1): 5–13. doi:10.1192/bjp.bp.107.035899.
- "New hope for people with schizophrenia" February 2000
- Ng, R.; Fish, S.; Granholm, E. (2015). "Insight and theory of mind in schizophrenia". Psychiatry Research 225 (1-2): 169–174. doi:10.1016/j.psychres.2014.11.010.
- Konstantakopoulos, G., Ploumpidis, D., Oulis, P., Patrikelis, P., Nikitopoulou, S., Papadimitriou, G. N., & David, A. S. (2014). The relationship between insight and theory of mind in schizophrenia. Schizophrenia Research, 152, 217-222. doi:10.1016/j.schres.2013.11.022
- Cassetta, B.; Goghari, V. (2014). "Theory of mind reasoning in schizophrenia patients and non-psychotic relatives". Psychiatry Research 218 (1-2): 12–19. doi:10.1016/j.psychres.2014.03.043.
- Mehta, U. M., Bhagyavathi, H. D., Kumar, C. N., Thirthalli, J., & Gangadhar, B. N. (2014). Cognitive deconstruction of parenting in schizophrenia: The role of theory of mind. Australian & New Zealand Journal of Psychiatry, 48(3), 249-258. doi:10.1177/0004867413500350
- Lee, L.; et al. (2005). "Mental state decoding abilities in clinical depression". Journal of Affective Disorders 86 (2-3): 247–58. doi:10.1016/j.jad.2005.02.007.
- Sabbagh, M. A. (2004). "Recognizing and reasoning about mental states: Understanding orbitofrontal contributions to theory of mind and autism". Brain and Cognition 55 (1): 209–19. doi:10.1016/j.bandc.2003.04.002.
- Harkness, K. L.; et al. (2005). "Enhanced accuracy of mental state decoding in dysphoric college students". Cognition and Emotion 19 (7): 999–1025. doi:10.1080/02699930541000110.
- Harkness, K. L.; et al. (2011). "Maternal history of depression is associated with enhanced theory of mind ability in depressed and non-depressed women". Psychiatry Research 189 (1): 91–96. doi:10.1016/j.psychres.2011.06.007.
- Gallagher, Helen L.; Frith, Christopher D. (2003). "Functional imaging of 'theory of mind'". Trends in Cognitive Sciences 7 (2): 77–83. doi:10.1016/S1364-6613(02)00025-6. PMID 12584026.
- Saxe, R; Kanwisher, N (2003). "People thinking about thinking people: The role of the temporo-parietal junction in "theory of mind"". NeuroImage 19 (4): 1835–42. doi:10.1016/S1053-8119(03)00230-1. PMID 12948738.
- Saxe, Rebecca; Schulz, Laura E.; Jiang, Yuhong V. (2006). "Reading minds versus following rules: Dissociating theory of mind and executive control in the brain". Social Neuroscience 1 (3–4): 284–98. doi:10.1080/17470910601000446. PMID 18633794.
- Saxe, R.; Powell, L. J. (2006). "It's the Thought That Counts: Specific Brain Regions for One Component of Theory of Mind". Psychological Science 17 (8): 692–9. doi:10.1111/j.1467-9280.2006.01768.x. PMID 16913952.
- Decety, J.; Lamm, C. (2007). "The Role of the Right Temporoparietal Junction in Social Interaction: How Low-Level Computational Processes Contribute to Meta-Cognition". The Neuroscientist 13 (6): 580–93. doi:10.1177/1073858407304654. PMID 17911216.
- Mitchell, J. P. (2007). "Activity in Right Temporo-Parietal Junction is Not Selective for Theory-of-Mind". Cerebral Cortex 18 (2): 262–71. doi:10.1093/cercor/bhm051. PMID 17551089.
- Scholz, Jonathan; Triantafyllou, Christina; Whitfield-Gabrieli, Susan; Brown, Emery N.; Saxe, Rebecca (2009). Lauwereyns, Jan, ed. "Distinct Regions of Right Temporo-Parietal Junction Are Selective for Theory of Mind and Exogenous Attention". PLoS ONE 4 (3): e4869. doi:10.1371/journal.pone.0004869. PMC 2653721. PMID 19290043.
- Castelli, Fulvia; Happé, Francesca; Frith, Uta; Frith, Chris (2000). "Movement and Mind: A Functional Imaging Study of Perception and Interpretation of Complex Intentional Movement Patterns". NeuroImage 12 (3): 314–25. doi:10.1006/nimg.2000.0612. PMID 10944414.
- Martin, Alex; Weisberg, Jill (2003). "Neural Foundations for Understanding Social and Mechanical Concepts". Cognitive Neuropsychology 20 (3–6): 575–87. doi:10.1080/02643290342000005. PMC 1450338. PMID 16648880.
- Schultz, R. T.; Grelotti, D. J.; Klin, A.; Kleinman, J.; Van Der Gaag, C.; Marois, R.; Skudlarski, P. (2003). "The role of the fusiform face area in social cognition: Implications for the pathobiology of autism". Philosophical Transactions of the Royal Society B: Biological Sciences 358 (1430): 415–427. doi:10.1098/rstb.2002.1208.
- Schultz, Johannes; Friston, Karl J.; O'Doherty, John; Wolpert, Daniel M.; Frith, Chris D. (2005). "Activation in Posterior Superior Temporal Sulcus Parallels Parameter Inducing the Percept of Animacy". Neuron 45 (4): 625–35. doi:10.1016/j.neuron.2004.12.052. PMID 15721247.
- Allison, Truett; Puce, Aina; McCarthy, Gregory (2000). "Social perception from visual cues: Role of the STS region". Trends in Cognitive Sciences 4 (7): 267–278. doi:10.1016/S1364-6613(00)01501-1. PMID 10859571.
- Morris, James P.; Pelphrey, Kevin A.; McCarthy, Gregory (2008). "Perceived causality influences brain activity evoked by biological motion". Social Neuroscience 3 (1): 16–25. doi:10.1080/17470910701476686. PMID 18633843.
- Pelphrey, Kevin A.; Morris, James P.; McCarthy, Gregory (2004). "Grasping the Intentions of Others: The Perceived Intentionality of an Action Influences Activity in the Superior Temporal Sulcus during Social Perception". Journal of Cognitive Neuroscience 16 (10): 1706–16. doi:10.1162/0898929042947900. PMID 15701223.
- Mosconi, Matthew W.; Mack, Peter B.; McCarthy, Gregory; Pelphrey, Kevin A. (2005). "Taking an "intentional stance" on eye-gaze shifts: A functional neuroimaging study of social perception in children". NeuroImage 27 (1): 247–52. doi:10.1016/j.neuroimage.2005.03.027. PMID 16023041.
- Brass, Marcel; Schmitt, Ruth M.; Spengler, Stephanie; Gergely, György (2007). "Investigating Action Understanding: Inferential Processes versus Action Simulation". Current Biology 17 (24): 2117–21. doi:10.1016/j.cub.2007.11.057. PMID 18083518.
- Saxe, R; Xiao, D.-K; Kovacs, G; Perrett, D.I; Kanwisher, N (2004). "A region of right posterior superior temporal sulcus responds to observed intentional actions". Neuropsychologia 42 (11): 1435–46. doi:10.1016/j.neuropsychologia.2004.04.015. PMID 15246282.
- Rowe, Andrea D; Bullock, Peter R; Polkey, Charles E; Morris, Robin G (2001). "`Theory of mind' impairments and their relationship to executive functioning following frontal lobe excisions". Brain 124 (3): 600–616. doi:10.1093/brain/124.3.600. PMID 11222459.
- Samson, Dana; Apperly, Ian A; Chiavarino, Claudia; Humphreys, Glyn W (2004). "Left temporoparietal junction is necessary for representing someone else's belief". Nature Neuroscience 7 (5): 499–500. doi:10.1038/nn1223. PMID 15077111.
- Stone, Valerie E.; Gerrans, Philip (2006). "What's domain-specific about theory of mind?". Social Neuroscience 1 (3–4): 309–19. doi:10.1080/17470910601029221. PMID 18633796.
- Rizzolatti, Giacomo; Craighero, Laila (2004). "The Mirror-Neuron System". Annual Review of Neuroscience 27 (1): 169–92. doi:10.1146/annurev.neuro.27.070203.144230. PMID 15217330.
- Iacoboni, Marco; Molnar-Szakacs, Istvan; Gallese, Vittorio; Buccino, Giovanni; Mazziotta, John C.; Rizzolatti, Giacomo (2005). "Grasping the Intentions of Others with One's Own Mirror Neuron System". PLoS Biology 3 (3): e79. doi:10.1371/journal.pbio.0030079. PMC 1044835. PMID 15736981.
- Gallese, V; Goldman, A (1998). "Mirror neurons and the simulation theory of mind-reading". Trends in Cognitive Sciences 2 (12): 493–501. doi:10.1016/S1364-6613(98)01262-5. PMID 21227300.
- Frith, U.; Frith, C. D. (2003). "Development and neurophysiology of mentalizing". Philosophical Transactions of the Royal Society B: Biological Sciences 358 (1431): 459–73. doi:10.1098/rstb.2002.1218. PMC 1693139. PMID 12689373.
- Meltzoff, A. N.; Decety, J. (2003). "What imitation tells us about social cognition: A rapprochement between developmental psychology and cognitive neuroscience". Philosophical Transactions of the Royal Society B: Biological Sciences 358 (1431): 491–500. doi:10.1098/rstb.2002.1261.
- Sommerville, Jessica A.; Decety, Jean (2006). "Weaving the fabric of social interaction: Articulating developmental psychology and cognitive neuroscience in the domain of motor cognition". Psychonomic Bulletin & Review 13 (2): 179–200. doi:10.3758/BF03193831. PMID 16892982.
- Gallagher, Shaun (2007). "Simulation trouble". Social Neuroscience 2 (3–4): 353–65. doi:10.1080/17470910601183549. PMID 18633823.
- Gallagher, Shaun (2008). "Mirror Neuron Systems". Mirror Neuron Systems: 355–371. doi:10.1007/978-1-59745-479-7_16. ISBN 978-1-934115-34-3.
- Haroush K., Williams Z. (2015). "Neuronal Prediction of Opponent's Behavior during Cooperative Social Interchange in Primates". Cell 160 (6): 1233–1245. doi:10.1016/j.cell.2015.01.045.
- Sanfey AG, Civai C, Vavra P. (2015). "Predicting the other in cooperative interactions". Trends Cogn Sci. 19 (7): 364–365. doi:10.1016/j.tics.2015.05.009.
- Happe, F; et al. (1996). "'Theory of mind' in the brain. Evidence from a PET scan study of Asperger syndrome". NeuroReport 8 (1): 197–201. doi:10.1097/00001756-199612200-00040. PMID 9051780.
- Fletcher, PC; et al. (1995). "Other minds in the brain: a functional imaging study of 'theory of mind' in story comprehension". Cognition 57 (2): 109–128. doi:10.1016/0010-0277(95)00692-R. PMID 8556839.
- Baron-Cohen; et al. (1999). "Social intelligence in the normal and autistic brain: an fMRI study". European Journal of Neuroscience 11 (6): 1891–1898. doi:10.1046/j.1460-9568.1999.00621.x.
- Castelli, F; et al. (2002). "Autism, Asperger syndrome and brain mechanisms for the attribution of mental states to animated shapes". Brain 125 (Pt 8): 1839–1849. doi:10.1093/brain/awf189. PMID 12135974.
- Pelphrey, KA; et al. (2005). "Neural basis of eye gaze processing deficits in autism". Brain 128 (Pt 5): 1038–1048. doi:10.1093/brain/awh404. PMID 15758039.
- Lombardo MV, Chakrabarti B, Bullmore ET, MRC AIMS Consortium, Baron-Cohen S. Specialization of right temporo-parietal junction for mentalizing and its relation to social impairments in autism. Neuroimage. 2011;56(3):1832–1838. doi:10.1016/j.neuroimage.2011.02.067. PMID 21356316.
- Senju A, Southgate V, White S, Frith U. Mindblind eyes: an absence of spontaneous theory of mind in Asperger syndrome. Science. 2009;325(5942):883–885. doi:10.1126/science.1176170. PMID 19608858.
- Pedersen, A.; Koelkebeck, K.; Brandt, M.; Wee, M.; Kueppers, K. A.; Kugel, H.; Kohl, W.; Bauer, J.; Ohrmann, P. (2012). "Theory of mind in patients with schizophrenia: Is mentalizing delayed?". Schizophrenia Research 137 (1-3): 224–229. doi:10.1016/j.schres.2012.02.022.
- Dodell-Feder, D., Tully, L. M., Lincoln, S. H., & Hooker, C. I. (2013). The neural basis of theory of mind and its relationship to social functioning and social anhedonia in individuals with schizophrenia. NeuroImage: Clinical, 4, 154-163. doi:10.1016/j.nicl.2013.11.006
- Ristau, Carolyn A. (1991). "Aspects of the cognitive ethology of an injury-feigning bird, the piping plovers". In Ristau, Carolyn A. Cognitive Ethology: Essays in Honor of Donald R. Griffin. Hillsdale, New Jersey: Lawrence Erlbaum. pp. 91–126. ISBN 978-1-134-99085-6.
- Horowitz, Alexandra (2008). "Attention to attention in domestic dog (Canis familiaris) dyadic play". Animal Cognition 12 (1): 107–18. doi:10.1007/s10071-008-0175-y. PMID 18679727.
- Povinelli, Daniel J.; Vonk, Jennifer (2003). "Chimpanzee minds: Suspiciously human?". Trends in Cognitive Sciences 7 (4): 157–160. doi:10.1016/S1364-6613(03)00053-6. PMID 12691763.
- Povinelli, D.J.; Nelson, K.E.; Boysen, S.T. (1990). "Inferences about guessing and knowing by chimpanzees (Pan troglodytes)". Journal of Comparative Psychology 104 (3): 203–210. doi:10.1037/0735-7036.104.3.203. PMID 2225758.
- Hare, B.; Call, J.; Tomasello, M. (2001). "Do chimpanzees know what conspecifics know and do not know?". Animal Behavior 61 (1): 139–151. doi:10.1006/anbe.2000.1518. PMID 11170704.
- Hamilton, Jon (8 July 2006). "A Voluble Visit with Two Talking Apes". NPR. Retrieved 21 March 2012.
- Excerpts taken from: Davis, E. (2007) Mental Verbs in Nicaraguan Sign Language and the Role of Language in Theory of Mind. Undergraduate senior thesis, Barnard College, Columbia University.
|Wikibooks has a book on the topic of: Consciousness|
- The Computational Theory of Mind
- The Identity Theory of Mind
- Sally-Anne and Smarties tests
- Functional Contextualism
- Theory of Mind article in the Internet Encyclopedia of Philosophy |
Decide which measure of central tendency best describes a data set.
Find the mean, median, mode and range of a data set.
Learn 3 measures of data used in math: mean, median, and mode. An introduction to descriptive statistics and central tendency.
This video demonstrates a sample application of measures of central tendency.
This video provides an explanation of the concept of measures of central tendency.
A list of student-submitted discussion questions for Measures of Central Tendency and Dispersion.
Come up with questions about a topic and learn new vocabulary to determine answers using the table.
Learn new vocabulary words and help yourself remember them by writing your own sentences with the new words, using a Stop and Jot table.
Learn how to distinguish between mean and median to avoid being misled by the "average" of data.
Students will examine the Median, Mean, Mode, and Range of the salaries of MLS players. They will explain what each measure of central tendency says about the salaries of all MLS players.
Discover how tax rates are determined in the United States.
This study guide looks at levels of measurement and the shape, measures of center (median, mean, mode), and measures of spread (standard deviation) of a data set. It also compares the measures for a population with the measures for a sample.
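The measures named in these resources are simple enough to compute directly. The following is a minimal sketch in Python using only the standard-library statistics module; the salary figures are invented purely for illustration and are not drawn from the MLS data mentioned above.

```python
from statistics import mean, median, multimode

# Hypothetical salary data (illustrative only)
salaries = [48_000, 52_000, 60_000, 60_000, 75_000, 310_000]

print("mean:  ", mean(salaries))                  # arithmetic average; pulled upward by the large outlier
print("median:", median(salaries))                # middle value; resistant to the outlier
print("mode:  ", multimode(salaries))             # most frequent value(s)
print("range: ", max(salaries) - min(salaries))   # spread between largest and smallest values
```

Comparing the mean (just over 100,000) with the median (60,000) shows why the median is often the better summary when a data set contains extreme values.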
Once upon a time, two deities, the male Izanagi and the female Izanami, came down from Takamagahara (The Plains of High Heaven) to a watery world in order to create land. Droplets from Izanagi’s ‘spear’ solidified into the land now known as Japan. Izanami and Izanagi then populated the new land with gods. One of these was Japan’s supreme deity, the Sun Goddess Amaterasu (Light of Heaven), whose great-great grandson Jimmu was to become the first emperor of Japan, reputedly in 660 BC.
Such is the seminal creation myth of Japan. More certainly, humans were present in Japan at least 200,000 years ago, though the earliest human remains go back only 30,000 years or so. Till around the end of the last Ice Age some 15,000 years ago, Japan was linked to the continent by a number of landbridges – Siberia to the north, Korea to the west and probably China through Taiwan to the south – so access was not difficult.
Amid undoubted diversity, the first recognisable culture to emerge was the Neolithic Jōmon (named after a ‘rope mark’ pottery style), from around 13,000 BC. The Jōmon were mostly hunter-gatherers, with a preference for coastal regions, though agriculture started to develop from around 4000 BC and this brought about greater stability in settlement and the emergence of larger tribal communities. The present-day indigenous Ainu people of northern Japan are of Jōmon descent.
From around 400 BC Japan was effectively invaded by waves of immigrants later known as Yayoi (from the site where their distinctive reddish wheel-thrown pottery was first found). They first arrived in the southwest, probably through the Korean peninsula. Their exact origins are unknown, and may well be diverse, but they brought with them iron and bronze technology, and highly productive wet rice-farming techniques. In general they were taller and less stocky than the Jōmon – though a Chinese document from the 1st century AD nonetheless refers to Japan (by this stage quite heavily peopled by the Yayoi) as ‘The Land of the Dwarfs’!
Opinion is divided as to the nature of Yayoi relations with the Jōmon, but the latter were gradually displaced and forced ever further north. The Yayoi had spread to the middle of Honshū by the 1st century AD, but Northern Honshū could still be considered ‘Jōmon’ till at least the 8th century. With the exception of the Ainu, present-day Japanese are overwhelmingly of Yayoi descent.
Other consequences of the Yayoi Advent included greater intertribal/regional trade based on greater and more diverse production through new technologies. At the same time there was increased rivalry between tribal/regional groups, often over resources, and greater social stratification.
Agriculture-based fixed settlement led to the consolidation of territory and the establishment of boundaries. According to Chinese sources, by the end of the 1st century AD there were more than a hundred kingdoms in Japan, and by the mid-3rd century these were largely subject to an ‘over-queen’ named Himiko, whose own territory was known as Yamatai (later Yamato). The location of Yamatai is disputed, with some scholars favouring northwest Kyūshū, but most preferring the Nara region. The Chinese treated Himiko as sovereign of all Japan – the name Yamato eventually being applied to Japan as a whole – and she acknowledged her allegiance to the Chinese emperor through tribute.
On her death in 248 she is said to have been buried – along with a hundred sacrificed slaves – in a massive barrow-like tomb known as a kofun, indicative of the growing importance of status. Other dignitaries chose burial in similar tombs, and so from this point until the establishment of Nara as a capital in 710, this time is referred to as the Kofun or Yamato period.
The period saw the confirmation of the Yamato as the dominant – indeed imperial – clan in Japan. Their consolidation of power often appears to have been by negotiation and alliance with (or incorporation of) powerful potential foes. This was a practice Japan was to continue through the ages where possible, though it was less accommodating in the case of perceived weaker foes.
The first verifiable emperor was Suijin (died around 318), very likely of the Yamato clan, though some scholars think he may have been leader of a group of ‘horse-riders’ who appear to have come into Japan around the start of the 4th century from the Korean peninsula. The period also saw the adoption of writing, based on Chinese but first introduced by scholars from the Korean kingdom of Paekche in the mid-5th century. Scholars from Paekche also introduced Buddhism a century later.
Buddhism was promoted by the Yamato rulers as a means of unification and control of the land. Though Buddhism originated in India it was seen by the Japanese as a Chinese religion, and was one of a number of ‘things Chinese’ that they adopted to achieve recognition – especially by China – as a civilised country. By emulating China, Japan hoped it could become as powerful. The desire to learn from the strongest/best is another enduring Japanese characteristic.
In 604 the regent Prince Shōtoku (573–620) enacted a constitution of 17 articles, with a very Chinese and indeed Confucianist flavour, esteeming harmony and hard work. Major Chinese-style reforms followed some decades later in 645, such as centralisation of government, nationalisation and allocation of land, and law codes. To strengthen its regime, under Emperor Temmu (r 673–686) the imperial family initiated the compilation of historical works such as the Kojiki (Record of Old Things, 712) and Nihon Shoki (Record of Japan, 720), with the aim of legitimising their power through claimed divine descent. It had the desired effect, and despite a number of perilous moments, Japan continues to have the longest unbroken monarchic line in the world.
Emulation of things Chinese was not indiscriminate. For example, in China Confucianism condoned the removal of an unvirtuous ruler felt to have lost the ‘mandate of heaven’, but this idea was not promoted in Japan. Nor was the Chinese practice of allowing achievement of high rank through examination, for the Japanese ruling class preferred birth over merit.
Northern Japan aside, in terms of factors such as effective unification, centralised government, social stratification, systematic administration, external recognition, legitimisation of power, a written constitution and a legal code, Japan, with its estimated five million people, could be said to have formed a nation-state by the early 8th century.
In 710 an intended permanent capital was established at Nara (Heijō), built to a Chinese grid pattern. The influence of Buddhism in those days is still seen today in the Tōdai-ji, which houses a huge bronze Buddha and is the world’s largest wooden building (and one of the oldest).
In 784 Emperor Kammu (r 781–806) decided to relocate the capital. His reasons are unclear, but may have been related to an inauspicious series of disasters, including a massive smallpox epidemic (735–37) that killed as many as one-third of the population. The capital was transferred to nearby Kyoto (Heian) in 794, newly built on a similar grid pattern. It was to remain Japan’s capital for more than a thousand years – though not necessarily as the centre of actual power.
Over the next few centuries, courtly life in Kyoto reached a pinnacle of refined artistic pursuits and etiquette, captured famously in the novel The Tale of Genji, written by the court-lady Murasaki Shikibu around 1004. It showed a world where courtiers indulged in amusements, such as guessing flowers by their scent, building extravagant follies and sparing no expense to indulge in the latest luxury. On the positive side, it was a world that encouraged aesthetic sensibilities, such as mono no aware (the bitter-sweetness of things) and okashisa (pleasantly surprising incongruity), which were to endure right through to the present day. But on the negative side, it was also a world increasingly estranged from the real one. Put bluntly, it lacked muscle. The effeteness of the court was exacerbated by the weakness of the emperors, manipulated over centuries by the intrigues of the notorious and politically dominant Fujiwara family, who effectively ruled the country.
By contrast, while the major nobles immersed themselves in courtly pleasures and/or intrigues, out in the real world of the provinces, powerful military forces were developing. They were typically led by minor nobles, often sent out on behalf of court-based major nobles to carry out ‘tedious’ local gubernatorial and administrative duties. Some were actually distant imperial family members, barred from succession claims – a practice known as ‘dynastic shedding’ – and often hostile to the court. Their retainers included skilled warriors known as samurai (literally ‘retainer’).
The two main ‘shed’ families were the Minamoto (also known as Genji) and the Taira (Heike), who were basically enemies. In 1156 they were employed to assist rival claimants to the headship of the Fujiwara family, though these figures soon faded into the background, as the struggle developed into a feud between the Minamoto and the Taira.
The Taira prevailed, under Kiyomori (1118–81), who based himself in the capital and, over the next 20 years or so, fell prey to many of the vices that lurked there. In 1180, following a typical court practice, he enthroned his own two-year-old grandson, Antoku. However, a rival claimant requested the help of the Minamoto, who had regrouped under Yoritomo (1147–99) in Izu. Yoritomo was more than ready to agree.
Both Kiyomori and the claimant died very shortly afterwards, but Yoritomo and his younger half-brother Yoshitsune (1159–89) continued the campaign against the Taira – a campaign interrupted by a pestilence during the early 1180s. By 1185 Kyoto had fallen and the Taira had been pursued to the western tip of Honshū. A naval battle ensued (at Dannoura) and the Minamoto were victorious. In a well-known tragic tale, Kiyomori’s widow clasped her grandson Antoku (now aged seven) and leaped with him into the sea, rather than have him surrender. Minamoto Yoritomo was now the most powerful man in Japan, and was to usher in a martial age.
Yoritomo did not seek to become emperor, but rather to have the new emperor confer legitimacy on him through the title of shōgun (generalissimo). This was granted in 1192. Similarly, he left many existing offices and institutions in place – though often modified – and set up his base in his home territory of Kamakura, rather than Kyoto. In theory he represented merely the military arm of the emperor’s government, but in practice he was in charge of government in the broad sense. His ‘shōgunate’ was known in Japanese as the bakufu, meaning the tent headquarters of a field general, though it was far from temporary. As an institution, it was to last almost 700 years.
The system of government now became feudal, centred on a lord-vassal system in which loyalty was a key value. It tended to be more personal and more ‘familial’ than medieval European feudalism, particularly in the extended oya-ko relationship (‘parent-child’, in practice ‘father-son’). This ‘familial hierarchy’ was to become another enduring feature of Japan.
But ‘families’ – even actual blood families – were not always happy, and the more ruthless power seekers would not hesitate to kill family members they saw as threats. Yoritomo himself, seemingly very suspicious by nature, killed off so many of his own family there were serious problems with the shōgunal succession upon his death in 1199 (following a fall from his horse in suspicious circumstances). One of those he had killed was his half-brother Yoshitsune, who earned an enduring place in Japanese literature and legend as the archetypical tragic hero.
Yoritomo’s widow Masako (1157–1225) was a formidable figure, arranging shōgunal regents and controlling the shōgunate for much of her remaining life. Having taken religious vows on her husband’s death, she became known as the ‘nun shōgun’, and one of the most powerful women in Japanese history. She was instrumental in ensuring that her own family, the Hōjō, replaced the Minamoto as shōguns. The Hōjō shōgunate continued to use Kamakura as the shōgunal base, and was to endure till the 1330s.
It was during their shōgunacy that the Mongols twice tried to invade, in 1274 and 1281. The Mongol empire was close to its peak at this time, under Kublai Khan (r 1260–94). After conquering Korea in 1259 he sent requests to Japan to submit to him, but these were ignored.
His expected first attack came in November 1274, allegedly with some 900 vessels carrying around 40,000 men – many of them reluctant Korean conscripts – though these figures may be exaggerated. They landed near Hakata in northwest Kyūshū and, despite spirited Japanese resistance, made progress inland. However, for unclear reasons, they presently retreated to their ships. Shortly afterwards a violent storm blew up and damaged around a third of the fleet, after which the remainder returned to Korea.
A more determined attempt was made seven years later from China. Allegedly, Kublai ordered the construction of a huge fleet of 4400 warships to carry a massive force of 140,000 men – again, questionable figures. They landed once more in northwest Kyūshū in August 1281. Once again they met spirited resistance and had to retire to their vessels, and once again the weather soon intervened. This time a typhoon destroyed half their vessels – many of which were actually designed for river use, without keels, and unable to withstand rough conditions. The survivors returned to China, and there were no further Mongol invasions of Japan.
It was the typhoon of 1281 in particular that led to the idea of divine intervention to save Japan, with the coining of the term shinpū or kamikaze (both meaning ‘divine wind’). Later this came to refer to the Pacific War suicide pilots who, said to be infused with divine spirit, gave their lives in the cause of protecting Japan from invasion. It also led the Japanese to feel that their land was indeed the Land of the Gods.
Despite the successful defence, the Hōjō shōgunate suffered. It was unable to make a number of promised payments to the warrior families involved, which brought considerable dissatisfaction, while the payments it did make severely depleted its finances.
It was also during the Hōjō shōgunacy that Zen Buddhism was brought from China. Its austerity and self-discipline appealed greatly to the warrior class, and it was also a factor in the appeal of aesthetic values such as sabi (elegant simplicity). More popular forms of Buddhism were the Jōdo (Pure Land) and Jōdo Shin (True Pure Land) sects, based on salvation through invocation of Amida Buddha.
Dissatisfaction towards the Hōjō shōgunate came to a head under the unusually assertive emperor Go-Daigo (1288–1339), who, after escaping from exile imposed by the Hōjō, started to muster anti-shōgunal support in Western Honshū. In 1333 the shōgunate despatched troops to counter the rebellion under one of its most promising generals, the young Ashikaga Takauji (1305–58). However, Takauji was aware of the dissatisfaction towards the Hōjō and realised that he and Go-Daigo had considerable military strength between them. He abandoned the shōgunate and threw in his lot with the emperor, attacking the shōgunal offices in Kyoto. Others soon rebelled against the shōgunate in Kamakura itself.
This was the end for the Hōjō shōgunate, but not for the shōgunal institution. Takauji wanted the title of shōgun for himself, but his ally Go-Daigo was reluctant to confer it, fearing it would weaken his own imperial power. A rift developed, and Go-Daigo sent forces to attack Takauji. When Takauji emerged victorious, he turned on Kyoto, forcing Go-Daigo to flee into the hills of Yoshino some 100km south of the city, where he set up a court in exile. In Kyoto, Takauji installed a puppet emperor from a rival line who returned the favour by declaring him shōgun in 1338. Thus there were two courts in coexistence, which continued until 1392 when the ‘southern court’ (at Yoshino) was betrayed by Ashikaga Yoshimitsu (1358–1408), Takauji’s grandson and third Ashikaga shōgun, who promised reconciliation but very soon ‘closed out’ the southern court.
Takauji set up his shōgunal base in Kyoto, at Muromachi, which gives its name to the period of the Ashikaga shōgunate. Notable shōguns include Takauji himself and his grandson Yoshimitsu, who among other things had Kyoto’s famous Kinkaku-ji (Golden Temple; p343) built, and once declared himself ‘King of Japan’. However, the majority of Ashikaga shōguns were relatively weak. In the absence of strong centralised government and control, the country slipped increasingly into civil war. Regional warlords, who came to be known as daimyō (big names), vied with each other in seemingly interminable feuds and power struggles. Eventually, starting with the Ōnin War of 1467–77, the country entered a period of virtually constant civil war. This was to last for the next hundred years, a time appropriately known as the Sengoku (Warring States) era.
Ironically perhaps, it was during the Muromachi period that a new flourishing of the arts took place, such as in the refined nō drama, ikebana (flower arranging) and cha-no-yu (tea ceremony). Key aesthetics were yūgen (elegant and tranquil otherworldliness, as seen in nō), wabi (subdued taste), kare (severe and unadorned) and the earlier-mentioned sabi (elegant simplicity).
The later stages of the period also saw the first arrival of Europeans, specifically three Portuguese traders blown ashore on the island of Tanegashima, south of Kyūshū, in 1543. Presently other Europeans arrived, bringing with them two important items, Christianity and firearms (mostly arquebuses). They found a land torn apart by warfare, ripe for conversion to Christianity – at least in the eyes of missionaries such as (St) Francis Xavier, who arrived in 1549 – while the Japanese warlords were more interested in the worldly matter of firearms.
One of the most successful warlords to make use of firearms was Oda Nobunaga (1534–82), from what is now Aichi Prefecture. Though starting from a relatively minor power base, his skilled and ruthless generalship resulted in a series of victories over rivals. In 1568 he seized Kyoto in support of the shōgunal claim of one of the Ashikaga clan (Yoshiaki), duly installed him, but then in 1573 drove him out and made his own base at Azuchi. Though he did not take the title of shōgun himself, Nobunaga was the supreme power in the land.
Noted for his brutality, he was not a man to cross. In particular he hated Buddhist priests, whom he saw as troublesome, and tolerated Christianity as a counterbalance to them. His ego was massive, leading him to erect a temple where he could be worshipped, and to declare his birthday a national holiday. His stated aim was Tenka Fubu (A Unified Realm under Military Rule) and he went some way to achieving this unification by policies such as strategic redistribution of territories among the daimyō, land surveys, and standardisation of weights and measures.
In 1582 he was betrayed by one of his generals and forced to commit suicide. However, the work of continuing unification was carried on by another of his generals, Toyotomi Hideyoshi (1536–98), a footsoldier who had risen through the ranks to become Nobunaga’s favourite. He, too, was an extraordinary figure. Small and simian in his features, Nobunaga had nicknamed him Saru-chan (Little Monkey), but his huge will for power belied his physical smallness. He disposed of potential rivals among Nobunaga’s sons, took the title of regent, continued Nobunaga’s policy of territorial redistribution and also insisted that daimyō should surrender their families to him as hostages to be kept in Kyoto – his base being at Momoyama. He also banned weapons for all classes except samurai.
Hideyoshi became increasingly paranoid, cruel and megalomaniacal in his later years. Messengers who gave him bad news would be sawn in half, and young members of his own family executed for suspected plotting. He also issued the first expulsion order of Christians (1587), whom he suspected of being an advance guard for an invasion. This order was not necessarily enforced, but in 1597 he crucified 26 Christians – nine of them European. His grand scheme for power included a pan-Asian conquest, and as a first step he attempted an invasion of Korea in 1592, which failed amid much bloodshed. He tried again in 1597, but the campaign was abandoned when he died of illness in 1598.
On his deathbed Hideyoshi entrusted the safeguarding of the country, and the succession of his young son Hideyori (1593–1615), whom he had unexpectedly fathered late in life, to one of his ablest generals, Tokugawa Ieyasu (1542–1616). However, upon Hideyoshi’s death, Ieyasu betrayed that trust. In 1600, in the Battle of Sekigahara, he defeated those who were trying to protect Hideyori, and became effectively the overlord of Japan. In 1603 his power was legitimised when the emperor conferred on him the title of shōgun. His Kantō base, the once tiny fishing village of Edo – later to be renamed Tōkyō – now became the real centre of power and government in Japan.
Through these three men, by fair means or more commonly foul, the country had been reunified within three decades.
Having secured power for the Tokugawa, Ieyasu and his successors were determined to retain it. Their basic strategy was of a linked two-fold nature: enforce the status quo and minimise potential for challenge. Orthodoxy and strict control (over military families in particular) were key elements.
Policies included requiring authorisation for castle building and marriages, continuing strategic redistribution (or confiscation) of territory, and, importantly, requiring daimyō and their retainers to spend every second year at Edo, with their families kept there permanently as hostages. In addition the shōgunate directly controlled ports, mines, major towns and other strategic areas. Movement was severely restricted by deliberate destruction of many bridges, the implementation of checkpoints and requirements for written travel authority, the banning of wheeled transport, the strict monitoring of potentially ocean-going vessels, and the banning of overseas travel for Japanese and even the return of those already overseas. Social movement was also banned, with society divided into four main classes: in descending order, shi (samurai), nō (farmers), kō (artisans) and shō (merchants). Detailed codes of conduct applied to each of these classes, even down to clothing and food and housing – right down to the siting of the toilet!
Christianity, though not greatly popular, threatened the authority of the shōgunate. Thus Christian missionaries were expelled in 1614. In 1638 the bloody quelling of the Christian-led Shimabara Uprising (near Nagasaki) saw Christianity banned and Japanese Christians – probably several hundred thousand – forced into hiding. All Westerners except the Protestant Dutch were expelled. The shōgunate found Protestantism less threatening than Catholicism – among other things it knew the Vatican could muster one of the biggest military forces in the world – and would have been prepared to let the British stay on if the Dutch, showing astute commercial one-upmanship, had not convinced it that Britain was a Catholic country. Nevertheless, the Dutch were confined geographically to a tiny trading base on the man-made island of Dejima, near Nagasaki, and numerically to just a few dozen men.
Thus Japan entered an era of sakoku (secluded country) that was to last for more than two centuries. Within the isolated and severely prescribed world of Tokugawa Japan, the breach of even a trivial law could mean execution. Even mere ‘rude behaviour’ was a capital offence, and the definition of this was ‘acting in an unexpected manner’. Punishments could be cruel, such as crucifixion, and could be meted out collectively or by proxy (for example, a village headman could be punished for the misdeed of a villager). Secret police were used to report on misdeeds.
As a result, people at large learned the importance of obedience to authority, of collective responsibility and of ‘doing the right thing’. These are values still prominent in present-day Japan.
For all the constraints there was nevertheless a considerable dynamism to the period, especially among the merchants, who as the lowest class were often ignored by the authorities and thus had relative freedom. They prospered greatly from the services and goods required for the daimyō processions to and from Edo, entailing such expense that daimyō had to convert much of their domainal produce into cash. This boosted the economy in general.
A largely pleasure-oriented merchant culture thrived, and produced the popular kabuki drama, with its colour and stage effects. Other entertainments included bunraku (puppet theatre), haiku (17-syllable verses), popular novels and ukiyoe (wood-block prints), often of female geisha, who came to the fore in this period. (Earlier geisha – meaning ‘artistic person’ – were male.)
Samurai, for their part, had no major military engagements. Well educated, most ended up fighting mere paper wars as administrators and managers. Ironically, it was during this period of relative inactivity that the renowned samurai code of bushidō was formalised, largely to justify the existence of the samurai class – some 6% of the population – by portraying them as moral exemplars. Though much of it was idealism, occasionally the code was put into practice, such as the exemplary loyalty shown by the Forty-Seven rōnin (masterless samurai) in 1701–03, who waited two years to avenge the unfair enforced suicide by seppuku (disembowelment) of their lord. After killing the man responsible, they in turn were all obliged to commit seppuku.
In more general terms, Confucianism was officially encouraged with the apparent aim of reinforcing the idea of hierarchy and status quo. Though this was clearly not in the best interests of women, it encouraged learning, and along with this, literacy. By the end of the period as many as 30% of the population of 30 million were literate – far ahead of the Western norm at the time. In some opposition to the ‘Chinese learning’ represented by Confucianism, there was also a strong trend of nationalism, centred on Shintō and the ancient texts. This was unhelpful to the shōgunate as it tended to focus on the primacy of the emperor. Certainly, by the early-mid-19th century, there was considerable dissatisfaction towards the shōgunate, fanned also by corruption and incompetence among shōgunal officials.
It is questionable how much longer the Tokugawa shōgunate and its secluded world could have continued, but as it happened, external forces were to bring about its demise.
Since the start of the 19th century a number of Western vessels had appeared in Japanese waters. Any Westerners who dared to land, even through shipwreck, were almost always met with expulsion or even execution.
This was not acceptable to the Western powers, especially the USA, which was keen to expand its interests across the Pacific and had numerous whaling vessels in the northwest that needed regular reprovisioning. In 1853, and again the following year, US Commodore Matthew Perry steamed into Edo Bay with a show of gunships and demanded the opening of Japan for trade and reprovisioning. The shōgunate had little option but to accede to his demands, for it was no match for Perry’s firepower. Presently a US consul arrived, and other Western powers followed suit. Japan was obliged to give ‘most favoured nation’ rights to all the powers, and lost control over its own tariffs.
The humiliation of the shōgunate, the nation’s supposed military protector, was capitalised upon by anti-shōgunal samurai in the outer domains of Satsuma (southern Kyūshū) and Chōshū (Western Honshū) in particular. A movement arose to ‘revere the emperor and expel the barbarians’ (sonnō jōi). However, after unsuccessful skirmishing with the Western powers, the reformers realised that expelling the barbarians was not feasible, but restoring the emperor was. Their coup, known as the Meiji (Enlightened Rule) Restoration, was put into effect from late 1867 to early 1868, and the new teenage emperor Mutsuhito (1852–1912), later to be known as Meiji, found himself ‘restored’, following the convenient death of his stubborn father Kōmei (1831–67). After some initial resistance, the last shōgun, Yoshinobu (1837–1913), retired to Shizuoka to live out his numerous remaining years peacefully. The shōgunal base at Edo became the new imperial base, and was renamed Tōkyō (eastern capital).
Mutsuhito did as he was told by those who had restored him, though they would claim that everything was done on his behalf and with his sanction. Basically, he was the classic legitimiser. His restorers, driven by both personal ambition and genuine concern for the nation, were largely leading Satsuma/Chōshū samurai in their early 30s. The most prominent of them was Itō Hirobumi (1841–1909), who was to become prime minister on no fewer than four occasions. Fortunately for Japan, they proved a very capable oligarchy.
Japan was also fortunate in that the Western powers were distracted by richer and easier pickings in China and elsewhere, and did not seriously seek to occupy or colonise Japan, though Perry does seem to have entertained such thoughts at one stage. Nevertheless, the fear of colonisation made the oligarchs act with great urgency. Far from being colonised, they themselves wanted to be colonisers, and make Japan a major power.
Under the banner of fukoku kyōhei (rich country, strong army), the young men who now controlled Japan decided on Westernisation as the best strategy – again showing the apparent Japanese preference for learning from a powerful potential foe. In fact, as another slogan oitsuke, oikose (catch up, overtake) suggests, they even wanted to outdo their models. Missions were sent overseas to observe a whole range of Western institutions and practices, and Western specialists were brought to Japan to advise in areas from banking to transport to mining.
In the coming decades Japan was to Westernise quite substantially, not just in material terms, such as communications and railways and clothing, but also, based on selected models, in the establishment of a modern banking system and economy, legal code, constitution and Diet, elections and political parties, and a conscript army.
Existing institutions and practices were disestablished where necessary. Daimyō were ‘persuaded’ to give their domainal land to the government in return for governorships or similar compensation, enabling the implementation of a prefectural system. The four-tier class system was scrapped, and people were now free to choose their occupation and place of residence. This included even the samurai class, phased out by 1876 to pave the way for a more efficient conscript army – though there was some armed resistance to this in 1877 under the Satsuma samurai (and oligarch) Saigō Takamori, who ended up committing seppuku when the resistance failed.
To help relations with the Western powers, the ban on Christianity was lifted, though few took advantage of it. Nevertheless numerous Western ideologies entered the country, one of the most popular being ‘self-help’ philosophy. This provided a guiding principle for a population newly liberated from a world in which everything had been prescribed for them. But at the same time, too much freedom could lead to an unhelpful type of individualism. The government quickly realised that nationalism could safely and usefully harness these new energies. People were encouraged to become successful and strong, and in doing so show the world what a successful and strong nation Japan was. Through educational policies, supported by imperial pronouncements, young people were encouraged to become strong and work for the good of the family-nation.
The government was proactive in many other measures, such as taking responsibility for establishing major industries and then selling them off at bargain rates to chosen ‘government-friendly’ industrial entrepreneurs – a factor in the formation of huge industrial combines known as zaibatsu. The government’s actions in this were not really democratic, but this was typical of the day. Another example is the ‘transcendental cabinet’, which was not responsible to the parliament but only to the emperor, who followed his advisers, who were members of the same cabinet! Meiji Japan was outwardly democratic but internally retained many authoritarian features.
The ‘state-guided’ economy was helped by a workforce that was well educated, obedient and numerous, and traditions of sophisticated commercial practices such as futures markets. In the early years Japan’s main industry was textiles and its main export silk, but later in the Meiji period, with judicious financial support from the government, it moved increasingly into manufacturing and heavy industry, becoming a major world shipbuilder by the end of the period. Improvement in agricultural technology freed up surplus farming labour to move into these manufacturing sectors.
A key element of Japan’s aim to become a world power with overseas territory was the military. Following Prussian (army) and British (navy) models, Japan soon built up a formidable military force. Using the same ‘gunboat diplomacy’ that Perry had used on the Japanese shōgunate, in 1876 Japan was able to force on Korea an unequal treaty of its own, and thereafter interfered increasingly in Korean politics. Using Chinese ‘interference’ in Korea as a justification, in 1894 Japan manufactured a war with China – a weak nation at this stage despite its massive size – and easily emerged victorious. As a result it gained Taiwan and the Liaotung peninsula. Russia tricked Japan into renouncing the peninsula and then promptly occupied it itself, leading to the Russo-Japanese War of 1904–05, from which Japan again emerged victorious. One important benefit was Western recognition of its interests in Korea, which it proceeded to annex in 1910.
By the time of Mutsuhito’s death in 1912, Japan was indeed recognised as a world power. In addition to its military victories and territorial acquisitions, in 1902 it had signed the Anglo-Japanese Alliance, the first ever equal alliance between a Western and non-Western nation. The unequal treaties had also been rectified. Western-style structures were in place. The economy was world ranking. The Meiji period had been a truly extraordinary half-century of modernisation. But where to now?
Mutsuhito was succeeded by his son Yoshihito (Taishō), who suffered mental deterioration that led to his own son Hirohito (1901–89) becoming regent in 1921.
On the one hand, the Taishō period (‘Great Righteousness’, 1912–26) saw continued democratisation, with a more liberal line, the extension of the right to vote and a stress on diplomacy. Through WWI Japan was able to benefit economically from the reduced presence of the Western powers, and also politically, for it was allied with Britain (though with little actual involvement) and was able to occupy German possessions in East Asia and the Pacific. On the other hand, using that same reduced Western presence, in 1915 Japan aggressively sought to gain effective control of China with its notorious ‘Twenty-One Demands’, which were eventually modified.
In Japan at this time there was a growing sense of dissatisfaction towards the West and a sense of unfair treatment. The Washington Conference of 1921–22 set naval ratios of three capital ships for Japan to five US and five British, which upset the Japanese despite being well ahead of France’s 1.75. Around the same time a racial equality clause that Japan proposed to the newly formed League of Nations was rejected. And in 1924 the US introduced race-based immigration policies that effectively targeted Japanese.
This dissatisfaction was to intensify in the Shōwa period (Illustrious Peace), which started in 1926 with the death of Yoshihito and the formal accession of Hirohito. He was not a strong emperor and was unable to curb the rising power of the military, who pointed to the growing gap between urban and rural living standards and accused politicians and big businessmen of corruption. The situation was not helped by repercussions from the World Depression in the late 1920s. The ultimate cause of these troubles, in Japanese eyes, was the West, with its excessive individualism and liberalism. According to the militarists, Japan needed to look after its own interests, which in extended form meant a resource-rich, Japan-controlled Greater East Asian Co-Prosperity Sphere that even included Australia and New Zealand.
In 1931 Japan invaded Manchuria on a pretext, and presently set up a puppet government. When the League of Nations objected, Japan promptly left the League. It soon turned its attention to China, and in 1937 launched a brutal invasion that saw atrocities such as the notorious Nanjing Massacre of December that year. Casualty figures for Chinese civilians at Nanjing vary between 340,000 (some Chinese sources) and a ‘mere’ 20,000 (some Japanese sources). Many of the tortures, rapes and murders were filmed and are undeniable, but persistent (though not universal) Japanese attempts to downplay this and other massacres in Asia remain a stumbling block in Japan’s relations with many Asian nations, even today.
Japan did not reject all Western nations, however, for it admired the new regimes in Germany and Italy, and in 1940 entered into a tripartite pact with them. This gave it confidence to expand further in Southeast Asia, principally seeking oil, for which it was heavily dependent on US exports. However, the alliance was not to lead to much cooperation, and since Hitler was openly talking of the Japanese as untermenschen (lesser beings) and the ‘Yellow Peril’, Japan was never sure of Germany’s commitment. The US was increasingly concerned about Japan’s aggression and applied sanctions. Diplomacy failed, and war seemed inevitable. The US planned to make the first strike, covertly, through the China-based Flying Tigers (Plan JB355), but there was a delay in assembling an appropriate strike force.
So it was that the Japanese struck at Pearl Harbor on 7 December that year, damaging much of the US Pacific Fleet and allegedly catching the US by surprise, though some scholars believe Roosevelt and others deliberately allowed the attack to happen in order to overcome isolationist sentiment and bring the US into the war against Japan’s ally Germany. Whatever the reality, the US certainly underestimated Japan and its fierce commitment, which led rapidly to widespread occupation of Pacific islands and parts of continental Asia. Most scholars agree that Japan never expected to beat the US, but hoped to bring it to the negotiating table and emerge better off.
The tide started to turn against Japan from the battle of Midway in June 1942, which saw the destruction of much of Japan’s carrier fleet. Basically, Japan had over-extended itself, and over the next three years was subjected to an island-hopping counterattack from forces under General Douglas MacArthur. By mid-1945 the Japanese, ignoring the Potsdam Declaration calling for unconditional surrender, were preparing for a final Allied assault on their homelands. On 6 August the world’s first atomic bomb was dropped on Hiroshima, with 90,000 civilian deaths. On 8 August, Russia, which Japan had hoped might mediate, declared war. On 9 August another atomic bomb was dropped on Nagasaki, with another 75,000 deaths. The situation prompted the emperor to formally announce surrender on 15 August. Hirohito probably knew what the bombs were, for Japanese scientists were working on their own atomic bomb and seem to have had both sufficient expertise and resources, though their state of progress is unclear.
Following Japan’s defeat a largely US occupation began under MacArthur. It was benign and constructive, with twin aims of demilitarisation and democratisation, and a broader view of making Japan an Americanised bastion against communism in the region. To the puzzlement of many Japanese, Hirohito was not tried as a war criminal but was retained as emperor. This was largely for reasons of expediency, to facilitate and legitimise reconstruction – and with it US policy. It was Americans who drafted Japan’s new constitution, with its famous ‘no war’ clause. US aid was very helpful to the rebuilding of the economy, and so too were procurements from the Korean War of 1950–53. The Occupation ended in 1952, though Okinawa was not returned till 1972 and is still home to US military bases. And Japan still supports US policy in many regards, such as in amending the law to allow (noncombatant) troops to be sent to Iraq.
The Japanese responded extremely positively in rebuilding their nation, urged on by a comment from the postwar prime minister Yoshida Shigeru that Japan had lost the war but would win the peace. Certainly, in economic terms, through close cooperation between a stable government and well organised industry, and a sincere nationwide determination to become ‘Number One’, by the 1970s Japan had effectively achieved this. It had become an economic superpower, its ‘economic miracle’ the subject of admiration and study around the world. Even the Oil Shocks of 1973 and 1979 did not cause serious setback.
By the late 1980s Japan was by some criteria the richest nation on the planet: it occupied a mere 0.3% of the world’s area, yet accounted for some 16% of its economic might and an incredible 60% of its real estate value. Some major Japanese companies had more wealth than many nations’ entire GNP.
Hirohito died in January 1989, succeeded by his son Akihito and the new Heisei (Full Peace) period. He must have ended his extraordinarily eventful life happy at his nation’s economic supremacy.
The so-called ‘Bubble Economy’ may have seemed unstoppable, but the laws of economics eventually prevailed and in the early 1990s it burst from within, having grown beyond a sustainable base. Though Japan was to remain an economic superpower, the consequences were nevertheless severe. Economically, Japan entered a recession of some 10 years, which saw almost zero growth in real terms, plummeting land prices, increased unemployment and even dismissal of managers who had believed they were guaranteed ‘lifetime’ employment. Socially, the impact was even greater. The public, whose lives were often based around corporations and assumed economic growth, were disoriented by the effective collapse of corporatism and the economy. Many felt displaced, confused and even betrayed, their values shaken. In 1993 the Liberal Democratic Party, in power since 1955, found itself out of office, though it soon recovered its position as a sort of resigned apathy seemed to set in among the public.
The situation was not helped by two events in 1995. In January the Kōbe Earthquake struck, killing more than 5000 people and earning the government serious criticism for failure to respond promptly and effectively. A few months later came the notorious sarin gas subway attack by the AUM religious group, which killed 12 and injured thousands. Many people, such as the influential novelist Murakami Haruki, saw the ability of this bizarre cult to attract intelligent members as a manifestation of widespread anxiety in Japan, where people, having suddenly experienced the collapse of many of their core values and beliefs, were now left on their own – a situation postmodernists term ‘the collapse of the Grand Narrative’.
The collapse of corporatism is reflected in increasing numbers of ‘freeters’ (free arbeiters), who do not commit to any one company but move around in employment, and ‘neets’ (not in employment or education or training). More people are now seeking their own way in life, which has resulted in greater diversity and more obvious emergence of individuality. On the one hand, this has led to greater extremes of self-expression, such as outlandish clothes and hairstyles (and hair colours) among the young. On the other hand, there’s a greater ‘Western-style’ awareness of the rights of the individual, seen in the recently introduced privacy and official information laws. Direct control by government has also loosened, as seen in the 2004 corporatisation of universities.
The economy started to recover from around 2002, thanks in part to increased demand from China, and is now steady around the 2% to 3% per annum growth mark. The year 2002 was also marked by a successful co-hosting of the football World Cup with rivals Korea. However, relations with Asian nations are still far from fully harmonious. Recent bones of contention include the continued appearance of history textbooks that downplay atrocities such as Nanjing, and controversial visits by Prime Minister Koizumi Junichirō (in office 2001–06) to Yasukuni Shrine to honour Japanese war dead, including war criminals.
There are other worries for Japan. One is that it is the world’s most rapidly ageing society, with the birth rate declining to a mere 1.25 per woman, and with its elderly (65 years plus) comprising 21% of the population while its children (up to 15 years) comprise just 13%. This has serious ramifications economically as well as socially, with a growing ratio of supported to supporter, and increased pension and health costs. Along with many ageing Western nations, Japan is doing its best (for example, by introducing nursing insurance schemes), but there is no easy solution in sight, and there are serious calls to redefine ‘elderly’ (and concomitant retirement expectations) as 75 years of age rather than 65.
Other concerns include juvenile crime and a growing problem of Social Anxiety Disorder in young people, which can lead to serious withdrawal (hikikomori) from everyday life. Internationally, the threat from nuclear-capable North Korea, with which Japan has had a particularly troubled relationship, presents a major worry.
Some Japanese were also concerned about there being no male heir to the throne, but in September 2006 Princess Kiko gave birth to Prince Hisahito and allayed those fears. Polls show that most Japanese would have been happy with a reigning empress anyway. That same month Koizumi was followed as prime minister by the 52-year-old Abe Shinzō, the first Japanese prime minister to be born postwar. It remains to be seen how the country will fare under his leadership, for which public support seems somewhat limited as 2007 unfolds. |
To say that spectrum is critically important is an understatement. The relevant industries generate trillions of dollars in annual revenue.
Because spectrum is so important to the public welfare, regulations established by government agencies are necessary. In the United States, the rules that govern the radio spectrum are laid out in Title 47 of the Code of Federal Regulations. These rules include measures to protect environmental and other types of resources, and compliance with them is mandatory.
Standards are produced by standards-developing organizations (SDOs) and typically refer to the operation of a particular product. Compliance with standards for electromagnetic interference is mandatory, but compliance with standards such as Wi-Fi and LTE is entirely voluntary. However, the successful standards have created huge market opportunities, and compliance with these standards has become, in most cases, a de facto requirement for commercial success. Prior to selling equipment, equipment manufacturers must receive certification that their devices comply with government (in the United States, FCC) regulations. These devices also undergo interoperability testing, as established by the relevant standard. Devices are fielded only after they have passed these tests. Standards may include performance criteria that incorporate relevant regulations, and in these cases, compliance with the standard can mean compliance with the regulations.
Currently, the FCC first “allocates” a band of frequencies, specifying power limits and, in some cases, determining the specific service to be used. There are several different licensing schemes: (1) exclusive, usually for a limited geographic area; (2) non-exclusive; (3) unlicensed; and (4) special, meaning a site-based license. For example, cellular service providers have exclusive licenses. Wi-Fi uses unlicensed and, more recently, non-exclusively licensed spectrum, or licensed spectrum that is otherwise unoccupied. Unlicensed devices must accept whatever interference they receive and must not cause harmful interference. Exclusive licenses can be obtained primarily through competitive bidding and guarantee license holders the right to call federal marshals to tear down transmitters that cause “harmful interference” to the license holder.
The central technical concept is what constitutes “unacceptable interference.” Interference to a receiver depends mainly on the distance from that receiver to the unwanted transmitter compared to the distance to the intended transmitter, and the power levels of these transmitters. There is no such thing as interference-free wireless transmission, or rather reception. Every receiver must tolerate some level of interference.
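To make this concrete, here is a small sketch (not from the article; the free-space path-loss model and all numbers are illustrative assumptions) of how the signal-to-interference ratio at a receiver follows from the transmit powers and the two distances:

const C = 299792458; // speed of light, m/s

// Free-space path loss as a linear power ratio at distance d (metres) and frequency f (Hz)
function freeSpaceLoss(d, f) {
  const term = (4 * Math.PI * d * f) / C;
  return term * term;
}

// Received power (watts) from a transmitter of power pt (watts) at distance d
function receivedPower(pt, d, f) {
  return pt / freeSpaceLoss(d, f);
}

// Illustrative values: intended transmitter 1 W at 100 m, interferer 10 W at 2000 m, both at 2.4 GHz
const f = 2.4e9;
const signal = receivedPower(1, 100, f);
const interference = receivedPower(10, 2000, f);
const sirDb = 10 * Math.log10(signal / interference);
console.log(`SIR ≈ ${sirDb.toFixed(1)} dB`); // the larger the distance ratio, the higher the SIR

Whether a given ratio counts as “harmful” is precisely the regulatory question; the physics only supplies the ratio.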
For a long time, spectrum was considered a resource, similar to land or oil, albeit “infinitely renewable,” with several dimensions—time, frequency, and location. The licensing system is effectively parceling spectrum in frequency, in location, and, more recently, in time. The problem with this is that signals can overlap in all three of these dimensions and still be non-interfering, using techniques such as spread-spectrum and ultra-wide band (UWB). Since two people cannot plough the same plot of land at the same time, the resource analogy has limitations.
Therefore, the technology evolution not only leads to new regulations and standards but also improves our understanding of “spectrum” and “interference,” the very concepts with which these regulations and standards operate.
Todor Cooklev is Harris Professor of Wireless Communication and Applied Research at Purdue University in Fort Wayne, Indiana. He has contributed to the development of a number of communications standards, including Bluetooth, DSL, Wi-Fi, cellular, and military radio systems, serving at times in leadership positions in standardization organizations such as ITU-T, IEEE 802, and 3GPP. His research interests include most aspects of wireless standards. Dr. Cooklev has contributed to more than 100 publications. |
Dwarfism Special Needs Factsheet
What Teachers Should Know
Dwarfism is a growth disorder. The most common type is called achondroplasia. Typically, adults with dwarfism are 4 feet 10 inches or under.
Achondroplasia commonly results in:
- shortened upper arms and legs and a relatively long torso
- shortened hands and fingers
- larger head and a prominent forehead
- flattened bridge of the nose
Physical problems related to dwarfism can include:
- reduced muscle tone and delayed motor skill development
- breathing problems
- curvature of the spine, such as scoliosis
- bowed legs
- limited joint flexibility and arthritis
- lower back pain or leg numbness
- recurring ear infections and risk of hearing loss
- crowded teeth
Dwarfism does not affect intellectual abilities. There is no cure for dwarfism, but most little people live long, fulfilling lives. Little people go to school, have careers, marry, and raise kids, just like their average-size peers.
Students with dwarfism may:
- need extra time getting to classes due to mobility issues
- need extra time on tests if manual dexterity is an issue
- miss assignments or class time due to medical appointments
- need step stools for bathrooms, water fountains, classrooms, and other areas
- need additional accommodations in the classroom and around school
- feel anxious, depressed, or embarrassed by their size
- be at risk for teasing or bullying
- benefit from an individualized education program (IEP) or 504 education plan to accommodate educational and physical needs
What Teachers Can Do
Your classroom can offer a welcoming and productive learning environment by providing adaptive accommodations where necessary. Students with dwarfism should be able to reach everything their classmates can reach. And remember to treat your students with dwarfism according to their age, not their size. Unless the student has a learning disability, educational expectations should not differ from those of other students.
Students with dwarfism may be limited in the types of exercises and activities that they can do, but it’s very important that they participate in safe physical activities to help stay fit.
Students with dwarfism may feel awkward or embarrassed around other students. Educating yourself and students about dwarfism can decrease bullying and increase self-confidence for students with dwarfism.
Reviewed by: Mary L. Gavin, MD
Date reviewed: April 2015 |
National Register of Historic Places
The National Register of Historic Places is the primary vehicle for identifying and protecting historic resources in the United States. Established by the Historic Sites Act of 1935 and expanded by the National Historic Preservation Act, the National Register serves as the official list of historic resources at the national level. It includes districts, sites, buildings, structures and other objects that are significant in American history, architecture, archaeology, engineering and culture. While properties listed on the National Register may have national significance, most properties listed on the Register have state or local significance. There are more than 80,000 properties listed on the National Register, which includes information on more than 1.4 million resources.
The National Register of Historic Places includes a special category of properties called National Historic Landmarks (NHLs). These properties, nearly 2,500 in number, merit distinction because of their exceptional importance to the nation as a whole. Nominations are prepared by National Park Service (NPS) staff and designated by the Secretary of the Interior upon review by the NPS Advisory Board. Regulations setting forth the criteria and procedures for the listing of properties as NHLs are published in the Code of Federal Regulations at 36 C.F.R. Part 65.
In addition to buildings, the National Register includes sites, districts, structures and objects. National Register properties include, for example, such diverse places as the Highland Lighthouse in Cape Cod, Massachusetts, various structures associated with the space program at the Kennedy Space Center in Florida, the Belle of Louisville, a river steamboat in Kentucky, and archeological sites such as the Pueblo Grande Ruin in Arizona.
Maintaining the Register
The National Register is maintained by the Secretary of the Interior through the NPS. The Keeper of the National Register, an employee of the Park Service, is responsible for the listing of properties and official determination of eligibility for listing on the Register. However, the listing process usually begins with the State Historic Preservation Officer (SHPO), who is charged under the National Historic Preservation Act with the identification and nomination of eligible properties to the National Register and the administration of applications for listing historic properties on the National Register.
Searching the Register
The National Park Service maintains a National Register searchable database. The database can be searched by name, architect, significant person, multiple property submission name, location, Federal agency and theme. In addition, all National Register listings are posted in the Federal Register.
Other Historic Registers Compared
State and local governments maintain independent registers or lists of historic properties. These lists may include properties that are also listed on the National Register. State historic registers may be viewed by contacting your SHPO. The NPS posts a list of SHPOs. Properties on a local register are designated pursuant to procedures set forth under a local historic preservation ordinance. On the international level, UNESCO maintains a comprehensive list of world heritage sites, and the World Monuments Fund, a non-profit organization, maintains a list of endangered sites.
Listing Properties on the National Register
The SHPO is the principal entity charged with the responsibility of nominating properties for listing on the National Register. Other public entities, however, can play a role in the listing process. Under the National Historic Preservation Act, federal agencies must establish a preservation program, in consultation with the Secretary of the Interior, which identifies and nominates properties for listing on the National Register. Local governments that have been "certified" by a SHPO may prepare a report on a property's eligibility for listing and recommend against such listing in individual cases. Finally, officially-recognized tribes that have been designated by the National Park Service as Tribal Historic Preservation Officers (THPOs) may assume the duties of a SHPO, including National Register nominations. Other tribes may work with a SHPO on matters occurring on or affecting historic properties on their land.
Private individuals also play a key role in the listing process. Many properties are included in the National Register as a result of the efforts of individuals seeking official recognition. Individuals seeking National Register status must file an application with the SHPO, which includes documentation supporting the property's eligibility. The process takes a minimum of 90 days and can take longer. Applications for National Register listing may be obtained from the SHPO or online through the National Park Service.
The NPS has developed a series of publications to assist private individuals in the completion of National Register applications. These include: "How to Apply the National Register Criteria for Evaluation," National Register Bulletin (NPS 1990, rev. 2002); "How to Complete the National Register Form," National Register Bulletin (NPS 1997); "How to Complete the National Register Multiple Property Documentation Form," National Register Bulletin (NPS 1991, rev. 1999); and "Researching a Historic Property," National Register Bulletin (NPS 1991, rev. 1998). These publications and others are posted on the National Park Service's website. Many SHPOs provide lists of consultants who specialize in the preparation of National Register nominations. The Maryland Historical Trust, for example, posts a list of consultants on its website.
Criteria for Listing on the National Register
The Secretary of the Interior has promulgated criteria and procedures for evaluating properties for listing in the National Register. See 36 C.F.R. Part 60. The criteria are set forth in § 60.4.
Criteria for Evaluation
The quality of significance in American history, architecture, archeology, engineering, and culture is present in districts, sites, buildings, structures, and objects that possess integrity of location, design, setting, materials, workmanship, feeling, and association, and:
- That are associated with events that have made a significant contribution to the broad patterns of our history; or
- That are associated with the lives of persons significant in our past; or
- That embody the distinctive characteristics of a type, period, or method of construction, or that represent the work of a master, or that possess high artistic values, or that represent a significant and distinguishable entity whose components may lack individual distinction; or
- That have yielded or may be likely to yield, information important in prehistory or history.
Ordinarily cemeteries, birthplaces, graves of historical figures, properties owned by religious institutions or used for religious purposes, structures that have been moved from their original locations, reconstructed historic buildings, properties primarily commemorative in nature, and properties that have achieved significance within the past 50 years shall not be considered eligible for the National Register. However, such properties will qualify if they are integral parts of districts that do meet the criteria or if they fall within the following categories:
- A religious property deriving primary significance from architectural or artistic distinction or historical importance; or
- A building or structure removed from its original location but which is primarily significant for architectural value, or which is the surviving structure most importantly associated with a historic person or event; or
- A birthplace or grave of a historical figure of outstanding importance if there is no appropriate site or building directly associated with his or her productive life; or
- A cemetery which derives its primary importance from graves of persons of transcendent importance, from age, from distinctive design features, or from association with historic events; or
- A reconstructed building when accurately executed in a suitable environment and presented in a dignified manner as part of a restoration master plan, and when no other building or structure with the same association has survived; or
- A property primarily commemorative in intent if design, age, tradition, or symbolic value has invested it with its own exceptional significance; or
- A property achieving significance within the past 50 years if it is of exceptional importance.
Procedures governing the nomination of properties for listing in the National Register, changes and revisions to such listings, and the removal of listed properties are set forth at 36 C.F.R. § 60.5 through § 60.15. Notice of pending nominations is provided by the SHPO and published in the Federal Register. In addition, the SHPO must notify private property owners once their properties are listed in the Register.
A property owner may prevent the inclusion of his or her property in the National Register of Historic Places by formally objecting to its listing. In the case of historic districts, a majority of property owners must object to the designation. Objections must be made through the submission of a notarized statement signed by all objecting property owners to the SHPO. Keep in mind that objections to listing will not prevent federal agency reviews under Section 106 of the National Historic Preservation Act, which addresses both properties listed, and eligible for listing, on the National Register.
Legal Impact of National Register listing on Private Property
The National Register serves as a planning tool for federal agencies. Its essential purpose is to identify, rather than to protect, the historical and cultural resources of our nation. As a result, listing on the National Register is primarily honorific, meaning that it does not impose substantive restraints on how a private property owner may use his or her property. Indeed, a common misperception is that properties listed on the National Register are required to be preserved.
That being said, National Register listings can affect private property owners in different ways. Section 106 of the National Historic Preservation Act, for example, requires federal agencies to consider the impact of their actions on properties listed or eligible for listing on the National Register. If an individual or entity owns land that is located in the environs of such property, then any activities conducted on that land will be subject to review if those activities are federally funded, federally licensed, or otherwise involve some form of federal undertaking. On the other hand, National Register listings can be helpful to property owners seeking to prevent actions that would adversely affect their properties. National register property owners can seek federal agency review under Section 106 to address the impact of proposed actions that could have a negative impact on their properties, such as a highway expansion or construction of a cell tower.
Also keep in mind that the National Register can be used to identify properties for protection under a state or local preservation program. National Register and National Register-eligible listings may trigger review under a state environmental or preservation statute that operates in manner similar to a Section 106 review. It can also serve as the basis for regulation under a local preservation ordinance.
Economic Benefits of National Register Listings
Many property owners actively seek National Register status because of the benefits associated with such listing. In addition to conferring official recognition, which can be helpful, for example, in attracting tourists to stay in a bed & breakfast, stroll down an historic main street, or visit a local house museum, National Register listings are used by the federal government as a basis for qualifying a property for federal assistance in the form of favorable tax incentives, such as a 20 percent rehabilitation tax credit and the charitable contribution tax deduction for the donation of a preservation easement, as well as grants and loans. Properties listed on the National Register may also be eligible for state income tax incentives and property tax relief, and may qualify for financial assistance through state-administered programs funded by the Congressionally-appropriated Historic Preservation Fund.
National Historic Landmarks
Owners of National Historic Landmark (NHL) properties enjoy special benefits. NHL owners may be eligible for limited grants from the Historic Preservation Fund. Owners may also receive a bronze plaque identifying the name, landmark status, and date of designation at no cost. In addition, NHLs enjoy added protection from federal agency actions. Specifically, section 110(f) of the National Historic Preservation Act calls for Federal agencies to undertake "such planning and actions as may be necessary to minimize harm to such Landmark."
Layperson's Guide to Preservation Law: Federal, State and Local Laws Governing Historic Resources
First published in 1997, this booklet provides a concise and comprehensible guide to federal, state and local laws governing historic resource protection. The 2008 edition includes updated information on transportation issues, eminent domain, easements, the Americans with Disabilities Act, and the regulation of historic religious properties.
Esl Pre Intermediate Reading Comprehension Worksheets Pdf – Reading comprehension worksheets give your students the framework they need in order to read with ease. Schools use tests that assess students in various domains of knowledge, and teachers use these tests as a measuring instrument. The tests are based on realistic situations, and students are required to interpret and analyze the information they are given. Teachers are not always sure what they need to teach or how to explain it, and there are a variety of tools that can help both teachers and students.
Worksheets are a great way to organize ideas, manage students, and answer questions during class. There are numerous worksheets covering skills such as Identification, Reading Comprehension, Reading Speed, Writing, and Listening. Each topic includes multiple-choice or true/false questions. To help students identify the topic, you can provide diagrams so they can pick out the central idea from several choices, which will help them understand the content. Your students must read the passage and then answer questions about each specific aspect of it.
The Reading Comprehension Worksheet offers the student three topics to read and analyze, with a short piece of writing at the end of each. This is an example of how you might create the worksheet for your class. The subjects are: English Vocabulary, Word Knowledge, Reading Comprehension, and Listening. Students are required to submit an example sentence (containing every word you believe the passage may contain), along with a definition of the question and an explanation of each topic.
The second category is nonfiction reading comprehension worksheets, which teach the fundamentals of reading. These include word-identification activities for word comprehension, sentence and paragraph structure, writing exercises, and essay questions. An ideal format for nonfiction reading comprehension worksheets is usually one-to-one, where students choose the right answer from a selection. Topics could include math-related problems, nature facts, and short stories.
The final category, reading comprehension worksheets for more advanced levels, is more like a test than a lesson in content. In this class you will cover several advanced-level concepts and use a multiple-choice section for selecting answers. You will also provide answer options such as correct, wrong, "don't know", or "taken". For most topics in advanced-level reading comprehension exercises, you will provide at least one right answer. Advanced reading comprehension exercises typically include words that are used only once, and advanced-level worksheets usually cover 500 words or fewer, so you won't find too many options in these tasks.
The fourth class, which includes reading comprehension worksheets for kids, is not about reading at all. Instead, it focuses on how to recognize copyright when reading published material, particularly in a classroom setting. Many copyrighted written documents are designed for sale as e-books distributed by download from the Internet, purchased on CD, or published as books. Many students at school or at home do not have access to the Internet and therefore do not know how to find published works without using an online dictionary.
There are also reading comprehension worksheets for students who have already mastered the foundational level of the subject and are ready to advance to more challenging topics. They typically consist of multiple-choice questions or free write-ins and are generally longer than the ones used in class. Again, the format is distinct, and worksheets designed for these classes are usually handed out at the beginning or end of the class to teach students a range of concepts. In certain instances, these tests also include an open-response portion that gives students the opportunity to evaluate the work they have just studied instead of giving their answers verbally. Students have the chance to reinforce the concepts they have been taught and to demonstrate their knowledge in an engaging manner.
Of course, the same criteria for choosing a reading comprehension worksheet apply to any other type of test. It is crucial to choose tests that are similar to writing prompts but leave enough room for individual testing. It is also essential to choose tests that will not make a student feel constrained by the instructor or textbook, and to select tests that assess the student's ability to comprehend rather than simply their ability to solve problems. By taking all these factors into consideration, teachers can be sure the workbooks used in their classes will prove effective and interesting.
it will stay at rest until acted upon by an outside force
Force = Mass × Acceleration. Force = 5 kg × 2 m/s². Force = 10 N (newtons).
A mass doesn't produce a force by accelerating. It needs a force to make it accelerate. The force it needs is (mass) × (acceleration). In order to accelerate a 10 kg mass at 2.5 meters per second squared (m/s²), you need a force of 25 newtons.
Since the book is not accelerating, we know that the net force on it is zero.
An accelerating force is the force which causes accelerated motion.
force = mass × acceleration, so acceleration = force / mass = 4 N / 2 kg = 2 m/s²
F = m × A = (2 kg)(40 m/s²) = 80 newtons
Force = mass × acceleration (acceleration's unit is m/s²). Force = (10 kg)(4 m/s²) = 40 newtons.
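The answers above all apply the same relation, F = m × a; as a quick illustrative sketch in code (the numbers are just the examples already given):

function force(massKg, accelMs2) {
  return massKg * accelMs2; // Newton's second law: F = m × a, result in newtons
}

console.log(force(5, 2));    // 10 N
console.log(force(10, 2.5)); // 25 N
console.log(force(2, 40));   // 80 N
console.log(force(10, 4));   // 40 N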
6 newtons, so it keeps accelerating. As it does, the air resistance increases until it reaches 10 newtons; then the net force is zero and the fall continues at constant speed (the terminal velocity).
Any force can be expressed in Newtons.
Newtons is a unit for force. |
About spongy moth
This moth lays hundreds of eggs in a single mass. The egg masses then hatch, releasing hundreds of hungry caterpillars. These caterpillars can strip the leaves from entire trees, devastating stands of trees.
The spongy moth mostly lives in Europe, Russia, China, Korea, and Japan. It turns up in all sorts of places, like the western coast of North America. Although it has been eradicated in many places, it has established on the east coast of North America.
In 2003, we found the moth in Hamilton. It was declared eradicated in 2005. We have a surveillance network monitoring for this moth near our borders.
Global distribution of spongy moth
Why this is a problem for New Zealand
The caterpillars have a broad host range. We know the caterpillar feeds on many tree species that are common in our towns and cities, such as oak, fruiting trees, and birch.
In large numbers, the caterpillars are a public nuisance. They leave large amounts of droppings and have tiny stinging hairs that cause an itchy or painful rash.
We don't know what impact this caterpillar could have on New Zealand forests. Some native trees belong to the same groups as trees that are affected overseas. Some forms of spongy moth even have a taste for pine trees.
How it could get here
In some types of spongy moth, the female moths can fly between 1km and 10km. They lay their eggs on all kinds of surfaces, like tree trunks, rocks, buildings, fences, vehicles, shipping containers, and ships.
Vehicles and ships are the most likely ways for the egg masses to arrive in New Zealand. MPI has strict measures in place to limit the chances of spongy moths making it through the border.
When is it a problem?
Spongy moth lays its eggs during the Northern Hemisphere's summer. Our challenge is identifying which ships and cargo might have been nearby when female moths were laying eggs.
Where you might find it
You may find egg masses on items that have been recently imported, like vehicles. If it were present in New Zealand, you may find the caterpillars in spring on the new growth of deciduous trees (trees that lose their leaves in autumn).
How to identify the spongy moth
The egg masses
The egg masses are covered with fine hairs that are light brown or tan. They are oval and can range in size, up to 4cm by 2cm.
The caterpillars are hairy and their colour varies. There are many species of hairy caterpillars in New Zealand. The key difference is the coloured dots along its back. The spongy moth has pairs of blue dots on the front third of its body and pairs of red dots on the back two-thirds.
The moths only live for a week. They vary in colour from a mottled white to a mottled brown. They're not as easy to identify as the caterpillar or the egg masses.
If you think you've found this pest
We don't have any species that lay hairy egg masses in New Zealand. If you see any:
- photograph them
- note the location
- call 0800 80 99 66
If you've found a hairy caterpillar with red and blue spots:
- photograph it
- capture it (if you can but watch out for the stinging hairs)
- call 0800 80 99 66
- This information is a summary of spongy moth's global distribution and potential impacts on New Zealand.
- Spongy moth was formerly known as gypsy moth. Its name was changed in 2022. |
Binary numbers are numbers expressed in base 2 notation, rather than the base 10 we are used to. Consider how we normally count in base 10 - when we reach 10, we need an extra digit to express it. Similarly, in base 2, when we reach 1, the next number has to be expressed by adding a new digit. So while 1 in binary is equivalent to 1, 10 is equivalent to 2.
You've probably used parseInt before. If you use the second argument of parseInt, you can set the base:

let x = parseInt('10101', 2);
console.log(x); // Returns 21
Most likely, you'll want to use base 2, but you can use any base you like here. So parseInt('10010', 3) will convert a base 3 number to a decimal too. This is a pretty useful and little-used feature of parseInt.
As mentioned previously, you can calculate a binary value in decimal when you consider that you can only ever go as high as 1 in binary, just as you can only ever go as high as 9 in decimal. So, just as in decimal, when you reach 9, you have to add another digit to represent 10, in binary, when you reach 1, you have to add another digit to represent 2 - so 10 in binary is equivalent to 2 in decimal.
The easiest way to convert a binary number to a decimal is to understand that each digit in a binary number can be represented like so:

BINARY:  1  0  1  0  1  0  1
DECIMAL: 64 32 16  8  4  2  1
All we have to do to convert a binary number to a decimal is to know that each digit's place value doubles as we move left. So the last digit is worth 1, the next is worth 2, the next is worth 4, and so on.
To convert a binary number like 1010101 to decimal, we multiply each digit by its place value. So we can do:

- 1 * 1 - giving us 1
- 0 * 2 - giving us 0
- 1 * 4 - giving us 4
- 0 * 8 - giving us 0
- 1 * 16 - giving us 16
- 0 * 32 - giving us 0
- 1 * 64 - giving us 64

Then we add them all up! So 1 + 4 + 16 + 64 - giving us 85!
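If you'd like to see that positional logic spelled out as code rather than relying on parseInt, here is a small hand-rolled sketch (the function name is my own, not a built-in):

function binaryToDecimal(binary) {
  let total = 0;
  let placeValue = 1; // 1, 2, 4, 8, ... doubling for each position from the right
  for (let i = binary.length - 1; i >= 0; i--) {
    if (binary[i] === '1') {
      total += placeValue;
    }
    placeValue *= 2;
  }
  return total;
}

console.log(binaryToDecimal('1010101')); // 85
console.log(binaryToDecimal('10101'));   // 21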
Researchers are investigating whether stem cells may grow better in zero gravity conditions than here on Earth.
Researcher Dhruv Sareen’s stem cells have made their way to the International Space Station (ISS) on a supply ship sent by the Cedars-Sinai Medical Center in Los Angeles, which is trying to find new ways to mass produce stem cells. The stem cells are induced pluripotent stem cells, meaning they can differentiate themselves into almost any other kind of cell found in the body, and have the potential to treat various diseases.
So far, the Food and Drug Administration (FDA) has only approved blood-forming stem cell treatments for patients with blood disorders like lymphoma, with the stem cells being sourced from umbilical cord blood. The stem cells being sent to space could be used for futuristic therapies to treat conditions like Type 1 diabetes, macular degeneration, Parkinson’s disease, or damage from heart attacks.
“With current technology right now, even if the FDA instantly approved any of these therapies, we don’t have the capacity to manufacture [what’s needed],” said Jeffrey Millman, a biomedical engineering expert at Washington University in St. Louis.
Stem cells are normally grown in large bioreactors on Earth, where they need to be continually stirred to avoid clumping together and sinking to the bottom of the tank. The stress of these conditions causes many cells to die.
“In zero G, there’s no force on the cells, so they can just grow in a different way,” said Clive Svendsen, executive director of Cedars-Sinai’s Regenerative Medicine Institute.
NASA is funding the Cedars-Sinai team’s experiment to grow stem cells in space for four weeks. The team will also run the same experiment simultaneously on Earth before a SpaceX capsule returns the space experiment’s cells to Earth for the team to analyze. According to Svendsen, the impact of producing billions of these cells in orbit “could be huge.”
2D array of electron and nuclear spin qubits opens new frontier in quantum science – Phys.org
By using photons and electron spin qubits to control nuclear spins in a two-dimensional material, researchers at Purdue University have opened a new frontier in quantum science and technology, enabling applications like atomic-scale nuclear magnetic resonance spectroscopy, and to read and write quantum information with nuclear spins in 2D materials.
As published Monday (Aug. 15) in Nature Materials, the research team used electron spin qubits as atomic-scale sensors, and also to effect the first experimental control of nuclear spin qubits in ultrathin hexagonal boron nitride.
“This is the first work showing optical initialization and coherent control of nuclear spins in 2D materials,” said corresponding author Tongcang Li, a Purdue associate professor of physics and astronomy and electrical and computer engineering, and member of the Purdue Quantum Science and Engineering Institute.
“Now we can use light to initialize nuclear spins and with that control, we can write and read quantum information with nuclear spins in 2D materials. This method can have many different applications in quantum memory, quantum sensing, and quantum simulation.”
Quantum technology depends on the qubit, which is the quantum version of a classical computer bit. It is often built with an atom, subatomic particle, or photon instead of a silicon transistor. In an electron or nuclear spin qubit, the familiar binary “0” or “1” state of a classical computer bit is represented by spin, a property that is loosely analogous to magnetic polarity—meaning the spin is sensitive to an electromagnetic field. To perform any task, the spin must first be controlled and coherent, or durable.
The spin qubit can then be used as a sensor, probing, for example, the structure of a protein, or the temperature of a target with nanoscale resolution. Electrons trapped in the defects of 3D diamond crystals have produced imaging and sensing resolution in the 10–100 nanometer range.
But qubits embedded in single-layer, or 2D materials, can get closer to a target sample, offering even higher resolution and stronger signal. Paving the way to that goal, the first electron spin qubit in hexagonal boron nitride, which can exist in a single layer, was built in 2019 by removing a boron atom from the lattice of atoms and trapping an electron in its place. So-called boron vacancy electron spin qubits also offered a tantalizing path to controlling the nuclear spin of the nitrogen atoms surrounding each electron spin qubit in the lattice.
In this work, Li and his team established an interface between photons and nuclear spins in ultrathin hexagonal boron nitrides.
The nuclear spins can be optically initialized—set to a known spin—via the surrounding electron spin qubits. Once initialized, a radio frequency can be used to change the nuclear spin qubit, essentially “writing” information, or to measure changes in the nuclear spin qubits, or “read” information. Their method harnesses three nitrogen nuclei at a time, with more than 30 times longer coherence times than those of electron qubits at room temperature. And the 2D material can be layered directly onto another material, creating a built-in sensor.
“A 2D nuclear spin lattice will be suitable for large-scale quantum simulation,” Li said. “It can work at higher temperatures than superconducting qubits.”
To control a nuclear spin qubit, researchers began by removing a boron atom from the lattice and replacing it with an electron. The electron now sits in the center of three nitrogen atoms. At this point, each nitrogen nucleus is in a random spin state, which may be -1, 0, or +1.
Next, the electron is pumped to a spin-state of 0 with laser light, which has a negligible effect on the spin of the nitrogen nucleus.
Finally, a hyperfine interaction between the excited electron and the three surrounding nitrogen nuclei forces a change in the spin of the nucleus. When the cycle is repeated multiple times, the spin of the nucleus reaches the +1 state, where it remains regardless of repeated interactions. With all three nuclei set to the +1 state, they can be used as a trio of qubits.
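Purely as a toy caricature of that repeated pumping cycle (the per-cycle transition probability below is an invented parameter, not a value from the paper), the following sketch shows why repetition drives every nucleus to the +1 state, where it then stays:

const STEP_PROBABILITY = 0.3; // invented illustrative value

function randomSpin() {
  return [-1, 0, 1][Math.floor(Math.random() * 3)]; // each nucleus starts in a random state
}

function pumpCycle(spins) {
  // A nucleus already at +1 stays there; otherwise it may be stepped toward +1
  return spins.map(s => (s === 1 ? 1 : (Math.random() < STEP_PROBABILITY ? s + 1 : s)));
}

let nuclei = [randomSpin(), randomSpin(), randomSpin()];
let cycles = 0;
while (!nuclei.every(s => s === 1)) {
  nuclei = pumpCycle(nuclei);
  cycles++;
}
console.log(`All three nuclei reached +1 after ${cycles} cycles`);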
At Purdue, Li was joined by Xingyu Gao, Sumukh Vaidya, Peng Ju, Boyang Jiang, Zhujing Xu, Andres E. Llacsahuanga Allcca, Kunhong Shen, Sunil A. Bhave, and Yong P. Chen, as well as collaborators Kejun Li and Yuan Ping at the University of California, Santa Cruz, and Takashi Taniguchi and Kenji Watanabe at the National Institute for Materials Science in Japan.
“Nuclear spin polarization and control in hexagonal boron nitride” is published in Nature Materials.
Tongcang Li, Nuclear spin polarization and control in hexagonal boron nitride, Nature Materials (2022). DOI: 10.1038/s41563-022-01329-8. www.nature.com/articles/s41563-022-01329-8
This is when our Sun will die! Astonishing study reveals all – HT Tech
The Sun’s life will eventually come to an end. Here’s what this study says about when our Sun will die.
Those of us who have gone through a mid-life crisis know that it is a period of really big emotional turmoil. Everyone who reaches a certain age goes through it. Now, it has been revealed that not even massive celestial bodies like the Sun are safe from it. And yes, that also indicates that our Sun will die too.
A new study by the European Space Agency (ESA) has revealed that the Sun, estimated to be around 4.57 billion years old, has entered middle age. It seems that the Sun is also going through a mid-life crisis, with frequent solar flares, coronal mass ejections (CMEs) and solar storms. The study was conducted with the help of data collected by the Gaia spacecraft.
Currently, the Sun is at the peak of its 11-year solar cycle, which has resulted in frequent solar flares, coronal mass ejections and solar storms. As the cycle ends, the frequency of these phenomena will decrease.
As the Sun gets older, the hydrogen in its core will run out and the Sun will turn into a red giant star, lowering its surface temperature and cooling off. According to the ESA, as the Sun reaches the end of its life cycle, it will become a dim white dwarf star.
Orlagh Creevey, from the Observatoire de la Côte d’Azur, France, searched through the data, studying some of the oldest stars in the Milky Way Galaxy with surface temperatures between 3000K and 10,000K. Orlagh said, “We wanted to have a really pure sample of stars with high precision measurements.”
The study concluded that the Sun will reach its peak temperatures nearly 8 billion years into the future after which it will lower its surface temperature and increase its size.
Orlagh said, “If we don’t understand our own Sun – and there are many things we don’t know about it – how can we expect to understand all of the other stars that make up our wonderful galaxy?”
NASA, on the other hand, had earlier said in its report, “When it starts to die, the Sun will expand into a red giant star, becoming so large that it will engulf Mercury and Venus, and possibly Earth as well. Scientists predict the Sun is a little less than halfway through its lifetime and will last another 5 billion years or so before it becomes a white dwarf.”
Investigating the effect of N-doping on carbon quantum dots structure, optical properties and metal ion screening | Scientific Reports – Nature.com
Carbon quantum dots (CQDs) derived from biomass, a suggested green approach for nanomaterial synthesis, often possess poor optical properties and have low photoluminescence quantum yield (PLQY). This study employed an environmentally friendly, cost-effective, continuous hydrothermal flow synthesis (CHFS) process to synthesise efficient nitrogen-doped carbon quantum dots (N-CQDs) from biomass precursors (glucose in the presence of ammonia). The concentrations of ammonia, as nitrogen dopant precursor, were varied to optimise the optical properties of the CQDs. Optimised N-CQDs showed significant enhancement in fluorescence emission properties, with a PLQY of 9.6% compared to pure glucose-derived CQDs (g-CQDs) without nitrogen doping, which have a PLQY of less than 1%. With stability over a pH range of 2 to 11, the N-CQDs showed excellent sensitivity as a nano-sensor for the highly toxic pollutant chromium (VI), where efficient photoluminescence (PL) quenching was observed. The optimised nitrogen-doping process demonstrated effective and efficient tuning of the overall electronic structure of the N-CQDs, resulting in enhanced optical properties and performance as a nano-sensor.
The demand for high-performance carbon quantum dots (CQDs) with a range of applications, including sensing has been steadily increasing. However, the synthesis of CQDs continues to face challenges including high costs, lengthy multistep processes, and the use of hazardous substances1,2. Recently, biomass-derived CQDs have attracted considerable attention, and are considered as an optimal and green approach to prepare efficient CQDs. Biomass and biomass waste (agriculture product, agricultural residue, municipal solid waste etc.) are abundant, high in carbon content (45–55%), and are an environmentally friendly renewable resource3. Therefore, the utilisation of biomass as carbon resources for nanomaterial synthesis is an eco-friendly process and expected to reduce the total synthetic cost4. Although a broad range of biomass materials have been employed in producing CQDs, generally, these synthetic routes faced problems associated with poor control of the CQDs particle size, quality, and homogeneity of the product5. In addition, the CQDs synthesised from biomass or biomass waste, commonly possess poor optical properties and a low PLQY. The doping of heteroatom such as (N, P, S) is one of the most common methods to improve the optical properties of biomass-derived CQDs6,7. However, the questions related to the origin of the optical improvement with optimised dosing of these heteroatoms still need to be answered. Furthermore, in most conventional methodologies, these doping processes result in a longer synthesis time and higher energy consumption8.
In this work, the continuous hydrothermal flow synthesis (CHFS) which is primarily water-based was employed; thus, it is considered the greenest and most promising synthesis method for making CQDs. Notably, the CHFS allows designing or tailoring of the nanoparticles for specific functions based on the nucleation and surface functional processes. The comparison between CHFS and the traditional hydrothermal process revealed that the CHFS consumed less energy and time, while producing a highly homogenous quality product9. Moreover, the continuous hydrothermal process can be employed in multi-purposes such as controlling the nucleation to control the particle size and the addition of surfactant coating or dopant without further post-treatments10. In this paper, we report the use of CHFS process, to successfully synthesise N-doped carbon quantum dots (N-CQDs) from glucose which is an abundant, readily available, cost-effective biomass carbon source; and ammonia is used as a nitrogen dopant. Synthesised N-CQDs with different concentrations of ammonia were used to explore the effect of the concentration of nitrogen dopant on the optical characteristics of CQDs. A range of characterisation techniques were employed to investigate the origin of the optical enhancement. The performance of the N-CQDs as chemical nanosensor was tested. Currently clean water resources are foremost among global challenges facing society today. A significant proportion of the worlds wastewater containing heavy metals as pollutants is disposed untreated in to the environment11. Therefore, the application of the prepared N-CQDs as chemical sensor to detect chromium (VI) which is carcinogenic, hemotoxic, and genotoxic; the main source being industrial waste water, would be timely.
Glucose, ammonia (32%), potassium chromate, and potassium dichromate were purchased from Fisher Scientific. The solutions of metal ions used for the sensing application experiments were prepared using nitrate (Ag+, Ce3+, Co2+, Cr3+, Ni2+, Fe3+), sulphate (Cu2+ and Fe2+), chloride (K+, Na+, Mg2+) and sodium (CrO42−, Cr2O72−, NO3−, CH3COO−, HCOO−, SO42−, F−, Cl−, Br−, I−). These chemicals were purchased from Sigma-Aldrich and were used as received. 15 MΩ deionized H2O (ELGA Purelab) was used for all the experiments.
UV–Vis spectrophotometry: Shimadzu UV-1800 was used to perform the absorption measurements (λ = 200 to 800 nm) using a quartz cuvette (10 mm).
Photoluminescence (PL) spectroscopy: The steady-state fluorescence spectra of NCQDs were measured with Shimadzu RF-6000 spectrofluorophotometer.
High-resolution transmission electron microscopy (HRTEM): NCQDs were diluted in isopropanol, applied onto a carbon holey mesh grid (Agar) and allowed to air dry. The samples were then imaged using a JEM 2100 (Jeol, Japan) at an acceleration voltage of 200 kV and at a range of magnifications between 15 and 500 K. Representative NCQD samples (g-CQDs, N-0.25, N-1, N-5 and N-10) were imaged and analysed.
Fourier-transform infrared (FTIR) spectra were recorded using an IR Affinity-1S Fourier transform infrared spectrometer.
Raman spectra of the prepared N-CQDs were measured with a Horiba LabRAM HR Evolution spectrometer using 633 nm excitation.
An Edinburgh Instruments FLS1000 photoluminescence spectrometer was used to measure the PL lifetime and the PLQY of the samples. The lifetime was measured using a 375 nm pulsed laser, and the data were fitted with three exponentials after reconvolution with the instrument response function. The absolute quantum yield (QYabs) of the samples was investigated using an integrating sphere accessory with a standard method. The true fluorescence quantum yield (QYtrue) was then calculated using Eq. (1), where a is the fraction of the re-absorbed area.
X-ray photoelectron spectroscopy (XPS): an AXIS Ultra DLD (Kratos Surface Analysis) setup equipped with a 180° hemispherical analyser, using Al Kα1 (1486.74 eV) radiation produced by a mono-chromicized X-ray source at operating power of 300 W (15 kV × 20 mA), with spot size of 0.7 mm was used to record the XPS spectra. The partial charge compensation was achieved by using a flood gun operating at 1.52 A filament current, 2.73 V charge balance, and 1.02 V filament bias. The vacuum in the analysis chamber was at least 1 × 10−8 mbar.
Continuous hydrothermal flow synthesis (CHFS) was employed to synthesize N-CQDs (Supplementary Fig. 1). The process consists of three feedstock streams; (i) glucose (with a concentration of 70 mg mL−1) which was used as carbon source, (ii) ammonia with varied concentrations from 0.25 M up to 10.0 M was used as an N-dopant, and (iii) supercritical water which is the key parameter of this reaction. Firstly, the deionized water (with the flow rate 20 mL min−1) was heated up to 450 °C, and the pressure was kept at 24.8 MPa by using a back-pressure regulator (BPR) during the experiment (this condition is previously reported by our group as the optimised environment for the CQDs synthesis)10. The reaction was conducted by injecting the two precursors into the engineered mixer labelled as the “Reactor” (Fig. S1). Here, the precursors were mixed with supercritical water, and the nano dots were produced (in fraction of seconds). The residence time (~ 1.8 s) of the reaction was controlled by the flow rate of the precursors; both glucose and ammonia were pumped at the same time into the reactor with 5 mL min−1 flow rate. The reaction mixture travelled through a cooler to the BPR, and was collected for further treatments. The obtained solutions from the CHFS reaction mixtures were filtered using a 0.2 µm alumina membrane; subsequently, the solutions were continuously dialysed using a 30 kDa membrane in a tangential filtration unit. The cleaned solutions were freeze-dried, and the obtained average yield was 10.68 mg ml−1.
The CQD samples synthesised from [glucose] = 70 mg mL−1 and ammonia at varied concentrations (0.25 M, 0.5 M, 0.75 M, 1.0 M, 2.5 M, 5.0 M, 7.5 M and 10.0 M) were denoted as N-0.25, N-0.5, N-0.75, N-1, N-2.5, N-5, N-7.5, and N-10, respectively; g-CQDs were synthesised from the same source (glucose) but without nitrogen doping. Fluorescence photographs of the samples are shown in Fig. S2.
pH stability testing
Solutions with pH ranging from 1 to 13 were prepared using NaOH (initial concentration 1.0 M) and HCl (1.0 M) solutions, which were then diluted to the required pH. A dilute solution of N-CQDs (optical density, OD = 0.1) in deionised water was prepared. Following that, 100 μL of this diluted N-CQD solution was added to 3000 μL of each solution prepared at the desired pH level (range 1–13). A pH meter was used to measure the corresponding pH values. The fluorescence spectra of these solutions were recorded using a Shimadzu RF-6000 spectrofluorophotometer.
Chromium (VI) ion-sensing experiments
The Cr (VI) ion detection experiment was conducted with various metal ions (as listed above), each prepared at a concentration of 50 ppm. In a typical experiment, 100 μL of N-CQDs (0.1 OD) was added to 3.0 mL of the aqueous metal ion solution. The fluorescence spectrum of each mixture was measured using a Shimadzu RF-6000 spectrofluorophotometer. The fluorescence lifetime was investigated using an Edinburgh Instruments FLS1000 spectrometer to achieve a deeper understanding of the quenching mechanism.
Limits of detection (LOD) and limits of quantification (LOQ)
The sensitivity of the N-CQDs sensor for Cr (VI) was investigated by evaluating their LOD and LOQ. For that, Cr (VI) ion solutions at concentrations of 300 ppm, 200 ppm, 100 ppm, 50 ppm, 30 ppm, 10 ppm, 5 ppm, 2 ppm, 1 ppm and 0.5 ppm were first prepared. Then, 100 μL of N-CQDs were added to 3.0 mL of the prepared Cr (VI) ion solutions. The fluorescence spectra were recorded to estimate the LOD and LOQ by using Stern-Volmer graphs, LOD = 3σ/Ksv, LOQ = 10σ/Ksv, where Ksv is the slope of the graph, and σ is the error of the intercept.
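As a purely illustrative sketch of how the LOD and LOQ follow from such a fit (the slope, intercept error, and resulting concentrations below are placeholder values, not the paper's results):

// LOD = 3σ/Ksv and LOQ = 10σ/Ksv, with Ksv the Stern-Volmer slope and σ the intercept error
function detectionLimits(ksv, sigmaIntercept) {
  return {
    lod: (3 * sigmaIntercept) / ksv,
    loq: (10 * sigmaIntercept) / ksv,
  };
}

// Placeholder fit results: slope 0.02 ppm⁻¹, intercept standard error 0.005
const { lod, loq } = detectionLimits(0.02, 0.005);
console.log(`LOD ≈ ${lod.toFixed(2)} ppm, LOQ ≈ ${loq.toFixed(2)} ppm`); // 0.75 ppm, 2.50 ppm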
Results and discussion
HRTEM images of N-CQDs (representative N-0.25 sample) show that the as-synthesized N-CQDs are spherical (Fig. 1a,c) with particle size ranging from 1.78 to 6.50 nm. The Gaussian distribution (Fig. 1b) of a sample of 150 particles shows the mean particle size of 4.60 ± 0.87 nm. In addition, the N-CQDs possess a crystalline structure as indicated by graphite lattice d-spacing of 0.22 nm (Fig. 1d). Similar features were also observed for the other samples g-CQDs, N-1, N-5 and N-10 analysed via TEM (Fig. S3).
To determine the nature of the functionalisation, the synthesised N-CQDs were investigated using Fourier transform infrared (FTIR) spectroscopy. The samples were classified into two groups: (i) N-CQDs with a lower concentration of ammonia (from N-0.25 to N-1) and (ii) those with a higher concentration of ammonia (from N-2.5 to N-10). The FTIR spectra (Fig. 2) showed that all N-CQDs have hydrophilic groups on their surface, such as O–H (hydroxyl), corresponding to the peak at 3389 cm−1, and N–H (3263 cm−1), thus confirming their good solubility in water. In addition, vibrations of C–H (2950 cm−1), C=O (1581 cm−1), C–N (1435 cm−1) and C–O (1080 cm−1) bonds were also observed in each sample13,14,15. A comparison of the FTIR spectra of the samples (Fig. S4) showed that increasing the N-doping (ammonia concentration, from N-0.25 to N-1) diminished the C–O stretching vibration at 1080 cm−1, while the group of samples with a higher concentration of ammonia (N-2.5 to N-10) showed a sharp C–N vibration at 1435 cm−1.
To achieve a deeper understanding of the surface characterisation of the N-CQDs and also to investigate the chemical composition of N-CQDs, X-ray photoelectron spectroscopy (XPS) was employed. The resultant XPS spectra shown in Fig. 3 were deconvoluted using Voigt functions (Lorentzian and Gaussian widths) with a distinct inelastic background for each component16. A minimum number of components is used to obtain a convenient fit. The binding energy scale was calibrated to the C 1s standard value of 284.6 eV. The atomic composition has been determined by using the integral areas provided by the deconvolution procedure normalized at the atomic sensitivity factor (Table S1). The XPS spectrum of the N-CQDs displays three typical peaks C1s (285.0 eV), N1s (399.0 eV), and O1s (531.0 eV). The fitted C1s spectrum was deconvoluted into four components, corresponding to carbon in form of C=C/C–C bonds (~ 284.4 eV), C–O/C–N (~ 285.8 eV), C=O (~ 287.3) and O=C–OH (~ 288.4 eV)17. Whilst, the N1s band showed three peaks after deconvolution which are 398.8 eV, 399.6 eV and 400.8 eV, representing pyridinic N, N–H and amide C–N, respectively18.
The content of each nitrogen doping species (pyridinic, pyrrolic and graphitic) are identified and quantified from the XPS spectra of NCQDs with the purpose of understanding their influence over the optical and chemical properties (Table S2). As commonly reported, the fluorescent property of CQDs can be enhanced by using nitrogen-doping. However, only the nitrogen bonded to carbon can improve the emission19. Also, a larger ratio of N/C was observed for N-CQDs samples synthesised with a higher concentration of ammonia (Table S1). The O1s region contains three peaks at 530.9 eV, 532.2 eV and 533.3 eV for C–OH/C–O–C, C=O, H–O–H, respectively20. In addition, the oxygen content is also a key parameter in the N-CQDs emission as it can maintain the balance between sp2 and sp3-carbon atoms21. Therefore, Raman spectroscopy was employed to investigate the disorder in the carbon bonding arrangement of N-CQDs.
The Raman spectra (Fig. S5) of the N-CQDs exhibited typical graphitic features consisting of the D mode (at 1368 cm−1) related to symmetry transformation by the defects, and the G band (at 1586 cm−1), which is assigned to the graphitic core sp2 (graphite-like) bonds. This is not surprising as the HRTEM images of N-CQDs showed a typical lattice spacing of graphite (see Fig. 1d). When comparing the Raman spectra between N-CQDs, at first glance, those spectra look similar, a common ratio ID/IG of 0.95 revealed a balance between the sp2 and sp3 bonds in the N-CQDs structure. This is different from g-CQDs where an ID/IG ratio of 0.83 was observed and assigned to the carbon core (sp2 bonds)10. This is probably attributed to the changes introduced by the nitrogen doping resulting in the transformation of C–C (sp2 bonds) into the sp3 bonding between N, O and C.
Optical properties of N-CQDs
The absorption spectra of the as-prepared N-CQDs measured using UV–Vis spectrophotometry are shown in Fig. 4. The N-CQDs samples have a strong peak around 265 nm and a shoulder around 295 nm (Fig. 4a). The 265 nm absorption peak is characteristic of π–π* transitions of the graphitic core (C=C or C–C) of sp2 domains present in the sp3 environment, and the 295 nm shoulder is attributed to n–π* (C=O) transitions and C–N/C=N bonds22,23. For comparison, the absorption spectrum of CQDs without nitrogen doping was also measured. It is noted that the absorption peaks of the N-CQDs are red-shifted compared to g-CQDs (synthesised from the same source, glucose, but without nitrogen doping), Fig. 4b. For g-CQDs, these transitions are observed at 225 nm (π–π*, graphitic core) and 280 nm (n–π*, C=O)10. Therefore, the absorption peak observed at 295 nm in the case of the N-CQDs is due to the formation of C–N/C=N bonds related to the doping effect caused by the presence of graphitic nitrogen24,25.
The photoluminescence (PL) spectra of as-prepared N-CQDs were measured using a range of different excitation wavelengths as shown in Fig. 5. The PL emission of each sample clearly showed the excitation-dependent PL which is beneficial for a variety of applications such as biosensors, bio-images, or LED devices26,27. The PL emission peaks shifted when different excitation wavelengths were applied, and each sample exhibited an optimal excitation wavelength. Overall, the PL study revealed interesting optical properties of the N-CQDs. Firstly, the PL results are consistent with previous reports where the excitation dependent emission phenomenon of CQDs was observed28. Secondly, the maximum excitation wavelengths varied from 360 to 320 nm with the concentration of ammonia.
However, the mechanism behind the excitation-dependent properties of CQDs is not yet clear. One of the most comprehensive and broadly accepted mechanisms used to interpret the excitation-dependent PL of CQDs is the quantum confinement effect, also known as the size effect14,21,28,29. In general, CQDs possess broad particle size distributions, which leads to a range of energy gaps and hence to the variation of emission wavelengths30,31. Here, however, HRTEM image analysis confirmed that increasing the amount of nitrogen doping did not increase the particle size of the as-prepared samples. Therefore, the observed red shift can instead be ascribed to the radiative recombination of electron–hole pairs hosted in the sp2 clusters32. Aside from the quantum confinement effect, surface-state theory is also broadly adopted to interpret the excitation-dependent PL behaviour of CQDs33,34,35. The UV–Vis absorbance showed that the 265 nm peak of the N-CQDs is related to the π–π* transition, which suggests the existence of a large number of π-electrons. The surface electronic states can conjugate with these π-electrons as a result of surface oxidation, which modifies the electronic structure of the N-CQDs34,36.
To interpret the mechanism of this effect, the PL lifetime and PL quantum yield (PLQY) of the N-CQDs were measured. The results (Table 1) showed an increase in both PL lifetime and PLQY upon nitrogen doping, with the highest values obtained for [N] ≥ 7.5 M. The PLQY of 9.6 ± 0.9% obtained for N-10 is a significant improvement compared to g-CQDs, which showed a PLQY of < 1%10. These results are comparable to the literature (Table S3), where CQDs and N-CQDs were synthesised via different methodologies.
The radiative and non-radiative recombination rate constants were estimated as kr = Φ/τ1/e and knr = (1 − Φ)/τ1/e, where Φ is the PLQY of the N-CQDs and τ1/e corresponds to the lifetime at which the fluorescence drops to 1/e of its initial value.
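To make the relation concrete, the short sketch below computes the radiative and non-radiative rate constants from a quantum yield and a 1/e lifetime. The PLQY corresponds to the value quoted above for sample N-10; the lifetime used here is a placeholder, not the value from Table 1.

```python
# Radiative / non-radiative rate constants from PLQY and 1/e lifetime:
#   k_r = phi / tau,   k_nr = (1 - phi) / tau
phi = 0.096                 # PLQY of sample N-10 (9.6%), from the text
tau_ns = 5.0                # placeholder 1/e lifetime in ns (Table 1 value not reproduced here)

tau_s = tau_ns * 1e-9
k_r = phi / tau_s           # radiative rate constant (s^-1)
k_nr = (1.0 - phi) / tau_s  # non-radiative rate constant (s^-1)

print(f"k_r  = {k_r:.2e} s^-1")
print(f"k_nr = {k_nr:.2e} s^-1")
```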
Table 1 and Fig. 6 show that when a higher concentration of ammonia was used, the non-radiative rate was significantly reduced. This is attributed to surface passivation by the nitrogen-containing functional groups, which leads to an enhanced PLQY38. In addition, the lower non-radiative rate constant indicates that the N-CQDs possess an efficient recombination process, which accounts for the nanosecond-scale PL lifetimes. These recombination processes suggest strong coupling of the excited core states with the surface states, confirming that the π-electron system affects the surface electronic states and thereby modifies the overall electronic structure of the N-CQDs36,39.
The stability of CQDs over a broad pH range is essential for sensing applications. Therefore, the PL of the N-CQDs was measured in solutions of different pH to establish the relationship between pH and emission intensity. The N-CQDs showed stable fluorescence over a broad pH range, from 2 to 11. For example, the fluorescence intensity of sample N-0.25 (Fig. 7a,b) was dramatically reduced by ~ 60% at pH 1, ~ 35% at pH 12 and ~ 40% at pH 13, whereas only a slight decrease (~ 12%) was observed at pH 11.
The diminished fluorescence intensity of the N-CQDs in strongly acidic and alkaline media was also noticeable for samples with a high nitrogen content (for example, sample N-10; Fig. 7c,d). This can be related to protonation/deprotonation of the surface functional groups, which disrupts the surface charge13. In basic conditions, H-bonding is eliminated by deprotonation, which can create irregular energy levels and reduce the N-CQD fluorescence40. In addition, H+ can introduce surface defects on CQDs by breaking the passivated OH– shell, resulting in decreased PL and a red-shifted spectrum41. Indeed, as shown in Fig. 7c, a 20 nm red shift was also noted for the N-CQDs under strongly acidic conditions (pH 1). We have previously assigned this to the prominent emission deriving from the graphitic core10.
Chromium (VI) ion-sensing
The N-CQDs were investigated for ion-sensing applications against a series of cations and anions (Fig. 8 and Fig. S6), including the chromium (VI) ion (CrO42−/Cr2O72−), which is a major anthropogenic pollutant in industrial wastewater and soils11. The results indicated that the N-CQDs are highly sensitive to hexavalent chromium and highly selective for it over a series of other cations and anions.
To quantify the sensitivity, the limits of detection (LOD) and quantification (LOQ) of the N-CQDs were determined by measuring the fluorescence quenching as a function of Cr (VI) concentration (Fig. 9a). A reduction in emission intensity with increasing chromium concentration was observed (Fig. 9b), i.e. the two are correlated. This correlation was fitted with a linear equation (y = mx + c), where the slope m gives the quenching constant Ksv and c is the intercept. The LOD and LOQ were determined using LOD = 3σ/Ksv and LOQ = 10σ/Ksv, respectively, where σ is the standard error of the intercept. The Stern–Volmer plot for N-10 in Fig. 9b was fitted with y = 0.0238x + 0.026 (R2 = 0.9999), giving a quenching constant Ksv = 0.0238 and an intercept of 0.026. With a standard error of the intercept of 0.0076, an LOD of 0.955 ppm and an LOQ of 3.182 ppm were obtained. The calculated LOD and LOQ for all the samples are shown in Fig. 10. These results represent a significant improvement over our previously reported g-CQDs10, for which an LOD of 3.62 ppm and an LOQ of 11.6 ppm were reported, and they are also comparable to other reported literature (Table S4).
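A minimal sketch of this analysis is given below: it fits a straight line to Stern–Volmer data and converts the fit statistics into LOD and LOQ using the 3σ/Ksv and 10σ/Ksv definitions quoted above. The concentration–quenching data points are illustrative placeholders (roughly following the reported fit), not the measured values, so the printed numbers will differ somewhat from those reported for N-10.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative Stern-Volmer data: Cr(VI) concentration (ppm) vs quenching response.
# These points are placeholders roughly following y = 0.0238 x + 0.026, not measured data.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])            # ppm
quench = 0.0238 * conc + 0.026 + np.random.default_rng(1).normal(0, 0.004, conc.size)

fit = linregress(conc, quench)
ksv = fit.slope                        # Stern-Volmer quenching constant (ppm^-1)
sigma = fit.intercept_stderr           # standard error of the intercept

lod = 3.0 * sigma / ksv                # limit of detection   (3*sigma / Ksv)
loq = 10.0 * sigma / ksv               # limit of quantification (10*sigma / Ksv)

print(f"Ksv = {ksv:.4f} ppm^-1, R^2 = {fit.rvalue**2:.4f}")
print(f"LOD = {lod:.2f} ppm, LOQ = {loq:.2f} ppm")
```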
To understand the quenching mechanism, the change in the PL lifetime of the N-CQDs in various ion solutions was studied. The fluorescence quenching observed in the presence of Cr (VI) can be assigned to the Inner Filter Effect (IFE), a physical phenomenon that occurs in a sensing system when the absorption spectrum of the absorber overlaps with the excitation and/or emission spectra of the fluorophore, leading to a reduced fluorescence emission intensity13. IFE quenching does not involve the radiative and non-radiative transitions of the CQDs, so the intrinsic fluorescence emission is not changed in the presence of the quencher molecule42.
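One common way to test for an inner filter effect (not reported in this work, but a standard check) is to apply the approximate correction F_corr = F_obs × 10^((A_ex + A_em)/2), where A_ex and A_em are the absorbances of the quencher-containing solution at the excitation and emission wavelengths. If the corrected intensities stay essentially constant as the quencher concentration rises, the quenching is attributable to IFE rather than to changes in the emitter itself. The absorbance and intensity values in the sketch below are placeholders.

```python
import numpy as np

# Approximate inner-filter-effect correction: F_corr = F_obs * 10**((A_ex + A_em) / 2)
# Placeholder data: observed intensity and solution absorbance at 340 nm (excitation)
# and 420 nm (emission) for increasing quencher concentration.
f_obs = np.array([1000.0, 820.0, 690.0, 560.0])   # observed PL intensity (a.u.)
a_ex  = np.array([0.00, 0.12, 0.22, 0.35])        # absorbance at lambda_ex = 340 nm
a_em  = np.array([0.00, 0.05, 0.10, 0.15])        # absorbance at lambda_em = 420 nm

f_corr = f_obs * 10 ** ((a_ex + a_em) / 2.0)

print("corrected intensities:", np.round(f_corr, 1))
# Nearly constant corrected intensities indicate IFE-dominated quenching.
```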
As illustrated in Fig. 10a, the N-CQD excitation and emission bands (λex = 340 nm and λem = 420 nm) overlap with the chromate (CrO42−) absorption band at 372 nm. Moreover, CrO42− shows a second absorption band at 274 nm that also overlaps with the most intense N-CQD absorption band at 265 nm. Further, the fluorescence lifetime of the N-CQDs did not change upon addition of Cr (VI) (Fig. S7 and Table S5), which provides evidence that the IFE is the mechanism of the fluorescence quenching. There is a downward trend in the LOD and LOQ of the N-CQDs (Fig. 10b): the LOD decreased from 4.9 ppm for sample N-0.25 to 0.95 ppm for sample N-10, while the LOQ improved from 16.3 ppm to 3.18 ppm. These results reveal that nitrogen doping enhances the fluorescence properties of the CQDs, giving a higher PLQY, which in turn improves the LOD and LOQ and consequently the sensing performance.
In conclusion, efficient N-CQDs were synthesised from a biomass precursor (glucose) and ammonia via the CHFS process. The synthesised N-CQDs possess excellent optical properties, with a PLQY of ~ 10%, and showed excellent pH stability (from pH 2 to 11). The N-CQDs were tested as a chemical sensor for Cr (VI) ions, and an LOD of 0.95 ppm and an LOQ of 3.18 ppm were obtained. The fluorescence lifetime studies confirmed the Inner Filter Effect (IFE) as the mechanism of the quenching behaviour underlying the nanosensing. Hence, this work presents a novel, rapid, single-step and green approach to nanomaterial synthesis in general, and carbon quantum dots in particular, which can then be used for a range of different applications.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. For the purpose of open access, the authors have applied a Creative Commons attribution (CC BY) licence to any author accepted manuscript version arising.
Wang, X., Feng, Y., Dong, P. & Huang, J. A Mini review on carbon quantum dots: Preparation, properties, and electrocatalytic application. Front. Chem. 7, 1–9 (2019).
Georgakilas, V., Perman, J. A., Tucek, J. & Zboril, R. Broad family of carbon nanoallotropes: Classification, chemistry, and applications of fullerenes, carbon dots, nanotubes, graphene, nanodiamonds, and combined superstructures. Chem. Rev. 115(11), 4744–4822 (2015).
Jing, S., Zhao, Y., Sun, R. C., Zhong, L. & Peng, X. Facile and high-yield synthesis of carbon quantum dots from biomass-derived carbons at mild condition. ACS Sustain. Chem. Eng. 7(8), 7833–7843 (2019).
Zhang, S. et al. Sustainable production of value-added carbon nanomaterials from biomass pyrolysis. Nat. Sustain. 3(9), 753–760 (2020).
Kang, C., Huang, Y., Yang, H., Yan, X. F. & Chen, Z. P. A review of carbon dots produced from biomass wastes. Nanomaterials 10(11), 1–24 (2020).
Dsouza, S. D. et al. The importance of surface states in N-doped carbon quantum dots. Carbon 183, 1–11 (2021).
Kou, X., Jiang, S., Park, S. J. & Meng, L. Y. A review: Recent advances in preparations and applications of heteroatom-doped carbon quantum dots. Dalton Trans. 49, 6915–6938 (2020).
Mintz, K. J., Zhou, Y. & Leblanc, R. M. Recent development of carbon quantum dots regarding their optical properties, photoluminescence mechanism, and core structure. Nanoscale 11(11), 4634–4652 (2019).
Kellici, S. et al. Rapid synthesis of graphene quantum dots using a continuous hydrothermal flow synthesis approach. RSC Adv. 7(24), 14716–14720 (2017).
Baragau, I. A. et al. Efficient continuous hydrothermal flow synthesis of carbon quantum dots from a targeted biomass precursor for on-off metal ions nanosensing. ACS Sustain. Chem. Eng. 9(6), 2559–2569 (2021).
Azimi, A., Azari, A., Rezakazemi, M. & Ansarpour, M. Removal of heavy metals from industrial wastewaters: A review. ChemBioEng. Rev. 4(1), 37–59 (2017).
Wu, P., Li, W., Wu, Q., Liu, Y. & Liu, S. Hydrothermal synthesis of nitrogen-doped carbon quantum dots from microcrystalline cellulose for the detection of Fe3+ ions in an acidic environment. RSC Adv. 7(70), 44144–44153 (2017).
Baragau, I.-A. et al. Continuous hydrothermal flow synthesis of blue-luminescent, excitation independent nitrogen-doped carbon quantum dots as nanosensors. J. Mater. Chem. A 8(6), 3270–3279 (2020).
Ding, H., Yu, S. B., Wei, J. S. & Xiong, H. M. Full-color light-emitting carbon dots with a surface-state-controlled luminescence mechanism. ACS Nano 10(1), 484–491 (2016).
Atchudan, R., Edison, T. N. J. I., Perumal, S., ClamentSagaya Selvam, N. & Lee, Y. R. Green synthesized multiple fluorescent nitrogen-doped carbon quantum dots as an efficient label-free optical nanoprobe for in vivo live-cell imaging. J. Photochem. Photobiol. A Chem. 372, 99–107 (2019).
Major, G. H. et al. Practical guide for curve fitting in X-ray photoelectron spectroscopy. J. Vac. Sci. Technol. A. 38(6), 061203 (2020).
Ji, C., Zhou, Y., Leblanc, R. M. & Peng, Z. Recent developments of carbon dots in biosensing: A review. ACS Sensors. 5(9), 2724–2741 (2020).
Osadchii, D. Y., Olivos-Suarez, A. I., Bavykina, A. V. & Gascon, J. Revisiting nitrogen species in covalent triazine frameworks. Langmuir 33(50), 14278–14285 (2017).
Bhattacharyya, S. et al. Effect of nitrogen atom positioning on the trade-off between emissive and photocatalytic properties of carbon dots. Nat. Commun. 8(1), 1–9 (2017).
Lesiak, B. et al. C sp2/sp3 hybridisations in carbon nanomaterials—XPS and (X)AES study. Appl. Surf. Sci. 452, 223–231 (2018).
Manioudakis, J. et al. Effects of nitrogen-doping on the photophysical properties of carbon dots. J. Mater. Chem. C. 7(4), 853–862 (2019).
Carbonaro, C. et al. On the emission properties of carbon dots: Reviewing data and discussing models. J. Carbon Res. 5(4), 60 (2019).
Kellici, S. et al. Continuous hydrothermal flow synthesis of graphene quantum dots. React. Chem. Eng. 3(6), 949–958 (2018).
Sarkar, S. et al. Graphitic nitrogen doping in carbon dots causes red-shifted absorption. J. Phys. Chem. C. 120(2), 1303–1308 (2016).
Strauss, V. et al. Carbon nanodots: Toward a comprehensive understanding of their photoluminescence. J. Am. Chem. Soc. 136(49), 17308–17316 (2014).
Joshi, P., Mishra, R. & Narayan, R. J. Biosensing applications of carbon-based materials. Curr. Opin. Biomed. Eng. 18, 100274 (2021).
Roy, P., Chen, P. C., Periasamy, A. P., Chen, Y. N. & Chang, H. T. Photoluminescent carbon nanodots: Synthesis, physicochemical properties and analytical applications. Mater. Today. 18(8), 447–458 (2015).
Gan, Z., Xu, H. & Hao, Y. Mechanism for excitation-dependent photoluminescence from graphene quantum dots and other graphene oxide derivates: Consensus, debates and challenges. Nanoscale 8(15), 7794–7807 (2016).
Liu, W. et al. Graphene quantum dots-based advanced electrode materials : Design, synthesis and their applications in electrochemical energy storage and electrocatalysis. Adv. Energy Mater. 2001275, 1–49 (2020).
Sun, Y. P. et al. Quantum-sized carbon dots for bright and colorful photoluminescence. J. Am. Chem. Soc. 128(24), 7756–7757 (2006).
Wang, W. et al. Shedding light on the effective fluorophore structure of high fluorescence quantum yield carbon nanodots. RSC Adv. 7(40), 24771–24780 (2017).
Mei, Q. et al. Highly efficient photoluminescent graphene oxide with tunable surface properties. Chem. Commun. 46(39), 7319–7321 (2010).
Dong, Y. et al. Carbon-based dots co-doped with nitrogen and sulfur for high quantum yield and excitation-independent emission. Angew Chem. Int. Ed. 52(30), 7800–7804 (2013).
Bao, L., Liu, C., Zhang, Z. L. & Pang, D. W. Photoluminescence-tunable carbon nanodots: Surface-state energy-gap tuning. Adv. Mater. 27(10), 1663–1667 (2015).
Bao, L. et al. Electrochemical tuning of luminescent carbon nanodots: From preparation to luminescence mechanism. Adv. Mater. 23(48), 5801–5806 (2011).
English, D. S., Pell, L. E., Yu, Z., Barbara, P. F. & Korgel, B. A. Size tunable visible luminescence from individual organic monolayer stabilized silicon nanocrystal quantum dots. Nano Lett. 2(7), 681–685 (2002).
De Laurentis, M. & Irace, A. Optical measurement techniques of recombination lifetime based on the free carriers absorption effect. J. Solid State Phys. 6, 1–19 (2014).
Ghosh, S. et al. Photoluminescence of carbon nanodots : Dipole emission centers. Nano Lett. 14, 5656–5661 (2014).
Omary, M.A., & Patterson, H.H. Luminescence, theory. in Encyclopaedia of Spectroscopy and Spectrometry. 3rd ed. 636–653. (Elsevier Ltd., 2016).
Wang, L. et al. Rationally designed efficient dual-mode colorimetric/fluorescence sensor based on carbon dots for detection of pH and Cu2+ ions. ACS Sustain. Chem. Eng. 6(10), 12668–12674 (2018).
Kong, W. et al. Optical properties of pH-sensitive carbon-dots with different modifications. J. Lumin. 148, 238–242 (2014).
Song, Y. et al. Investigation into the fluorescence quenching behaviors and applications of carbon dots. Nanoscale 6(9), 4676–4682 (2014).
SK, MTS, KGN and IAB would like to acknowledge LSBU for all the financial support provided in the completion of this research work. IAB and AN would like to acknowledge funding through the Core Program 21N/2019 and the Project ELI_17/2020 (granted by the Institute of Atomic Physics) of the National Institute of Materials Physics.
The authors declare no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Nguyen, K.G., Baragau, IA., Gromicova, R. et al. Investigating the effect of N-doping on carbon quantum dots structure, optical properties and metal ion screening.
Sci Rep 12, 13806 (2022). https://doi.org/10.1038/s41598-022-16893-x
Received: 11 April 2022
Accepted: 18 July 2022
Published: 15 August 2022
Blue Whales normally travel as individuals or in small groups of two or three animals, but in areas where krill are plentiful they can be found feeding in groups of 50 or more.
Blue Whales existed in large numbers until the early twentieth century. In a period of less than 40 years they were hunted nearly to extinction. As of 2002 there were 5 to 10 thousand worldwide and their numbers appear to be increasing.
A baleen whale, the Blue Whale is believed to be the largest animal ever to have lived.
They can reach lengths of over 90 feet and attain weights of 150 tons or more.
They feed almost exclusively on huge amounts of tiny shrimp-like crustaceans called krill.
The huge fluke of a Blue Whale can power it to speeds of over 30 miles per hour for short distances.
Range: Found worldwide. In the Pacific Ocean they range from the Arctic to the Gulf of California
Species: Balaenoptera musculus
Do you want to be TEFL or TESOL-certified and teach in Jiangji Zhen? Are you interested in teaching English in Bozhou Shi? Check out ITTT's online and in-class courses, become certified to teach English as a foreign language, and start teaching English online or abroad! ITTT offers a wide variety of online TEFL courses and a great number of opportunities for English teachers and for teachers of English as a second language.
In this unit I learnt about the past tense and its four aspects: past simple, past continuous, past perfect and past perfect continuous. Each of these tenses has different forms and usages, although sometimes the latter can overlap from one tense to another. It was also very useful to read about the most common mistakes that students tend to make when using the tenses, and about some teaching ideas for the activate stage.

Reported speech is very useful and very important to comprehend. A good lesson plan must include several strategies to help students practice reported speech, such as intermediaries and media interviews. The guide to tense changes from direct speech to reported speech is very helpful, and it is a great idea to share it with students so they can practice and remember the different changes of tense. It was a great lesson.
Today, everybody knows that farming is the most important activity that humans need to do in order to survive. However, that was not true 12,000 years ago. Humans then were hunters and gatherers: they hunted animals and gathered plants. Like nomads, they moved from one place to another when the food supply diminished. They could not settle in one place and lived in small groups.
Farming is certainly not easy. It requires hard work, plenty of patience and waiting, and it brings problems of storage. So what made humans think of farming? The last ice age ended around 10,000 years ago. The warmer weather after the ice age let people live longer and contributed to a rise in population. The heads of families had more mouths to feed, and that pushed them to learn to farm. Crops from farming were more reliable and could feed more people than hunting or gathering could. The food supply became a little more predictable. This also led to permanent settlements instead of small tribes that moved frequently.
How did they come up with the techniques of farming? Our ancestors had great observational skills. They must have observed that if they put seeds in the ground, the seeds grew into the same plants that they ate. They realized the importance of water in farming, and that the soil near river banks produced healthier crops. Hence, most of the early settlements were near water sources. Through trial and error, they perfected the art of farming.
Then, to protect their farms, they had to stay near them. They invented tools – for pounding the grains – and built houses for people to live in. That was the beginning of a settled and organised life.
Our ancestors then had to protect their land. That gave them the concept of ownership – owning areas of land and dividing them up into states and countries. We could probably say that history has evolved the way it has because of one activity: farming.
Part of the series Go Facts - How is it Made?.
Go Facts sets deliver the clear, exciting and easy-to-read nonfiction your students need. As they move up through the reading levels, your students are guided from their own experiences to the wider world around them. Each Go Facts title is a model of coherent, integrated topic development, with concepts and ideas supported by fact boxes, photographs and illustrations. Go Facts introduces the structures and conventions of nonfiction writing, presenting models that are explored via activities in the Teaching Guides (sold separately).
Go Facts - How is it Made? - Clean Water
We use water every day—from washing our clothes to taking a bath. Find out how water goes from being rainwater to clean drinking water.
- Subject: Non-Fiction, SOSE/HSIE
- School Level*: Lower Primary
- Reading Level: 21
- Reading Age*: 6.5+ yrs
- Word Count: 300-350
- Broadband: D
- Lexile: 610L
*Please note: School Level and Reading Age are a guide only.
About the Series
Go Facts - How is it Made? looks at the processes used to create products which we use every day. The books explore the processes used to create bread, fuel, clean water and TV shows. The set introduces readers to various nonfiction text types, including explanations and procedures, and also features photographs, illustrated diagrams, flow charts, indexes and glossaries.
Also Available in the 'Go Facts - How is it Made?' Series
Go Facts - How is it Made? - Teaching Guide
Each set of four Go Facts books with its Teaching Guide provides an essential resource for extending students’ reading and writing across the curriculum. Each set contains written examples of all the nonfiction text types complemented by a wide variety of information presented visually.
Go Facts - How is it Made? - DVD Video
The Go Facts - How is it Made? - DVD Video translates the process covered in the Go Facts - How is it Made? Books - Fuel, Bread, Clean Water and TV Show - to the screen.
Go Facts - How is it Made? Set
Titles in the Go Facts - How is it Made? Set
- Go Facts - How is it Made? - TV Shows
- Go Facts - How is it Made? - Bread
- Go Facts - How is it Made? - Clean Water
- Go Facts - How is it Made? - Fuel
- Product Type: Readers & Literacy
- Year Level: Year 2
Many eLearning practitioners would agree that a successful digital learning experience relies on student engagement. However, facilitating engagement online can be challenging – particularly when it comes to sustaining students’ attention over time, in the absence of traditional face-to-face interaction. Fortunately, recommendations arising from numerous research studies have highlighted ways to address this issue. One such recommendation has been the use of pedagogical agents. Studies indicate that, when interactive pedagogical agents are used as instructional aids in the design of eLearning courses, student engagement increases – resulting in a more effective digital learning experience overall (Bickmore et al., 2011; Cook, 2017; Dinçer and Doğanay, 2015; Lane, 2016; and Martha and Santoso, 2019).
This article will provide an overview of pedagogical agents and their benefits, before making recommendations for how they can be used to increase student engagement in online courses.
What is a pedagogical agent?
A pedagogical agent can be described as an autonomous, computer-generated character that is designed to support students during learning – in particular, by reacting to changes in the environment, communicating with other agents, and engaging with students in the context of an interactive learning environment (Vallverdú and Casacuberta, 2009). The concept was originally borrowed from the fields of computer science, animation and artificial intelligence (AI), and applied to online learning environments for the purpose of simulating the social interaction that occurs in a traditional classroom setting (Phan, 2011). Pedagogical agents are embedded in the instructional application, acting in the background as part of the architecture of the educational system. They provide humanlike assistance, guiding students through the multimedia learning environment and contributing positively toward increasing student engagement – and, by implication, effective learning. Let’s take a brief look at how this is achieved.
Pedagogical agents can guide students’ learning processes by providing them with feedback while educational tasks are being performed. This can take the form of speech and/or nonverbal behaviours.
For example, after an activity has been completed successfully, the pedagogical agent can provide positive verbal and/or non-verbal feedback about the completed task. Alternatively, if the activity has not been completed successfully, the pedagogical agent can guide the student toward helpful material that will aid them in completing the task, as illustrated below. These social affordances have been found to promote a positive affect and aid in the transfer of learning (Kramer and Bente in Lane, 2016).
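To make this concrete for course developers, here is a minimal sketch of the kind of feedback rule described above: praise on success, and a pointer to helpful material plus an encouraging message on failure. The function and resource names are entirely hypothetical and do not correspond to any particular eLearning platform's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    message: str                         # what the agent says
    gesture: str                         # non-verbal behaviour to play (animation name)
    hint_resource: Optional[str] = None  # optional link to supporting material

def agent_feedback(task_id: str, passed: bool, attempts: int) -> Feedback:
    """Return verbal and non-verbal feedback for a completed learning activity."""
    if passed:
        return Feedback(
            message="Well done - you completed this task correctly!",
            gesture="smile_and_nod",
        )
    # Unsuccessful attempt: guide rather than penalise, escalating support over attempts.
    hint = f"/resources/{task_id}/worked-example" if attempts >= 2 else f"/resources/{task_id}/hint"
    return Feedback(
        message="Not quite - let's look at some material that will help you with this step.",
        gesture="encouraging_gesture",
        hint_resource=hint,
    )

print(agent_feedback("fractions-01", passed=False, attempts=2))
```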
Furthermore, according to Johnson and Lester (2016), pedagogical agents can also support learning by:
- demonstrating how tasks should be performed;
- helping students to navigate from one point to another;
- providing cues to guide students; and
- eliciting emotions in students by expressing empathy toward them.
In doing so, pedagogical agents have a positive effect on aspects of learning such as motivation, satisfaction, appreciation, self-sufficiency and overall performance (Dinçer and Doğanay, 2015: 332). As such, their presence makes online courses more immersive and engaging, thereby increasing knowledge retention (Gettinger and Ball, 2007; Krause and Coates, 2008; and Kuh, 2009). The guidelines that follow explain how pedagogical agents can be used to increase student engagement in online courses.
Using pedagogical agents in eLearning
When using pedagogical agents in online courses, it is vital to ensure that they can guide, mentor, assess and engage students effectively during the learning process. The following points outline some recommendations for achieving this.
1. Research your target audience and the design of your pedagogical agent
Before creating a pedagogical agent for an online course, it is wise to research different designs, as well as how different target audiences might respond to these designs. Remember that agents often take on the role of virtual instructors (Pappas, 2014). As such, it is important that the chosen design appeals to, and motivates, your specific target audience (Shiban et al., 2015). Be mindful of selecting designs that may ‘steal the spotlight’. The role of a pedagogical agent is to enhance the learning experience, and to guide the student through the processes of knowledge acquisition and retention. As such, the design of your agent should improve the learning experience rather than disrupt it (Pappas, 2014).
2. Decide on the qualities that your pedagogical agent should possess
Based on the research you have conducted, you will need to decide on the qualities that your pedagogical agent should possess. Pedagogical agents should be relatable to your target audience, as this allows students to form a ‘connection’ with them – thereby providing further incentive for engagement with the learning content. Having said this, it may be surprising to learn that these agents don’t need to appear lifelike in order to be effective: cartoonlike agents can be used just as successfully as human-looking agents (Paulose, 2016). However, it is important for the pedagogical agent to display humanlike behaviour. Research indicates that students learn more effectively from agents that exhibit humanlike gestures, movements and facial expressions (Lusk and Atkinson in Paulose, 2016). Furthermore, Mayer (2014) has found that pedagogical agents are more effective when they sound conversational (personalisation principle) and humanlike (voice principle) in comparison to those with formal, machine-like speech patterns.
Adams (2017) and Pappas (2014) recommend that a conversational yet authoritative tone be used when guiding or conversing with students, to come across as helpful and friendly while still maintaining a sense of professionalism. If the tone is too authoritative, students might not find the agent relatable, motivating or interesting. Conversely, if the tone is too friendly, students may perceive the agent to lack credibility.
3. Ensure that the agent guides students through the course effectively
A key function of pedagogical agents is to guide students through the coursework effectively. This can be achieved in a number of ways. For example, the agent can assist students in navigating from one point to another, offer advice and helpful tips for completing tasks, or draw attention to (i.e. signal) important information. This can be achieved by means of non-verbal feedback, such as gestures or facial expressions, or through direct verbal feedback and conversation. This reduces the potential for confusion, while aiding performance and enabling students to attend to, acquire and retain essential information successfully (Johnson and Lester, 2016; O’Dowd et al., 2019; and Pappas, 2014).
4. Assist students in assessing their knowledge
Pedagogical agents can also assist students in assessing their knowledge throughout the course. This can be achieved through both questioning and feedback. For example, the agent may be used to ask the student direct questions about the course content in order to test their understanding. Additionally, they could pose probing questions to prompt the student to explore more complex ideas, thus uncovering the beliefs that inform their thinking (an approach that is referred to as Socratic questioning). Feedback (whether corrective or motivational) can then be provided based on the student’s performance. When used as part of formal assessments, pedagogical agents can also help to make the assessment process itself more interactive and enjoyable (Pappas, 2014).
Research indicates that educational systems aided by interactive pedagogical agents result in more effective eLearning courses (Bickmore et al., 2011). Pedagogical agents increase student engagement and enhance the overall learning experience by guiding, mentoring and interacting with students. In this way, they provide tangible, quantifiable benefits in stimulating learning and, ultimately, aiding academic success (Dinçer and Doğanay, 2015). By following appropriate guidelines and tailoring pedagogical agents to the target audience, educators can create an interactive learning experience in the absence of traditional face-to-face interaction.
Adams, J. M. (2017), ‘For teachers, it’s not just what you say, it’s how you say it’. EdSource [website] <https://edsource.org/2017/for-teachers-its-not-just-what-you-say-its-how-you-say-it/574363> accessed 15 May 2020.
Bickmore, T., Pfeifer, L. and Schulman, D. (2011), ‘Relational Agents Improve Engagement and Learning in Science Museum Visitors’. Intelligent Virtual Agents: 11th International Conference. Reykjavik, Iceland. Conference paper, pp. 15–17.
Cook, C. (2017), ‘Avatars and Instruction: How Pedagogical Agents Can Improve Digital Learning’. Medium [website] <https://medium.com/inspired-ideas-prek-12/avatars-and-instruction-how-pedagogical-agents-can-improve-digital-learning-e7930f3b1e01> accessed 15 May 2020.
Dinçer, S. and Doğanay, A. (2015), ‘The Impact of Pedagogical Agent on Learners’ Motivation and Academic Success’. Practice and Theory in Systems of Education (10)4: 329–348
Gettinger, M. and Ball, C. (2007), ‘Best practices in increasing academic engaged time’. In Thomas, A. and Grimes, J. (Eds.) Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists, pp. 1043–1075.
Higher Education Funding Council for England (2008), Tender for a Study into Student Engagement. Bristol: Higher Education Funding Council for England.
Johnson, W. L. and Lester, J. C. (2016), ‘Face-to-Face Interaction with Pedagogical Agents, Twenty Years Later’. International Artificial Intelligence in Education Society 26: 25–36.
Krause, K. L. and Coates, H. (2008), ‘Students’ engagement in first-year university’. Assessment and Evaluation in Higher Education 33(5): 493–505.
Kuh, G. D. (2009), ‘The National Survey of Student Engagement: Conceptual and Empirical Foundations’. In Umbach, P. D. (Ed.) New Directions For Institutional Research. Hoboken, NJ: Wiley InterScience, pp. 5–20.
Lane, H. C. (2016), ‘Pedagogical Agents and Affect: Molding Positive Learning Interactions’. In Tettegah, S. Y. and Gartmeier, M. (Eds.) Emotions, Technology, Design, and Learning. London: Academic Press, pp. 47–62.
Martha, A. S. D. and Santoso, H. B. (2019), ‘The Design and Impact of the Pedagogical Agent: A Systematic Literature Review’. Journal of Educators Online 16(1).
Mayer, R. E. (Ed.) (2014), The Cambridge Handbook of Multimedia Learning. 2nd edn. New York, NY: Cambridge University Press.
O’Dowd, R., Sauro, S. and Spector‐Cohen, E. (2019), ‘The Role of Pedagogical Mentoring in Virtual Exchange’. TESOL Quarterly (54)1: 146–172.
Pappas, C. (2014), ‘Top 10 Tips on How to Use Avatars in eLearning’. eLearning Industry [website] <https://elearningindustry.com/top-10-tips-use-avatars-in-elearning> accessed 15 May 2020.
Paulose, A. (2016), ‘How to Use Pedagogical Agents in Your eLearning’. Infopro Learning [website] <https://www.infoprolearning.com/blog/how-to-use-pedagogical-agents-elearning/> accessed 26 October 2020.
Person, N. K. and Graesser, A. C. ‘Instructional Design: Pedagogical Agents And Tutors’. Education Encyclopedia [website] <https://education.stateuniversity.com/pages/2095/Instructional-Design-PEDAGOGICAL-AGENTS-TUTORS.html> accessed 15 May 2020.
Phan, H. (2011), ‘A cognitive multimedia environment and its importance: A conceptual model for effective e-learning and development’. International Journal on E-Learning 10(2): 199–221.
Shiban, Y., Schelhorn, I., Jobst, V., Hörnlein, A., Puppe, F., Pauli, P. and Mühlberger, A. (2015), ‘The appearance effect: Influences of virtual agent features on performance and motivation’. Computers in Human Behavior 49: 5–11.
Vallverdú, J. and Casacuberta, D. (2009), Handbook of research on synthetic emotions and sociable robotics: New applications in affective computing and artificial intelligence. Hershey, PA: Information Science Reference. |
This course introduces students to culturally significant films directed by African Americans. Students will come to understand the social and historical context of the films and filmmakers and be able to understand how and why these films are culturally significant to African Americans in particular, but also America in general. Through research, viewings, and discussion, students will gain a better grasp of the complex issues that inform and influence African American cinema. Students will gain an understanding of and be able to discuss African American film culture and history in relation to American culture and history as a whole. Students will learn about significant African American screenwriters, directors, and actors and their relevance to African American history and culture. Students will understand the importance and function of African American films within their social, political, and historical contexts. Students will be able to watch, analyze, and critique African American films with a thorough understanding of the theoretical and cultural contexts within which such critiques should be grounded.
African American culture is integral to American culture, especially in the realm of popular entertainment. African American literature, music, and film have both reflected and influenced American cultural reality for over a century. African Americans’ involvement in the American film industry, as actors, writers, producers and directors, has been marked by both steady progress and persistent difficulties. Working within a unique set of constraints and considerations, African Americans have contributed immensely to the American cultural and cinematic landscape, in both obvious and subtle ways. This course examines those contributions, the people who made them, and the myriad ways they have been helped and hindered by the system within, or around which, they work. African American cinema is a uniquely multifaceted medium that provides a viewpoint from which to experience and understand the African American cultural experience in particular and American culture in general.
We have observed space rocks for thousands of years. Iron meteorites have been prized throughout history: who can forget the meteoric iron dagger of King Tutankhamun, or the Buddha figure carved from a meteorite that fell 15,000 years ago? Similarly, our history of comet sightings is long, and many have had a significant impact on human history and the development of legends and omens. Halley’s Comet, of course, was immortalized in the famous Bayeux Tapestry, made in the eleventh century.
But what is the difference between an asteroid and a comet? Or a meteor and a meteorite? The answers to these questions, along with an overview of the different types of space rock, can be found below.
If you’re looking for more stargazing tips, be sure to check out our beginner’s guide to astronomy and our UK full moon calendar to get the most out of the night sky. For a complete roundup of this year’s meteor showers, we’ve listed them all in our meteor shower calendar.
What is this space rock?
Asteroids: Small rocky objects, often irregular in shape, remnants of the formation of the solar system
Comets: Large icy bodies of frozen gas, dust, and rock, with a frozen core
Meteoroid: Fragments and debris of asteroids and comets
Meteor shower: Several meteoroids burn up in Earth’s atmosphere
Fireball: An exceptionally bright meteor that can be seen over a wide area
Meteorite: Meteoroids that survive the journey through Earth’s atmosphere and fall to the surface
Dwarf planet: A small spherical (or near-spherical) celestial body orbiting the Sun, which does not have enough mass to clear its vicinity of debris
Asteroids are small rocky objects that orbit the Sun. Most asteroids are irregular in shape, although a few are nearly spherical. There are over a million known asteroids, and most are found in the main asteroid belt between Mars and Jupiter. Asteroids are rocky remains left over from the early formation of the solar system.
When an asteroid shares its orbit with a planet, it is called a Trojan asteroid. Earth has two known Trojan asteroids; however, both are difficult to see because they travel ahead of Earth in its orbit around the Sun, and therefore appear low on the horizon, near the Sun, around sunrise. Not ideal viewing conditions.
Asteroid Bennu (shown in the video above by NASA’s Science Visualization Studio) made news in 2020 when NASA’s OSIRIS-REx mission successfully landed on the asteroid’s surface and collected over 400g of samples. It was an impressive achievement, and one that smashed the original 60g target.
Bennu is estimated to be around 4.5 billion years old, forming just after our solar system itself. It is most likely a piece of a much larger carbon-rich asteroid that broke apart between about 700 million and 2 billion years ago, and it comes close to Earth every six years.
But astronomers have to wait a little longer, because the spacecraft is still on its way back and is due to deliver its precious cargo on 24 September 2023. After that, the spacecraft will set off again, this time to study the near-Earth asteroid Apophis under a new mission name, OSIRIS-APEX.
Comets are the snowballs of the cosmos. These icy bodies of frozen gas, rock, and dust orbit the Sun in highly elliptical paths, often shedding material as they go. When their orbit brings them closer to the Sun, they heat up. This causes solid ice to transform into gas, which is drawn out into the distinctive comet tail. Perhaps the most famous comet is Halley’s Comet, which is due to return to our skies in July 2061.
Comets are so varied in size, orbit and composition that it has given rise to many classifications over the years. However, for the purposes of discussion, they can be classified into four broad categories:
- Non-periodic comets: comets that have passed through the solar system only once
- Short-period comets: comets with an orbit of less than 200 years
- Long-period comets: comets with an orbit of more than 200 years
- Lost comets: comets that “disappeared” and were not seen at their most recent perihelion (closest point to the Sun)
There are three main parts in a comet:
- Core: the solid core
- Coma: gases expelled by the nucleus
- Tail: the stream of gas and dust left in the comet’s wake
When comets or asteroids travel around the Sun, they leave a trail of debris in their wake. When Earth’s orbit intersects with this debris, the result is hundreds (or thousands) of light trails, appearing to radiate out from a point in the night sky. This is why meteor showers are often seen at the same time each year, and meteors are more numerous on certain nights.
Thus, a meteor could more accurately be described as an event, rather than an object: a tiny particle (called a meteoroid) enters the upper parts of the atmosphere and heats the air around it to incandescence, and it is this glow that we see as a meteor.
The Eta Aquariid and Orionid meteor showers are created by Earth’s orbit passing through the trail left by Halley’s Comet, while the impressive Leonids are remnants of Comet Tempel-Tuttle.
Meteoroids are your classic space rocks. As fragments and debris of asteroids and comets, they are among the smallest bodies in the solar system. These particles continue to orbit the Sun in approximately the same orbit as the parent body from which they originated, and over time they move away from the parent as the orbit becomes littered with these particles.
When a meteoroid enters the Earth’s atmosphere and burns up, it leaves a bright trail across the sky and is known as a meteor – or more popularly, a shooting star. Meteoroids can range in size from as small as a speck of dust to small asteroids.
If a meteoroid meets the atmosphere head-on, it can arrive at up to 45 miles per second, while if it has to catch up with Earth from behind, it may arrive at only about 7.5 miles per second. Usually the speed is somewhere in between.
If a meteor entering the atmosphere exceeds Venus in brightness, it is a fireball. A large fireball may be visible for 5-10 seconds, and if you see one that appears to be close, listen out for a rumble or a “bang”! A fireball that lasts more than 10 seconds is, however, more likely to be an aircraft, or a satellite or piece of space debris falling back to Earth.
Sometimes a fireball can explode, as was the case with the Tagish Lake Meteorite which fell in 2000. It exploded into the sky with a force about a quarter of that of the Hiroshima bomb, shattering into about 500 pieces that rained down around Tagish Lake in British Columbia, Canada.
It is the meteoroids that survive the journey to the Earth’s surface. So far, over 69,000 meteorites have been found (and named) on Earth.
If you see a meteor streak across the sky with brightness similar to a quarter moon, chances are it will survive the trip and land on Earth. Meteorites fall every day, but finding one is incredibly rare.
There are different types of meteorites, and most can be classified as an iron meteorite, a stony meteorite, or a stony iron meteorite. These categories are broadly defined by the amount of iron-nickel metal contained in the meteorite:
- Iron Meteorite: those that are almost entirely made of metal
- Stony Meteorite: those that consist almost entirely of silicate crystals
- Stony-Iron Meteorite: those with similar amounts of metal and silicate crystals
These broad categories are subdivided based on the structure, chemistry, and minerals contained in the meteorite. For example, pallasites are a beautiful type of stony iron meteorite with large translucent olivine green crystals, fully embedded in metal.
Pallasites are rare, and only 61 are known to date. Their exact origin is still hotly debated: some scientists believe they originate from the core-mantle boundary of ancient worlds, while other research argues that they formed higher in the mantle. But whatever form they take, pallasites are awesome.
Dwarf planets are massive enough to be affected by gravity and can take on a round or nearly round shape. However, unlike planets, they are unable to clear their orbital path. Each of the known dwarf planets in the solar system is smaller than our Moon. There are five officially recognized dwarf planets in our solar system: Pluto, Ceres, Makemake, Haumea, and Eris.
Although we all know that Pluto should be recognized as a planet…
As indicated by the International Astronomical Union, the criteria for a dwarf planet are:
- It is in orbit around the Sun
- Round (or nearly round) in shape, having been drawn into that shape by its own gravity
- Not a satellite from another planet
- Is not massive enough to clear its neighborhood of debris
Another potential dwarf planet, 2015 RR245, was discovered by the Outer Solar System Origins Survey at the Mauna Kea observatories on the Big Island of Hawaii in 2015, and was photographed by Hubble in 2020. It measures approximately 600 km in diameter and is currently being studied to determine whether it has a satellite (moon) and, if so, what the orbit of that satellite might be.
Chemical reactions can result in molecules attaching to each other to form larger molecules, molecules breaking apart to form two or more smaller molecules, or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds, and in some types of reaction may involve production of electrically charged end products. Reactions can occur in various environments: gases, liquids, solids, or combinations of same: for example, at interfaces.
As shown in the adjacent figure for a general case, the participating reactants typically must surmount a threshold energy, or activation energy, to initiate the reaction, and intermediates exist briefly before the final products are formed. The energy depicted involves more than the change in bonding of a single occurrence of the transformation: it is intended to show the evolution of an entire soup of reactants, intermediates and products. This evolution typically involves generation of heat, physical motion and mixing of all the participants, as well as the chemical reactions themselves. In the figure, the activation energy on the left indicates the change in all these factors needed to surmount the activation threshold, and the energy output on the right indicates the change when the reaction is complete, that is, the difference between the starting system consisting only of the reactants and the ending system consisting only of the final products. This overall change in energy may be made to do useful work, or simply be dissipated as heat.
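As a rough numerical illustration of why the activation threshold matters (the figures below are generic textbook-style values, not taken from this article), the fraction of molecular encounters energetic enough to surmount a barrier of height Ea scales roughly as exp(−Ea/RT), so modest temperature changes strongly affect how quickly a reaction proceeds:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def barrier_fraction(ea_kj_per_mol: float, temperature_k: float) -> float:
    """Approximate fraction of encounters with energy above the activation barrier."""
    return math.exp(-ea_kj_per_mol * 1000.0 / (R * temperature_k))

ea = 50.0  # hypothetical activation energy, kJ/mol
for t in (298.0, 348.0):  # room temperature vs. 50 K warmer
    print(f"T = {t:.0f} K: fraction ~ {barrier_fraction(ea, t):.2e}")
# The ~50 K increase raises this fraction by roughly a factor of 20 for this barrier.
```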
Chemical reactions can be either spontaneous, requiring no input of energy, or non-spontaneous, often requiring the input of some type of energy such as heat, light or electricity. Classically, chemical reactions are strictly transformations that involve the movement of electrons during the forming and breaking of chemical bonds. A more general concept of a chemical reaction would include nuclear reactions and elementary particle reactions.
Energy changes in reactions
In terms of the energy changes that take place during chemical reactions, a reaction may be either exothermic or endothermic ... terms which were first coined by the French chemist Marcellin Berthelot (1827 − 1907). The meaning of those terms and the difference between them are discussed below and illustrated in the adjacent diagram of the energy profiles for exothermic and endothermic reactions.
Exothermic chemical reactions release energy. The released energy may be in the form of heat, light (for example, flame), electricity (for example, battery discharge), sound and shock waves (for example, explosion) … either singly or in combinations.
A few examples of exothermic reactions are:
- Combustion of fuels such as methane, wood or petrol (releases heat and light)
- Neutralization of an acid by a base (releases heat)
- Respiration, in which glucose is oxidized to carbon dioxide and water (releases heat)
Endothermic chemical reactions absorb energy. The energy absorbed may be in various forms, just as is the case with exothermic reactions.
A few examples of endothermic reactions are:
- Dissolving ammonium nitrate (NH4NO3) in water (H2O) (absorbs heat and cools the surroundings)
- Electrolysis of water to form hydrogen (H2) and oxygen (O2) gases (absorbs electricity)
- Photosynthesis of chlorophyll plus water plus sunlight to form carbohydrates and oxygen (absorbs light)
Types of chemical reactions
- Isomerization, in which a chemical compound undergoes a structural rearrangement without any change in its net atomic composition (see stereoisomerism)
- Direct combination or synthesis, in which 2 or more chemical elements or compounds unite to form a more complex product, for example: N2(g) + 3 H2(g) → 2 NH3(g) (here (g), (l), (aq) and (s) denote gas, liquid, aqueous solution and solid, respectively)
- Chemical decomposition, in which a compound is decomposed into elements or smaller compounds, for example: 2 H2O(l) → 2 H2(g) + O2(g)
- Single displacement or substitution, characterized by an element being displaced out of a compound by a more reactive element, for example: Zn(s) + 2 HCl(aq) → ZnCl2(aq) + H2(g)
- Metathesis or double displacement, in which two compounds exchange ions or bonds to form different compounds, for example: AgNO3(aq) + NaCl(aq) → AgCl(s) + NaNO3(aq)
- Acid-base reactions, broadly characterized as reactions between an acid and a base, can have different definitions depending on the acid-base concept employed. Some of the most common are:
- Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
- Brønsted-Lowry definition: Acids are proton (H+) donors; bases are proton acceptors. Includes the Arrhenius definition.
- Lewis definition: Acids are electron-pair acceptors; bases are electron-pair donors. Includes the Brønsted-Lowry definition.
- Redox reactions, in which changes in the oxidation numbers of atoms in the involved species occur. Those reactions can often be interpreted as transferences of electrons between different molecular sites or species. An example of a redox reaction is:
- 2 S2O32−(aq) + I2(aq) → S4O62−(aq) + 2 I−(aq)
- In this reaction, iodine (I2) is reduced to the iodide anion (I−) and the thiosulfate anion (S2O32−) is oxidized to the tetrathionate anion (S4O62−).
- Combustion, a kind of redox reaction in which any combustible substance combines with an oxidizing element, usually oxygen, to generate heat and form oxidized products.
- Disproportionation, in which a single reactant forms two distinct products differing in oxidation state, for example:
- 2 Sn2+ → Sn + Sn4+
- Organic reactions encompass a wide assortment of reactions involving organic compounds which are chemical compounds having carbon as the main element in their molecular structure. The reactions in which an organic compound may take part are largely defined by its functional groups.
About 177 AD the Greek philosopher Celsus, in his book ‘The True Word’, expressed what appears to have been the consensus Jewish opinion about Jesus, that his father was a Roman soldier called Pantera. ‘Pantera’ means Panther and was a fairly common name among Roman soldiers. The rumor is repeated in the Talmud and in medieval Jewish writings where Jesus is referred to as “Yeshu ben Pantera”.
In 1859 a gravestone surfaced in Germany for a Roman soldier called Tiberius Iulius Abdes Pantera, whose unit, Cohors I Sagittariorum, had served in Judea before Germany. Romantic historians have hypothesized him to be Jesus’ father, especially as ‘Abdes’ (‘servant of God’) suggests a Jewish background.
- Tib(erius) Iul(ius) Abdes Pantera
- Sidonia ann(orum) LXII
- stipen(diorum) XXXX miles exs(ignifer?)
- coh(orte) I sagittariorum
- h(ic) s(itus) e(st)
- Tiberius Iulius Abdes Pantera
- from Sidon, aged 62 years
- served 40 years, former standard bearer (?)
- of the First Cohort of Archers
- lies here
The gravestone is now in the Römerhalle museum in Bad Kreuznach, Germany.
It appears this First Cohort of Archers moved from Palestine to Dalmatia in 6 AD, and to the Rhine in 9 AD. Pantera came from Sidon, on the coast of Phoenicia just west of Galilee, presumably enlisted locally. He served in the army for 40 years until some time in the reign of Tiberius. On discharge he would have been granted citizenship by the Emperor (and been granted freedom if he had formerly been a slave), and added the Emperor’s name to his own. Tiberius ruled from 14 AD to 37 AD. Pantera’s 40 years of service would therefore have started between 27 BC and 4 BC.
As Pantera would probably have been about 18 when he enlisted, it means he was likely born between 45 BC and 22 BC. He could have been as old as 38 or as young as 15 at the time of Jesus’ conception in the summer of 7 BC.
In 6 AD when Jesus was 12, Judas of Galilee led a popular uprising that captured Sepphoris, the capital of Galilee. The uprising was crushed by the Romans some four miles north of Nazareth. It is possible (and appealing to lovers of historical irony) that Pantera and Joseph fought on opposite sides. As Joseph is never heard of again he may well have been killed in the battle, or have been among the 2,000 Jewish rebels crucified afterwards.
So Tiberius Iulius Abdes Pantera is indeed a possibility as Jesus’ father. The only thing we know for certain is that Mary’s husband Joseph wasn’t the father, and that Mary was already pregnant when they married. It could have been rape, or Mary may have been a wild young teen who fell for a handsome man in a uniform, even if he was part of an occupying army. It happens. |
A new breakthrough may mean genetic optimization that green NGOs won't have a reason to ban, just as they accept mutagenesis genetic engineering and copper sulfate pesticides, both of which are currently organic certified.
The new work found that thylakoids, membrane networks key to plant photosynthesis, also function as a defense mechanism to harsh growing conditions. Thylakoids contain grana, structures resembling stacked coins that expand and contract when water flows in and out, like the bellows of an accordion. The action mirrors the movement of guard cells, structures on plant leaves that act like accordion buttons, allowing carbon dioxide in and water vapor out.
The bellows-like action of the thylakoid membrane inside plant chloroplasts harmonizes the flow of electrons to power photosynthesis. A team of scientists led by ORNL theorize the thylakoid can help plants respond to stressful growing conditions such as drought. Credit: Nathan Armistead and Jacquelyn DeMink/ORNL, U.S. Dept. of Energy
These structures harmonize the flow of electrons with carbon uptake during photosynthesis. Scientists have questioned why such a complicated network exists in higher plants.
If that helps plants tolerate fluctuating conditions such as too little or too much water and sunlight, it may be a real boon for people who weren't lucky enough to be born in natural breadbaskets like France and who need all the help such progress can give.
DIABETES, WHETHER TYPE 1, 2, or even gestational, makes it more difficult to maintain good dental health. There is a reciprocal relationship between oral health and diabetes, meaning that it’s harder to keep your teeth and gums healthy if you aren’t carefully managing the diabetes, but the diabetes also becomes harder to control if you aren’t prioritizing oral health.
An Overview of the Types of Diabetes
All three types of diabetes impact oral health, but they work in different ways. Type 1 diabetes is usually diagnosed early in life, and it involves the pancreas being unable to produce insulin. Type 2 diabetes (up to 95% of cases) is usually diagnosed decades into adulthood, and it involves the body failing to use insulin efficiently to regulate blood sugar. Gestational diabetes affects some pregnant women, who become less able to regulate blood sugar during pregnancy.
What Does Blood Sugar Have to Do With Oral Health?
Sugar is very harmful to teeth and gums because it’s what oral bacteria love to eat. Sugar in the bloodstream is also a problem, which is where diabetes comes in. High blood sugar is rough on the immune system and makes it hard for the body to fight back against pathogens — including oral bacteria. It leaves diabetic patients more vulnerable to oral inflammation and tooth decay.
Gum Disease and Diabetes
Over 20% of diabetics develop some form of gum disease, ranging from the early stages of inflammation (gingivitis) to advanced gum disease (periodontitis) that threatens the teeth, the gums, and even the supporting bone. Untreated gum disease can take a toll on overall health and even become life-threatening if the bacteria reach the bloodstream.
Gum disease symptoms to watch for include chronic bad breath, the gums becoming swollen, red, and prone to bleeding, receding gums, and loosening of the teeth. Any one of these symptoms could indicate poor gum health, and diabetes increases the risk of other problems such as slower healing, worse and more frequent infections, dry mouth, enlarged salivary glands, fungal infections, and burning mouth syndrome.
Diabetes Can Complicate Orthodontic Treatment
No matter what’s causing it, gum disease can present a challenge for orthodontic treatment. Parents of kids with type 1 diabetes should take extra care to help them keep their diabetes under control and to promote good oral health. Then, if they need braces, their treatment will be able to go forward and they will be able to enjoy the benefits of properly aligned teeth.
Controlling Diabetes Leads to Better Oral Health Outcomes
Diabetes adds a complication to many elements of daily life, but it is perfectly possible to reach and maintain good oral health while diabetic. Good oral hygiene habits like daily flossing, twice-daily brushing with fluoride toothpaste and a soft-bristled toothbrush, and regular dental checkups are all essential. So is being careful with sugar intake!
The Dentist Can Help You Fight Diabetes!
Regular dental exams are essential for everyone, but especially for anyone with diabetes. The early signs of a dental problem aren’t always obvious to people who don’t work in the dental field. The sooner they can be caught by a dentist, the easier it will be to deal with them. Your physician can also work with your dentist towards the shared goal of managing your diabetes as well as your oral health. That’s why it’s so important to keep both of them in the loop! |
Here are some activities for teaching More, Less, and Same in Pre-K and Preschool.
Find more math ideas on the Math Resource Page
Pocket Chart Graph
We make several pocket chart graphs during the year. Sometimes we make a graph where children choose their favorite thing (for example, their favorite ice cream flavor). Sometimes we make a graph where children are asked a question with a yes or no answer (for example, “Do you like pizza?”). We often graph to see how many people did or did not like the book we read that day.
This is a game played with a small group of children. The group is divided into two teams. Each team has a giant game die. One child on each team tosses their die and says the amount. The group decides which die has the most dots.
Print and make your own Giant Dice at this link.
Spray paint lima beans with two colors so that they have one color on each side. Place ten beans in a cup. Children dump the beans onto a mat (I used a sheet of craft foam for the mat). They count each color to see how many beans landed on the red side and how many landed on the blue side. They compare to see which colors have the most, least, or same amount.
Regular playing cards can be used for this game or they can be made with stickers or stamps. Children play this game with one partner. Each child should have a set of cards that represent numbers 1-10, and each child’s cards should be different in some way (e.g. a different color or different picture). To play the game, each child takes the first card from their stack and places it on the table. The children determine which card has the most or same amount. The child whose card has the most wins that round and gets to keep both cards. If the cards are the same, they tie and each child keeps their own card. I have the children place the cards they win in a plastic basket so they won’t get mixed up with their other cards. At the end, the children can count to see how many cards they won, but my students seem to enjoy the game more if we don’t determine who won or lost at the end.
Ice Cube Tray Graph
We use an ice cube tray for a hands-on graph. I place several kinds of counters into a sorting tray. You can use counters of different types or all one type but different colors. Children roll a game die, determine the amount, and count out that amount of counters to place in the graph. I teach them to start at the bottom of the graph and go up the column when they place the counters. They roll the die a second time, determine the amount and place a different type of counters in the second column of the graph. Children look at the graph to determine which has the most, least, or same amount.
Block Building Game
Children roll a game die, determine the amount, and count out that many wooden cubes to stack into a tower. The die is rolled again to make a second tower. The children compare the towers to see which has the most, least, or same amount.
Don’t miss the math resource page! |
A chance fossil discovery in Montana a decade ago has led to the identification of an audacious new species of horned dinosaur, Spiclypeus shipporum, according to a study published May 18th, 2016 in the open-access journal PLOS ONE by Jordan Mallon, from the Canadian Museum of Nature, Canada, and colleagues.
The new dinosaur, Spiclypeus shipporum, is described from bones representing the skull, part of the legs, hips and backbone of an individual preserved in a silty hillside that once formed part of an ancient floodplain. While the fossil now has a scientific name, it is more commonly known by its nickname “Judith,” after the Judith River geological formation where it was found. What sets Spiclypeus shipporum apart from other horned dinosaurs is the orientation of the horns over the eyes, which stuck out sideways from the skull, and a unique arrangement of bony “spikes” that emanated from the margin of the frill–some spikes curled forward while others projected outward.
Close examination of some of its other bones may also suggest a life lived with pain. Judith’s upper arm bone (humerus) showed distinct signs of arthritis and bone infection. Despite this trauma, analysis of the annual growth rings inside the dinosaur’s bones suggested that it lived to maturity, and would likely have been at least 10 years old when it died.
There are now nine well-known dinosaur species from Montana’s Judith River Formation, some of which were also found in Alberta, while others such as Spiclypeus are unique to Montana. The authors note that none of the species have been found in more southerly states, suggesting that dinosaur faunas in western North America may have been highly localized about 76 million years ago. Jordan Mallon’s prior research has shown that such species-rich communities may have been enabled by dietary specializations among the herbivores, a phenomenon more commonly known as niche partitioning.
“This is a spectacular new addition to the family of horned dinosaurs that roamed western North America between 85 and 66 million years ago,” explains Mallon, who collaborated with researchers in Canada and the United States. “It provides new evidence of dinosaur diversity during the Late Cretaceous period from an area that is likely to yield even more discoveries.”
The name Spiclypeus is a combination of two Latin words meaning “spiked shield,” referring to the impressive head frill and triangular spikes that adorn its margins, and the name shipporum honors the Shipp family on whose land the fossil was found near Winifred, Montana by Dr. Bill Shipp.
“Little did I know that the first time I went fossil hunting I would stumble on a new species,” explains Shipp, a retired nuclear physicist who became a fossil enthusiast after moving to his dinosaur rich area of Montana. “As a scientist, I’m really pleased that the Canadian Museum of Nature has recognized the dinosaur’s value, and that it can now be accessed by researchers around the world as part of the museum’s fossil collections.” |
From Merriam-Webster dictionary:
Carbon dioxide. A heavy colorless gas CO2 that does not support combustion, dissolves in water to form carbonic acid, is formed especially in animal respiration and in the decay or combustion of animal and vegetable matter, is absorbed from the air by plants in photosynthesis, and is used in the carbonation of beverages.
Carbon. A nonmetallic chiefly tetravalent element found native (as in diamond and graphite) or as a constituent of coal, petroleum, and asphalt, of limestone and other carbonates, and of organic compounds or obtained artificially in varying degrees of purity especially as carbon black, lampblack, activated carbon, charcoal, and coke.
It should be pretty clear that carbon dioxide (CO2) and carbon are not the same things. CO2 is a colorless and odorless gas emitted when animals exhale, and it is given off when formerly living things—plants and animals—decay. Not mentioned by Merriam-Webster is that it is also generated when these formerly living things, even after millions of years (as in the case of coal and oil), are burned. Also not mentioned by the online dictionary is the fact that carbon dioxide, in addition to its role in photosynthesis, is what is called a “greenhouse gas.” That is, in conjunction with other gases, water vapor being the most dominant, it acts as a blanket that keeps heat from escaping the atmosphere and prevents the planet from freezing over. Carbon dioxide is essential for sustaining life.
Carbon, on the other hand, is a solid. It is present in all living things and things that were once living. As noted, when these once-living things that contain carbon are burned, the carbon combines with oxygen to form carbon dioxide and is emitted into the atmosphere along with the CO2 that we exhale or that comes from decaying plant life. So, while it is one of the elements that go into forming carbon dioxide, it is no more the same as carbon dioxide than hydrogen or oxygen is the same as water. Water, after all, is two atoms of hydrogen and one atom of oxygen (H2O).
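As a simple worked illustration (mine, using standard atomic masses, not something from the original piece), the combustion that links the two substances is:

$$\mathrm{C(s) + O_2(g) \longrightarrow CO_2(g)}$$

Since carbon has an atomic mass of about 12 and carbon dioxide a molecular mass of about 44, burning one tonne of carbon produces roughly 3.7 tonnes of carbon dioxide, which is one more reason the two terms should not be used interchangeably.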
So, why, in nearly all discussions of global warming, is the word “carbon” freely substituted for “carbon dioxide?”
“Carbon emissions,” “carbon taxes,” “carbon footprint,” and “cost of carbon” all use the word carbon when actually what is being referred to is carbon dioxide. Carbon cannot logically be a substitute expression for carbon dioxide any more than hydrogen or oxygen can be a substitute expression for water. In the latter case, everyone would realize that such a substitution would be silly and just bad science. And yet, when it comes to talking about carbon dioxide and carbon, the switch in terminology is made seamlessly and really without notice. In fact, it is so common to refer to CO2 as carbon that it is done regularly by people on all sides of the global warming debate and even by government agencies, as with the expression “cost of carbon,” which the EPA uses to describe its estimate of how much carbon dioxide emissions cost society.
My speculation is that the transformation of the legitimate and accurate “carbon dioxide” into the false and inaccurate “carbon” has been part of an effort, in this case a quite successful one, to obfuscate what is really being talked about. It is a tool of propaganda. Note that this has also been done with the substitution of “climate change” for “global warming,” despite the fact that what is referenced is the warming of the planet, not any general change in climate that occurs all the time.
The word carbon conjures up images, captured in Merriam-Webster’s definition, that are completely unrelated to carbon dioxide. In most people’s minds carbon is associated with everything that is black and dirty—coal, charcoal, asphalt, carbon black, etc. Of course, none of these describe CO2, which is a colorless and odorless gas. If one wants the vision of black soot being emitted from power plant smokestacks and automobile tailpipes, then CO2 emissions will just not work. But, of course, that is exactly what carbon emissions would look like. And it would hardly make sense to talk about colorless, odorless, and harmless CO2 emissions as contributing to “dirty air,” which is done regularly. But carbon? Now that’s something quite different. It’s black, it’s “sooty,” and if it were being emitted into the air as a result of heating our homes or driving our cars, it would make the air “dirty” by any standard.
In discussions of global warming, we need to take back the language for sound science. Intentionally or not, it is deceptive and misleading to refer to carbon dioxide as carbon. As noted, this is not just because they are, in fact, not the same thing but because they bring to mind two very different notions. The use of the word carbon allows special interest groups to transform the simple science into a tool of propaganda. Honest protagonists on either side of the global warming debate, particularly those who actually care about scientific accuracy, should make a concerted effort to reject substituting the word carbon for carbon dioxide. They should not only stop making the switch themselves but begin calling out others who continue the charade. |
MIT chemists have devised a way to trap carbon dioxide and transform it into useful organic compounds, using a simple metal complex.
More work is needed to understand and optimize the reaction, but one day this approach could offer an easy and inexpensive way to recapture some of the carbon dioxide emitted by vehicles and power plants, says Christopher Cummins, an MIT professor of chemistry and leader of the research team.
“Ideally we’d like to develop carbon-neutral cycles for renewable energy, to get carbon dioxide out of the atmosphere and avoid pollution,” Cummins says. “In addition, since producers of oil have lots of carbon dioxide available to them, companies are interested in using that carbon dioxide as an inexpensive feedstock to make value-added chemicals, including things like polymers.”
The new reaction transforms carbon dioxide into a negatively charged carbonate ion, which can then react with a silicon compound to produce formate, a common starting material for manufacturing useful organic compounds. This process, which the researchers describe in the journal Chemical Science, relies on a very simple molecular ion known as molybdate — an atom of the metal molybdenum bound to four atoms of oxygen.
Scientists have long sought ways to convert carbon dioxide to organic compounds — a process known as carbon fixation. Noble metals such as ruthenium, palladium, and platinum, which are relatively rare, have proven effective catalysts, but their high price makes them less attractive for large-scale industrial use.
As an alternative, chemists have tried to make abundant metals, such as copper and iron, behave more like one of these powerful catalysts by decorating them with molecules that alter their electronic and spatial properties. These molecules, known as ligands, can be very elaborate and usually contain nonmetallic atoms such as sulfur, phosphorus, nitrogen, and oxygen.
With most of those catalysts, the carbon dioxide binds directly to the metal atoms. Cummins was curious to see if he could design a catalyst where the carbon dioxide would bind to the ligand instead. “That would set the stage for chemical transformations of carbon dioxide that might be different from what people had seen before,” he says.
After finding some success with metal complexes consisting of either niobium or titanium bound to ligands consisting of large organic molecules, Cummins decided to try something simpler, without unwieldy ligands. “It occurred to me that there was no reason why these bulky organic ligands would be a requirement for carbon dioxide binding. I wanted to see if we could find something really simple that would exhibit similar reactivity,” he says.
A simple catalyst
Molybdate, which is relatively abundant and stable in air and water, seemed like it could fit the bill. A simple tetrahedron with four atoms of oxygen bound to a central molybdenum atom, molybdate is commonly used as a source of molybdenum, which can catalyze many types of reactions. Until now, no one had studied its interactions with carbon dioxide.
Working with molybdate dissolved in an organic solvent that also contained dissolved carbon dioxide, the researchers found that the ion could bind to not one, but two molecules of carbon dioxide. The first carbon dioxide attaches irreversibly to one of the oxygen atoms bound to molybdenum, creating a carbonate ion.
A second molecule of carbon dioxide then binds to another oxygen atom, but this second binding is reversible, which could enable potential applications in carbon sequestration, Cummins says. In theory, it could allow researchers to create a cartridge that would temporarily store carbon dioxide emitted by vehicles. When the cartridge is full, the carbon dioxide could be removed and transferred to a permanent storage location.
Another possible application would be transforming the carbon dioxide to other useful compounds containing carbon. Cummins and his colleagues showed that the trapped carbon dioxide could be converted to formate by treating silicon-containing compounds called silanes with the molybdate complex.
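As a rough sketch of the first, irreversible binding step described above (the product formula here is my own illustrative notation, not taken from the paper), the carbonate-forming reaction can be written as:

$$\mathrm{MoO_4^{2-} + CO_2 \longrightarrow [MoO_3(CO_3)]^{2-}}$$

The second CO2 then adds reversibly to another oxygen atom of the same complex, which is what opens the door to temporary storage.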
“This is a really elegant addition to the carbon dioxide fixation literature because it shows that some really beautiful transformations are achievable without an elaborate ligand system,” says Christine Thomas, an associate professor of chemistry at Brandeis University who was not involved in the research.
More research is needed before the reaction can become industrially useful, Cummins says. In particular, his lab is investigating ways to perform the reaction so that molybdate is regenerated at the end, allowing it to catalyze another reaction.
“The big advance of the present work is just showing that molybdate takes up carbon dioxide in the way that it does, and illustrating in detail the structures that are produced by addition of carbon dioxide to molybdate,” Cummins says. “Hopefully it’s going to be a little bit thought-provoking and cause people to take a step back and consider just what we’re going to need to do.”
The paper’s lead author is graduate student Ioana Knopf; other authors are former visiting student Takashi Ono, former postdoc Manuel Temprado, and recent PhD recipient Daniel Tofan. The research was funded by the Saudi Basic Industries Corporation; the Spanish Ministry of Education, Culture and Sport; the Spanish Ministry of Economy and Competitiveness; and the National Science Foundation. |
Creative Spaces Preschool follows a child-centered/play-based program where children choose activities based on their current interests. Teachers encourage the kids to play, facilitating social skills along the way. The play-based classroom is broken up into sections, which include:
Small group areas
Library and reading area
Space with blocks and other toys
Plus other areas
It may seem as though children are just playing; however, they are learning valuable skills, including important social skills and cooperation with others, awareness of signs and labels (as most items are labeled), and early math.
Creative Spaces is built into a framework – inside this framework we try to introduce flexibility, fluidity and spontaneity, for the children and also for the staff.
Kids get firm boundaries, but lots of freedom within those boundaries. When a child plays, he constructs himself.
Creative Spaces – a place for energetic discovery in which children are free to exercise their five senses, their muscles, their sensations, and their sense of physical space.
Why can’t penguins fly?
April 1, 2009
Well, in a sense they really do fly, only through the water, not through the air. Penguins have strong wings and strong pectoral muscles to power them. Their bodies are streamlined as if for flight, so they still cut cleanly through the water. But water is much thicker than air, so their wings are shorter and stiffer than a normal bird’s wings. In fact, penguins are the only birds that are unable to fold their wings. Their wing bones are fused straight, making the wing rigid and powerful, like a flipper.
By the same token, penguins aren’t nearly as concerned about being light as birds that fly through the air. To dive deep, to catch fast-swimming prey, and to survive frigid temperatures, their bodies have huge fat supplies, heavy muscles, and densely packed feathers. There’s no way they could fly with such short wings and heavy bodies.
Penguins are an interesting example of specialization versus compromise. By giving up on flight they’ve been free to evolve bodies that perform superbly underwater. The similar-looking murres and guillemots of the Arctic can still fly, just not as well as some other birds; and they can also swim, though not as well as penguins.
Age Range: 7 to 11
Friendly Facts: A Fun, Interactive Resource to Help Children Explore the Complexities of Friends and Friendship ($60.95, Softcover)
Making and keeping friends doesn't come easy for children with autism spectrum disorders. Many children need to be taught a range of strategies directed at expanding their social understanding skills, such as reading facial expressions and body language.
Friendly Facts, an interactive workbook aimed at children ages 7-11, addresses these challenges by breaking down the complex concept of making friends into simpler ideas. Through fun, engaging activities, children gain real-life knowledge of the major "secrets" of making and keeping friends.
By gaining the foundation for making and keeping friends at a crucial age, children are better prepared for successfully interacting with others for the rest of their lives. |
Working together to achieve success for every child
Greetings! – It’s our 9th edition!
We write a regular newsletter keeping you up-to-date and informed about your child’s learning. This will provide you with a good understanding of happenings in Year 2 and how you can support these learning experiences.
Let’s Inquire – How We Express Ourselves
Central Idea: People express themselves through celebrations.
An inquiry into:
*The reasons why people celebrate (causation)
*How celebrations are similar and different (connection)
*Different ways people express celebrations (perspective)
Transdisciplinary Learning – It’s all connected!
Since returning from the holidays, Year 2 students have begun exploring how we express ourselves through celebrations. They thought about what the central idea means by using dictionaries to uncover word meanings. Some students re-wrote the central idea to make it easier to understand: “People share their feelings or ideas by doing something special and fun for important events or days.”
Through stories such as “The Sandwich Swap” and “When Pigasso Met Mootise”, classes are also beginning to understand what it means to be open minded…willing to listen to and consider other people’s ideas and points of view.
Students experienced some celebrations first-hand by taking part in mini celebrations in different classrooms. Whether walking down the wedding aisle, mixing spooky brews or playing party games, students inquired about why and how celebrations take place.
Students are enjoying trying out traditions linked to different celebrations.
Math is heavily linked to our Unit of Inquiry learning in the way of data handling. Students have begun to develop the skills of collecting and organising data by surveying classmates about favourite celebrations, popular traditions, celebrations that were important for their families, etc.
Outside of the Unit of Inquiry, students also explored math language related to position and direction through coding. Students created written algorithms (e.g. Go forward 4 squares, etc.) to navigate across a map, guide their classmates through an obstacle course and send a Beebot (bee robot) along student-made paths.
As a grade level, we are working on how to find the big ideas and important details when reading either fiction or nonfiction. These skills will particularly help them throughout their learning as they read for information to find out more about different celebrations.
In writing, students used what they have learned so far about sentences to write learning reflections and to write recounts of their holiday highlights. They were encouraged to zoom in so they could add interesting details about the most significant events.
Students use cooperation, mathematical language and problem solving to program and debug blocks, friends and beebots.
Suggestions for supporting learning at home
*Discuss the celebrations that are most important to your family. Why do you celebrate them? Why are they important to you? What do you do to celebrate and why?
*Visit your local library and look for fiction and non-fiction books about celebrations.
*Cook or create something for your family’s next celebration.
*Look for examples of real life data handling. Where do you see surveys, tallies, graphs,etc.? How are they used?
Friday, Jan. 17th: Chinese New Year assembly (wear red or traditional lunar new year clothes)
Friday, Jan. 17th: School Disco
Thursday, Jan. 23rd: Last day before mid-term break (full day of school)
Jan. 24th-31st: Mid Term Break
Wednesday, Feb. 5th: Registration for school fair auditions
Friday, Feb. 14th: CPD (no school)
Please remember that February 3rd IS NOT a CPD day (as printed in the diary).
Students should attend school as usual.
If you have questions or concerns, contact your child’s teacher. We welcome feedback and look forward to working with you to achieve success for every child.
The Year 2 Team |
An early hominid named Little Foot has been dated to 3.67 million years old, making the timeline of human evolution even more complicated.
Little Foot was found in 1980 in a cave at Sterkfontein in South Africa, an hour’s drive from Johannesburg, and identified as hominid in 1994. According to co-author Ronald Clarke, a professor at the Evolutionary Studies Institute at the University of Witwatersrand and Little Foot’s discoverer, the name came from a play on the mythical Big Foot, and because four foot bones were discovered first.
The dating would muddle the standard view of human evolution. Based on the results, published in Nature on April 1, Little Foot is a million years older than any hominid found in South Africa, suggesting far more geographic diversity for our oldest ancestors. Generally, it is thought that humans derived from the species Australopithecus afarensis, remains of which have been found in Kenya and Ethiopia (the partial skeleton known as “Lucy” being the most famous). Little Foot’s dating, if accurate, challenges that narrative.
“It means that later hominids did not only derive from Australopithecus afarensis,” Clarke says. Little Foot was morphologically different from the hominids of East Africa living at the same time. “There could well have been other species in different parts of Africa living at the same time but not yet discovered.” Clarke believes Little Foot, a mature female about 1.25-1.5 meters tall, fell into the cave. He says she is part of a new species known as Australopithecus prometheus.
It took 16 years to get the entire skeleton removed from the rock and sand; it was embedded in concrete-like sediment and sealed in flowstone (stalagmite) in the cave. “The bones were broken up and scattered within the deposit and were in very soft, fragile condition,” says Clarke.
Sterkfontein has been the site of archeological digs since the mid-1930s. But because of the region’s difficult geology, dating fossils found there has been fraught with controversy. In 1999, scientists estimated that Little Foot lived about 3.3 million years ago, but dating of some of the calcium flowstones surrounding the fossil suggested it may be closer to 2.2 million years old.
“The date was questioned because it was always possible the fine sand that came into the cave came from elsewhere in the cave … and was washed in,” says Darryl Granger, a co-author of the Nature paper and a professor of earth, atmospheric and planetary sciences at Purdue University in Indiana.
The recent dating entailed a new technology that Granger developed. Called isochron burial dating, the technique uses the radioactive decay of isotopes of aluminum-26 and beryllium-10 found in the quartz in the rocks, as well as samples of rocks and sand from around a fossil. The method can determine whether a sample has been undisturbed since its burial with the fossil, or whether it was deposited later.
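A highly simplified sketch of the idea behind burial dating follows; it is not the isochron regression actually used in the study, and the half-lives and production ratio are approximate, illustrative values:

```python
import math

# Simplified burial-dating sketch: cosmogenic 26Al and 10Be are produced in
# quartz at a roughly fixed ratio while the rock is exposed at the surface.
# Once the sediment is buried in a cave, production stops and both isotopes
# decay, but 26Al decays faster, so the 26Al/10Be ratio falls with burial time.
# All constants below are approximate, illustrative values.

HALF_LIFE_AL26 = 0.717e6   # years (approximate)
HALF_LIFE_BE10 = 1.387e6   # years (approximate)
SURFACE_RATIO = 6.75       # assumed 26Al/10Be production ratio in quartz

lam_al = math.log(2) / HALF_LIFE_AL26
lam_be = math.log(2) / HALF_LIFE_BE10

def burial_age(measured_ratio, surface_ratio=SURFACE_RATIO):
    """Return burial age in years from a measured 26Al/10Be ratio,
    assuming simple surface exposure followed by complete burial."""
    return math.log(surface_ratio / measured_ratio) / (lam_al - lam_be)

# Example: the ratio that this simple model predicts after 3.67 Myr of burial.
t = 3.67e6
ratio = SURFACE_RATIO * math.exp(-(lam_al - lam_be) * t)
print(f"ratio after burial: {ratio:.2f}")
print(f"recovered age: {burial_age(ratio) / 1e6:.2f} Myr")
```

In this toy model a measured 26Al/10Be ratio of about 1.2 corresponds to a burial age of roughly 3.7 million years, the order of magnitude reported for Little Foot; the published isochron method additionally checks that the samples were undisturbed.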
Despite the use of isochron burial dating, some still have doubts. “I would like to see biochronological validation of these new age estimates,” says Tim White, a professor of human evolutionary studies at the University of California at Berkeley. White sees further work ahead even if the results hold. “Their bearing on the significance of the hominid remains will remain unknown until anatomical studies have been completed,” he says. |
Early Years Foundation Stage / Popular Early Years Themes teaching resources
4,465 Free teaching resources: lesson plans, worksheets, teaching ideas and much more.
Last updated: 05 December 2013
Whether you're looking for something on superheroes, activities about animals or worksheets based around weather, we're confident one of our free teaching resources will be right for your class.
Highly rated resources
- Addition and Subtraction - (Activity) Treasure Hunt Board Game by bevevans22
- Colour and Pattern - (Game, Puzzle, Quiz) The Rainbow Fish Maths Game by bevevans22
- Counting - (Activity) mathematics - song booklet by nikkijc
- Colour and Pattern - (Activity) Ourselves by bevevans22
- Addition and Subtraction - (Activity) Jack and the Beanstalk Maths Game by bevevans22
- Homes Colouring and Counting Cards by lbrowne
- Deer Family Counting by lbrowne
- My Pet Read and Colour by lbrowne
- Animated "Shapes Song For Kids" | Children Educati by preeti_nair
- What jobs do Santa's Elve's do? (Christmas/Winter) by Joe Doe
- Tracing Sheets for Fine Motor Physical Development by oceanic-dolphin |
Lesson 1.6: Signs of Comparison
Review - Signs of Comparison
The greater than and less than symbols are signs of comparison just like the equals sign. The greater than and less than symbols are two-cell symbols. A blank space should be left before and after the greater than or less than symbol. If a number follows the greater than or less than symbol, it must be preceded by the numeric indicator.
Inequalities show differences between numbers and indicate which one is larger or smaller. The symbol for greater than, dots four six dot two, and the symbol for less than, dot five dots one three, are used to show inequalities. The cell containing only one braille dot is always pointing to the value that is less than the other. For example, in "nine is greater than four", the single dot two is pointing toward the lesser value, four. Likewise, in "four is less than nine", the single dot five is also pointing toward the lesser value, four.
nine greater than four
four less than nine
nineteen greater than ten
twenty five less than thirty nine |
This beautifully illustrated book introduces students to Viola Desmond, an African Canadian businesswoman who challenged racial segregation in Nova Scotia. Viola was arrested when she refused to move out of the whites-only section of the Roseland Theatre in New Glasgow. Upon her release she began a courageous fight against prejudice and inequality. This story about an ordinary citizen defying racism is incredibly inspiring as it teaches students that any individual can help end social injustice in their community.
This book complements units that explore the meaning of citizenship and human rights. After reading the book, students could develop their own charts of fundamental human rights. Older students could further research Viola Desmond and other Canadian civil liberties advocates and prepare a school awareness campaign that informs others about these leaders of social reform. Students could also compare the events of this book to a form of segregation in their own lives: bullying. The class could follow the lead of Viola by taking a stand against bullying with activities such as developing a peer buddy system or establishing a school diversity club.
Tomato blight, or late blight disease (Phytophthora infestans), is a major issue for growers across the world. It can destroy crops, and by the time it has infected your plants it is too late to grow any more, so your entire harvest is lost. Worse, late blight also affects potatoes, another crop commonly grown at home.
How To Spot Tomato Blight
Tomato blight cannot be spotted before it has infected and started killing your plants. It is a nightmare of a disease and can quickly ravage your plants, destroying all of your hard work. Tomato blight is a fungus-like (oomycete) disease that spreads rapidly through the fruit and foliage of tomatoes (and potatoes) in wet weather. Initially, it browns the leaves and stems but eventually causes them to collapse and decay.
Green fruit develop brown patches when they are infected with blight and ripe fruit decays very quickly. Even if you pick the fruit immediately, it will still rot very quickly.
How Tomato Blight Spreads
Tomato blight is quite common in outdoor grown tomatoes, but less common in those grown in greenhouses. This year, my outdoor tomatoes succumbed to blight in the middle of August, but by the end of September, those in the greenhouse were still going strong with no sign of blight.
This disease spreads by spores in water, which is why you commonly see it in damp or wet weather. It spreads on the wind and from water splashes. The spores can travel over 30 miles on the wind, which is why even the most remote gardener can find their plants infected with this disease. It can start infecting plants from around June in warmer areas, but it is usually at its worst in late July and August, though it can still be infecting plants into September.
Avoiding Tomato Blight
As blight is spread by the wind, there is very little you can do to prevent it. Unfortunately, if you are going to get it, then you are going to get it. However, there are some things you can do that will minimise the risk of it killing all of your tomato plants.
Firstly, if you can, grow your tomatoes in a greenhouse. As they can’t get rained on in a greenhouse, blight struggles to infect indoor grown plants. Just make sure, and this applies to outdoor grown tomatoes too, that there is sufficient spacing between your plants so the air can circulate and the plants can dry out. Also, be careful you do not bring the spores into the greenhouse on damp clothing or footwear, so avoid going to your greenhouse after visiting your outdoor tomatoes and potatoes. I always deal with my greenhouse first when visiting my allotment and then the rest of the plot so that I do not accidentally bring any pests or diseases into the greenhouse.
Water splashing helps to spread blight, so water at the base of your plants, avoiding getting the leaves wet and keep the watering can close to the ground to minimise splashing. Water first thing in the morning so the plants have plenty of time to dry off during the day and are not damp over night.
Feed your plants with a high potassium feed or a dedicated tomato feed rather than a generic feed high in nitrogen. High nitrogen fertilisers encourage leaf growth which increases the chance of blight.
As the plant starts to grow and produce tomatoes, remove the lower leaves so they cannot be splashed and remove some of the higher leaves to improve air circulation. If you are growing indoors, keep the plants well ventilated and do not allow water to pool, particularly near the plants.
Removing (and destroying) infected plant material at the first signs of infection will help to slow the progress of this disease, but it will not stop it as the rest of the plant is likely to be infected.
A 3-4 year crop rotation plan can help to reduce the risk of infection as it prevents the spores building up in the soil.
Avoid planting tomatoes outdoors anywhere near potatoes as blight infects both crops. Plant your tomatoes as far away from your potatoes as you can to reduce the risk of the plants infecting each other. Check your plants regularly from the start of summer for blight and immediately remove and destroy any infected plants.
Outdoor tomatoes can be covered with a tarp or other plastic roof to stop the rain from hitting them. This will reduce the risk of infection by reducing water splashes. Mulching around the base of the tomato plants will also prevent splashing so can help reduce the risk of infection. Make sure your tomato plants are not trailing along the ground and are growing up canes.
Any infected plant material should be removed from your vegetable garden and destroyed. This includes any fallen leaves or tomatoes. Do not compost it as you run the risk of the spores living in the compost.
Early tomatoes, such as cherry varieties, ripen quicker than the larger varieties such as beefsteaks. This means they are less likely to be infected with blight because they are usually harvested before blight has taken hold.
Other Types of Blight
As well as late blight, which you now know all about, there is also early blight. Symptoms of this disease appear just after the fruits start to form, with small, brown lesions on the lower leaves. These grow and take on a target-like shape, and the leaf starts to yellow, then brown, and eventually falls off the plant. Early blight does not affect the fruit, but does kill the leaves, meaning the tomatoes then suffer from sun scald.
Septoria Leaf Spot is similar to early blight, though there tend to be lots of small brown spots rather than just a few. It does not normally affect the fruit either.
Both of these types of blight will survive the winter in the ground whereas late blight cannot overwinter in the soil. It is important you practise crop rotation and remove all dead plant material at the end of the growing season.
Treating Tomato Blight
Unfortunately, there are no chemical treatments available to home growers to treat blight. While some people spray their potatoes with a copper-based fungicide, this does not kill the blight and only slows it down. Nowadays, most copper-based fungicides have been withdrawn from sale.
Blight Resistant Tomato Varieties
While there are some blight resistant tomato cultivars on the market, you need to remember that the important word here is ‘resistant’. They are not immune, but resistant varieties can give the plants the time they need to mature, or at least produce green tomatoes that are usable. Blight tolerant cultivars are not as protected as resistant varieties, but will put up a good fight against blight.
Some blight resistant varieties include:
- ‘Berry’ – a cordon tomato producing very sweet cherry tomatoes that ripen early plus has some resistance to blight.
- ‘Consuelo’ – a very blight resistant F1 variety producing large cherry tomatoes. Each plant can produce up to 150 tomatoes.
- ‘Crimson Crush’ – an F1 cultivar producing a good crop of large, very tasty tomatoes on indeterminate plants with excellent blight resistance.
- ‘Fandango’ – a cordon tomato producing medium sized fruits with a very good flavour. It tolerates blight and is resistant to both fusarium and verticillium wilt.
- ‘Fantasio’ – an F1 cordon tomato that produces a heavy crop of medium sized fruits. It tolerates blight and is resistant to both fusarium and verticillium wilt.
- ‘Latah’ – a bush tomato producing large cherry tomatoes very early in the season.
- ‘Legend’ – a beefsteak variety with few seeds and excellent flavour that has some blight tolerance. This variety has been awarded the RHS Award of Garden Merit.
- ‘Lizzano’ – an F1 variety that is excellent in containers or hanging baskets due to its trailing habit. The round, cherry fruits are very sweet and the plant has good blight tolerance.
- ‘Losetto’ – an F1 variety with high blight tolerance that produces lots of red, sweet, cherry tomatoes.
- ‘Mountain Magic’ – a great F1 variety to grow that is resistant to cracking, late blight and early blight. It produces a good crop of medium sized, sweet tomatoes.
- ‘Red Alert’ – a very early tomato plant with few leaves, producing lots of small, sweet cherry tomatoes.
- ‘Solito’ – a semi-determinate F1 variety, specifically bred for blight resistance. It produces very sweet orange coloured fruits and is ideal for containers. |
I'm waiting for the results of this test.
Here's an explanation I found on the web:
"The tilt test is a test that assesses someone’s response to orthostatic stress. Orthostatic stress is the fancy medical term for standing upright. The tilt test is also known as a tilt table test because it involves being basically strapped to a table that can tilt to different angles. A lot of the interest in tilt table tests originated from the air force. Years ago it was reported that up to 25% of those in the air force would pass out if they were in a straight upright standing position for a prolonged period of time. It was known that the tilt test could induce passing out in a subset of people and so it became a tool with which to assess people at risk for this. Nowadays the tilt test is used widely to diagnose syncope. Syncope is the medical term for passing out.
- In the tilt test, patients are strapped to a tilt table. It is basically a table capable of being swiveled to different angles. At 90 degrees a person would be upright and at 0 degrees they would be laying completely flat.
- The patient is awake and alert during the test. The angle of table tilt chosen is usually between 60 to 80 degrees, so upright but not all the way.
- The test involves basically laying there and not moving to see what happens. Heart rate and blood pressure are monitored closely throughout."
I did not receive any spray or medication. |
Computers easily perform calculations that would be hardly possible for humans. When robots with computerized brains appear in the movies, they are often described as having a cold and cruel character. We may have such an image because the calculation is made by digital processing in which everything is divided into one (1) or zero (0). Would you be amazed to learn that such a cold computer could be surprised?
The main parts of a computer are so-called computer chips such as memory and processor. In appearance, the chip is a shiny, thin metal plate about 1cm square. Strictly speaking it is not metal, but it appears metallic and very beautiful. The photo is a rare opportunity to see the interior of the chip, which is usually sealed.
A computer chip is also called a "large-scale integrated circuit." A large number of tiny switches, or "transistors," are integrated in the chip. The switch has two states, ON and OFF, which is why it goes well with digital. Computer chips process information by switching ON and OFF rapidly in accordance with the precisely drawn up design diagram.
A spacecraft carries a number of computer chips that process information from onboard sensors and commands from the ground. The chips should operate according to their design diagram but sometimes the chips do not function correctly. The first anomaly occurred on a certain satellite in 1975. Memory data deviated from the values specified in the design. What happened in that computer chip?
The universe is a vacuum, with no gravity and perhaps no other human life. Thus, the word suggests a place where nothing exists. In fact, however, something does exist and, above all, cosmic rays stand out.
Cosmic rays are in fact high-energy particles flying through the universe. The word "ray" may make you picture shining lines, but "imagine" is the right word, as they are far too tiny to see. This is because they are atomic nuclei, among the smallest constituents of matter. Cosmic rays, or atomic nuclei, are accelerated by the huge energy generated by celestial activity.
Draw a 1cm square on a piece of paper. If you were to go out into the universe carrying it, 100,000 cosmic rays would pass through the square within one second. Computer chips are exposed to such cosmic-ray showers.
Computer Chips are Amazed
What happens when a chip is struck by cosmic rays? Fig. 1 shows such a moment, and traces electrical signals from a computer chip like an electrocardiogram. When cosmic rays strike the chip it emits a large signal output. This is the moment the chip is amazed.
The above record was obtained by selecting a single transistor switch aligned on the chip. The switch was kept in the OFF state --- current does not flow in this state ---, set in a measurement system so that current flows when the switch is changed to ON. Naturally, the value shown on the monitor before the cosmic rays struck was zero, but when a cosmic ray strikes, large current flows. The experiment was performed by the Takasaki Advanced Radiation Research Institute, Japan Atomic Energy Agency. They produced artificial cosmic rays and bombarded the chip. The data were provided by the experimental team at the Institute.
Look carefully at the horizontal axis. The duration when current went through the switch (i.e., the period that the switch was amazed) was very short, one nano-second or one-billionth of a second. It is too short to measure with a stopwatch, so the team had to use an ultra-high speed digital oscilloscope. "Very fast" is, however, based on human criteria. It is not so fast for chips that operate at high-speed. Just like humans, shocked computer chips sometimes cannot move. There are cases where chips lost their memory because of a powerful shock. In fact, the chip malfunction onboard the satellite in 1975 was caused by this phenomenon. |
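As a software-level analogy (my own illustration, not from the article), a single-event upset amounts to one bit of a stored value being inverted, which is enough to make memory deviate from its designed contents:

```python
import random

# Minimal sketch of a single-event upset: a stored value whose bits are all
# as designed until one bit is flipped by a particle strike.

def flip_random_bit(value: int, width: int = 8) -> int:
    """Return value with one randomly chosen bit inverted."""
    bit = random.randrange(width)
    return value ^ (1 << bit)

stored = 0b01100101            # the value the design says should be in memory
upset = flip_random_bit(stored)

print(f"before strike: {stored:08b} ({stored})")
print(f"after strike:  {upset:08b} ({upset})")
```

Real spacecraft electronics guard against exactly this kind of upset with error-correcting memory and redundant logic, which is why characterizing these nanosecond transients matters.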
Research Finds “Tunable” Semiconductors Will Allow Better Detectors, Solar Cells
One of the great problems in physics is the detection of electromagnetic radiation – that is, light – which lies outside the small range of wavelengths that the human eye can see. Think X-rays, for example, or radio waves.
“This technology will also allow dual or multiband detectors to be developed, which could be used to reduce false positives in identifying, for example, toxic gases,” said Unil Perera, a Regents’ Professor of Physics at Georgia State University. Perera leads the Optoelectronics Research Laboratory, where fellow author and postdoctoral fellow Yan-Feng Lao is also a member. The research team also included scientists from the University of Leeds in England and Shanghai Jiao Tong University in China.
To understand the team’s breakthrough, it’s important to understand how semiconductors work. Basically, a semiconductor is exactly what its name implies – a material that will conduct an electrical current, but not always. An external energy source must be used to get those electrons moving.
But infrared light doesn’t carry a lot of energy, and won’t cause many semiconductors to react. And without a reaction, there’s nothing to detect.
Until now, the only solution would have been to find a semiconductor material that would respond to long-wavelength, low-energy light like the infrared spectrum.
But instead, the researchers worked around the problem by adding another light source to their device. The extra light source primes the semiconductor with energy, like running hot water over a jar lid to loosen it. When a low-energy, long-wavelength beam comes along, it pushes the material over the top, causing a detectable reaction.
The new and improved device can detect wavelengths up to at least the 55 micrometer range, whereas before the same detector could only see wavelengths of about 4 micrometers. The team has run simulations showing that a refined version of the device could detect wavelengths up to 100 micrometers long.
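A quick back-of-the-envelope check (my own, not from the paper) of why long-wavelength light carries so little energy per photon uses E = hc/λ:

```python
# Photon energy per wavelength, in electronvolts.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

def photon_energy_ev(wavelength_um: float) -> float:
    """Return the energy of a photon of the given wavelength (micrometers) in eV."""
    return H * C / (wavelength_um * 1e-6) / EV

for wl in (0.55, 4, 55, 100):   # visible green, plus the infrared ranges mentioned
    print(f"{wl:>6} um -> {photon_energy_ev(wl):.3f} eV")
```

This gives roughly 2.3 eV for visible green light but only about 0.02 eV at 55 micrometers, which is why a conventional detector needs the extra priming light source to respond at such long wavelengths.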
Edmund Linfield, professor of terahertz electronics at the University of Leeds, whose team built the patterned semiconductors used in the new technique, said, “This is a really exciting breakthrough and opens up the opportunity to explore a wide range of new device concepts including more efficient photovoltaics and photodetectors.”
Perera and Lao have filed a U.S. patent application for their detector design.
“Tunable hot-carrier photodetection beyond the band-gap spectral limit” by Yan-Feng Lao, A.G. Unil Perera, L.H. Li, S.P. Khanna, E.H. Linfield and H.C. Liu is in the May issue of Nature Photonics.
The work was supported by the U.S. Army Research Office, the U.S. National Science Foundation, the UK Engineering and Physical Sciences Research Council, and the European Research Council Advanced Grant “TOSCA.” |
Cataract Treatment Surgery (IOL)
What is a cataract?
A cataract is a loss of transparency, or clouding, of the normally clear lens of the eye. As one ages, chemical changes occur in the lens that make it less transparent. The loss of transparency may be so mild that vision is hardly affected, or so severe that no shapes or movements are seen, only light and dark. When the lens gets cloudy enough to obstruct vision to a significant degree, it is called a cataract. Glasses or contact lenses cannot sharpen your vision if a cataract is present.
The most common cause of cataracts is aging. Other causes include trauma, medications such as steroids, systemic diseases such as diabetes and prolonged exposure to ultraviolet light. Occasionally, babies are born with a cataract.
Reducing the amount of ultraviolet light exposure by wearing a wide-brim hat and sunglasses may reduce your risk for developing a cataract, but once developed there is no cure except to have the cataract surgically removed. The time to have the surgical procedure is when your vision is impaired enough that it interferes with your lifestyle.
Cataract surgery is a very successful operation. One and a half million people have this procedure every year and 95% have a successful result. As with any surgical procedure, complications can occur during or after surgery and some are severe enough to limit vision. But in most cases, vision, as well as quality of life, improves.
Your eye works a lot like a camera. Light rays focus through your lens on the retina, a layer of light sensitive cells at the back of the eye. Similar to film, the retina allows the image to be "seen" by the brain. Over time the lens can become cloudy and prevent light rays from passing clearly through the lens. This cloudy lens is called a cataract.
The typical symptom of cataract formation is a slow, progressive, and painless decrease in vision. Other changes include: blurring of vision; glare, particularly at night; frequent eyeglass prescription change; a decrease in color intensity; a yellowing of images; and in rare cases, double vision.
Ironically as the lens gets harder, farsighted or hyperopic people experience improved distance vision and are less dependent on glasses. However, nearsighted or myopic people become more nearsighted or myopic, causing distance vision to be worse. Some types of cataracts affect distance vision more than reading vision. Others affect reading vision more than distance vision.
Intraocular Lenses (IOLs)
An intraocular lens (IOL) is a tiny, lightweight, clear plastic disk placed in the eye during cataract surgery. An IOL replaces the focusing power of the eye's natural lens.
The lens of the eye plays an important role in focusing images on the retina. If the lens loses its clarity, as it does when a cataract develops, light rays do not focus clearly and the image one sees is blurry. Glasses or contact lenses cannot sharpen vision if a cataract is present.
The only treatment for a cataract is to remove the lens and implant an IOL. Intraocular lenses have many advantages. Unlike contact lenses, which must be removed, cleaned, and reinserted, the IOL remains in the eye after surgery. Rapid evolution of IOL designs, materials, and implant techniques have made them a safe and practical way to restore normal vision after cataract surgery.
Traditional intraocular lenses typically correct distance vision but don’t allow correction for astigmatism or reading vision. We offer specially designed lenses (Toric IOL’s) to correct astigmatism. This can provide significant improvement in uncorrected vision and better visual quality.
Multifocal / Accommodating IOLs
Also available to our patients are Multifocal intraocular lenses which correct both distance and near vision. This decreases, and in many cases eliminates, the need for eyeglasses after surgery.
The Cataract Procedure
The patient is placed in a reclined position and relaxed with an intravenous sedative. Topical numbing drops are administered to the operative eye and a small instrument is placed between the eyelids so the patient does not have to worry about blinking during the procedure. A micro incision is made allowing the surgeon to use delicate ultrasonic technology to gently remove the cloudy cataract from your eye. This process is known as "phacoemulsification". The back portion of the lens capsule is left in place and carefully polished for clarity. The remaining lens capsule supports the small foldable intraocular lens which is inserted through the micro incision. The crystal clear intraocular lens is placed where your cataract was once located to provide you with improved vision. Because the micro incision is self sealing, no sutures are typically needed. Your surgery usually requires 15-20 minutes and there is no patching afterwards. State-of-the-art cataract surgery as provided by The Eye Clinic, Inc. surgeons typically requires no needles, no stitches, and no patches. The procedure is painless and usually patients are back to their normal activities the next day. |
|MadSci Network: Earth Sciences|
Quicksand is found in many parts of the US. Places that I am familiar with include New Jersey, the coast of North Carolina, and many areas in the Southeast (particularly Florida). In general, however, quicksand can occur anywhere where two conditions are satisfied: sand and a source of rising water. The sand can come from bedrock, alluvial (river) deposits, glacial deposits, or beaches. The water usually comes from springs or other types of groundwater flow. Basically, you need sufficient hydraulic pressure on the water to drive it up into the sandy deposit. Really flat areas tend not to have quicksand because there isn't sufficient pressure to force the water to form springs, while steep areas tend to generate runoff that forms rivers. Quicksand can often be found in areas of rolling topography where the subsurface material can transmit water (limestone and dolostone do this well). As I'm sure you may have read or seen, quicksand can also form in swampy or wet areas, especially if that swamp is fed by springs.

Another interesting way that quicksand or even "quicksoil" can form is during earthquakes. In the same way that rising water can agitate sand grains and cause the sand to lose its cohesion, vibrations in an earthquake can shake up wet sand or soil, causing it to become liquid. A particularly vivid example of this is thought to have occurred in Port Royal, Jamaica in 1692 (see the article by Kruszelnicki below). Buildings and people were sucked under the earth as the sandy spit where the town was located liquefied.

If you want to know more about quicksand, I suggest you locate the following articles: "Quicksand" by Gerard Matthes in the June 1953 issue of Scientific American, page 97, and "The Earth Did Swallow Them Up" by Karl Kruszelnicki in the December 21/28th issue of New Scientist, pages 27-29.
10 Common Medical Terms Everyone Should Know
To Get a Medical Interpreter Certification, You Have To Understand The Medical World
Medical interpreters aren’t just good conversationalists, they are also extremely knowledgeable about medical concepts and terminology. They have to be in order to perform their job correctly.
One of the necessary qualifications for gaining a medical interpreter certification is to be familiar with, and know how to explain, common medical terms. Medical interpreter training programs will introduce these to you; however, in order to fully learn them, it’s best to review and study them often.
The more you encounter certain terminology in everyday practice or work experience, the more comfortable you will be using it as a medical interpreter. However, a basic knowledge of common medical vocabulary will be a good base when beginning a medical interpreter certification program.
Below, we’ve listed 10 common medical terms to help you prepare to become a medical interpreter.
The best part? These concepts are beneficial to know, even if you are not an interpreter, as they can potentially help you in future medical situations!
10 Medical Terms for Interpreters and Everyday Use
- Integumentary System: an organ system to protect the body from outside harm that includes the skin, hair, nails, and glands.
- Lymphatic System: the body’s system that helps dispose of toxins. It includes the spleen, thymus, tonsils, lymphatic vessels, lymph nodes, and lymph fluids.
- Endocrine System: the body’s system responsible for hormones regulating metabolism, growth, development, reproduction and much more. It includes the pituitary gland, hormones, the adrenal glands, thyroid, pancreas, and gonads.
- Cardiovascular System: the system responsible for pumping blood throughout the body, it includes the heart, blood, and blood vessels.
- Nervous System: the body’s system responsible for relaying messages from the brain to other parts of the body. It includes the brain, ganglia, nerves, spinal cord, and sensory organs.
Body & Medical Terms
- Cartilage: a strong and flexible tissue that can be found throughout the body, and that protects the ends of bones and joints. The most common type is known as Hyaline Cartilage, located in the nose, windpipe and many joints.
- Abrasion: when top layers of the skin are rubbed away due to trauma, everyday functioning, or therapy.
- Ischemia: refers to an obstruction of the blood vessels that causes a decrease, or general lacking, of blood supply to organs or parts of the body.
- Magnetic Resonance Imaging (MRI): a test used to create extremely detailed pictures of the inside of the body – it uses a magnetic field and radio waves, and is conducted with patients lying inside a large, white, tube-like machine.
- Electroencephalogram (EEG): a test used to detect electrical activity in the brain – it is performed with metal discs attached to the scalp.
Working Towards a Medical Interpreter Certification
Of course, learning medical terms is only part of the work that needs to be done in order to obtain a medical interpreter certification. Interpreters must develop a certain set of skills to help them perform a faithful interpretation when in a medical setting.
This includes the proper way to relay information, as well as the ability to remain unbiased and emotionally uninvolved, regardless of what patients and doctors are saying.
Medical interpreter training programs are a good way to practice these skills and prepare for medical interpreter certification exams. Language Connections offers a 7-week medical interpreter training program focused on:
- Learning common medical industry vocabulary and concepts
- Studying anatomy and the human body
- Learning the ethics of medical interpreting
- Practicing interpreting in scenarios based on real-life experiences
All of our courses are taught by professional medical interpreters, with proven experience in the field. Classes are divided based on language groups, and are small in size, increasing the overall attention from the professor for each individual student.
Get the necessary in-person training to become a competent professional interpreter. Register now for one of our interpreter training programs: Medical Interpreter Training, Legal Interpreter Training or Community & Business Interpreter Training.
Cold Sore Research by Mayo Clinic
The scenario is all too familiar: You feel a tingling on your lip and a small, hard spot that you can't yet see. Sure enough, in a day or two, red blisters appear on your lip. It's another cold sore, probably happening at a bad time, and there's no way to hide it or make it go away quickly.
Cold sores — also called fever blisters — are quite different from canker sores, a condition people sometimes associate with cold sores.
Cold sores are common. Though you can't cure or prevent cold sores, you can take steps to reduce their frequency and to limit the duration of an occurrence.
Cold sore symptoms include:
Cold sores most commonly appear on your lips. Occasionally, they occur on your nostrils, chin or fingers. And, although it's unusual, they may occur inside your mouth — more often on your gums or hard palate, which is the roof of your mouth. Sores appearing on other soft tissues inside your mouth, such as the inside of the cheek or the undersurface of the tongue, may be canker sores but aren't usually cold sores.
Signs and symptoms may not start for as long as 20 days after exposure to the herpes simplex virus, and usually last seven to 10 days. The blisters form, break and ooze. Then a yellow crust forms and finally sloughs off to uncover pinkish skin that heals without a scar.
Certain strains of the herpes virus cause cold sores. Herpes simplex virus type 1 usually causes cold sores. Herpes simplex virus type 2 is usually responsible for genital herpes. However, either type of the virus can cause sores in the facial area or on the genitals. You get the first episode of herpes infection from another person who has an active lesion. Shared eating utensils, razors and towels may spread this infection.
Once you've had an episode of herpes infection, the virus lies dormant in the nerve cells in your skin and may emerge again as an active infection at or near the original site. You may experience an itch or heightened sensitivity at the site preceding each attack. Fever, menstruation, stress and exposure to the sun may trigger a recurrence.
Cold sores and canker sores
Cold sores are quite different from canker sores, which people sometimes associate with cold sores. Cold sores are caused by reactivation of the herpes simplex virus, and they're contagious. Canker sores, which aren't contagious, are ulcers that occur in the soft tissues inside your mouth, places where cold sores don't typically occur.
Cold sores generally clear up without treatment in seven to 10 days. Topical symptomatic treatments such as topical lidocaine or benzyl alcohol (Zilactin) may help relieve symptoms.
Using an antiviral medication may modestly shorten the duration of cold sores and decrease your pain, if started very early. If you experience very frequent bouts, your doctor may prescribe an antiviral medication as a cold sore treatment or to prevent cold sores.
You can take steps to guard against cold sores, to prevent spreading them to other parts of your body or to avoid passing them along to another person. Cold sore prevention involves the following: |
The rise in the number of overweight children in Western countries may be as much to do with their genes as their diet and exercise levels, according to a study that has identified a handful of genetic mutations linked with childhood obesity.
Scientists have discovered that children with the most severe kinds of obesity are more likely than other children to have one or more of four genetic variations in their DNA, which could influence such things as appetite and food metabolism.
The discovery is part of a wider search for the genes involved in increasing a person's risk of becoming overweight when exposed to an "obesogenic environment" of high-calorie food and inactivity which is known to affect some people more than others. The study looked at 1000 children with the most severe form of early-onset obesity, which is highly likely to result in obesity in adulthood.
Some of the 10-year-olds in the study weighed between 80kg and 100kg.
Some of the genetic variations revealed by the study were rare but others are relatively common, suggesting an interaction between genetics and environment, which could explain why certain children become obese while others do not, even when they share a similar upbringing.
The Cambridge University study, published in the journal Nature Genetics, identified four new genetic variants linked with severe childhood obesity. |
Teachers who are developing students’ capacity to "reason abstractly and quantitatively" help their learners understand the relationships between problem scenarios and mathematical representation, as well as how the symbols represent strategies for solution. A middle childhood teacher might ask her students to reflect on what each number in a fraction represents as parts of a whole. A different middle childhood teacher might ask his students to discuss different sample operational strategies for a patterning problem, evaluating which is the most efficient and accurate means of finding a solution.
Mathematically proficient students make sense of quantities and their relationships in problem situations. They bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize—to abstract a given situation and represent it symbolically and manipulate the representing symbols as if they have a life of their own, without necessarily attending to their referents—and the ability to contextualize, to pause as needed during the manipulation process in order to probe into the referents for the symbols involved. Quantitative reasoning entails habits of creating a coherent representation of the problem at hand; considering the units involved; attending to the meaning of quantities, not just how to compute them; and knowing and flexibly using different properties of operations and objects. |
December 9, 2012
Loose pieces of rock that are not connected to an outcrop.
If you’re in the field with a geologist, you’re very likely to hear the word “outcrop” and the phrase “in situ.” When describing, identifying, mapping, and understanding rocks, geologists like to see rocks in context. If rocks were alive, you might say that geologists like to observe rocks in their natural habitats. You might say that geologists like to observe where rocks live and who their neighbors are and how they interact with their neighbors. Of course, rocks aren’t alive, but geologists still find it very useful to observe rocks in situ, a Latin phrase that literally means “in position.” When rock is observed in situ, that means that it is attached to an outcrop, which is a place where bedrock or other “in position” rock is exposed at the Earth’s surface. Sometimes, outcrops are natural: they are places where weathering, erosion, faulting, and other natural processes have exposed hard rock above softer soil, sediment, alluvium, and colluvium. Often, outcrops are manmade. Geologists are fond of observing rocks exposed at manmade outcrops such as roadcuts and quarry walls. Observing rocks in situ at outcrops allows geologists to gather much more information about the rocks than can be gleaned from a fragment of rock alone. By observing rocks in context, geologists can gather much information about the structure, stratigraphic position, size, degree of weathering, and many other aspects of a particular body of rock. Observing rocks in situ at an outcrop is particularly important for geological mapping. Only rocks observed at an outcrop can be confidently delineated on a geologic map.
When geologists encounter pieces of rock that are not found in situ at an outcrop, they refer to these rocks as “floats.” Floats are pieces of rock that have been removed and transported from their original outcrop. Sometimes, float rocks are found very close to outcrops. For example, weathering and erosion may create a pile of float rocks at the bottom of a hill below an outcrop. Often, geologists will first notice float rocks and then will look around– and often find– the outcrop from which the float rocks originated. Of course, geologists can never be 100% sure that a float rock originated from a particular outcrop, but they can be pretty certain if there is a similar rock in outcrop nearby the float. Other times, float rocks are found very far from their original outcrops. Water, ice, and even wind can transport rocks very far from their original outcrops. A well-known type of float rock is a glacial erratic, a rock which has been scraped up and transported by a glacier.
Float rocks can even be transported by anthropogenic activities. Many rocks are quarried and used for buildings, walls, roads, bridges, and other construction projects. Anthropogenic activities can move rocks far from their original outcrops. For example, in rural New Hampshire where I grew up many of the roads are gravel roads. The gravel that covers the roads is quarried and brought in by truck. I like to walk along the gravel roads near my parents’ house in New Hampshire and pick up interesting pieces of gravel. Sometimes, the gravel pieces contain spectacular garnets, micas, and other pretty minerals. I often find myself wondering about the geology of these gravel rocks. I can understand some things about the geology of these gravel float rocks, but to really understand these rocks I’d need to track down the quarry locations and go look at an outcrop or two.
Often, geologists are brought float rocks to identify. Curious non-geologists often pick up loose pieces of rock and bring them to geologists for identification. Commonly, people pick up dark-colored rocks and wonder if they are meteorites (most often, they’re not). Whenever I am brought a float rock (or am sent pictures of a float rock), one of the first questions I ask is, “Where did you find the rock?” I also often ask, “Were there any outcrops of the rock nearby? I mean, places where the rock was still attached to the Earth?” Often, the reply to these questions is, “No, I just picked up the rock. I don’t really remember where– somewhere in such and such place.” I do my best to identify float rocks when I can, but the truth is that there is only so much information that a geologist can gain from a float rock. Don’t get me wrong– geologists can still learn a great amount from float rocks. Nevertheless, geologists prefer to observe rocks in their natural habitats. |
NARRATOR: As any space scientist will tell you, we live in a golden age of solar research. Thanks to a series of new telescopes launched in the last two decades, we now have more – and better – ways to look at the Sun than ever before. The list of missions responsible for this new wave of solar research is long.
Here’s a quick introduction to three key ones that you’ll be using in our Sun Lab.
The first is the SOHO satellite, which launched in December, 1995. A collaboration between NASA and the European Space Agency, SOHO bristles with a dozen telescopes. With each one tuned to a distinct part of the electromagnetic spectrum, SOHO brought new regions of the Sun into focus for the first time.
For example, one of its telescopes uses relatively long wavelengths coming from the surface to indirectly model the Sun’s core.
But SOHO is probably best known for revolutionizing our understanding of space weather. Within its first year, SOHO scientists had learned how to track Coronal Mass Ejections and figure out which ones might be headed our way. Here, SOHO captures a period in 2011 when the Sun produced six of these storms in a single day.
The next groundbreaking mission, called STEREO, consists of two satellites. Since its launch in 2006, STEREO A has raced ahead of Earth in its solar orbit, while STEREO B has fallen further behind. As the telescopes have drifted farther apart, they’ve provided increasingly complementary views of the Sun, each day bringing more of it into view.
This led to a dramatic moment in February, 2011. For the first time in human history, we had a view of the entire Sun, not just the part facing Earth.
Because the Sun takes 27 days to rotate once, there used to be plenty of time for solar activity to develop without us noticing. Now, we can see active regions as soon as they form, leading to much better forecasting.
But the newest and most powerful telescope ever pointed at the Sun is the Solar Dynamics Observatory, or SDO for short. Think of it like SOHO on steroids. From its position near Earth, SDO also looks at many different wavelengths of solar radiation. But more than once per second, it delivers images ten times more detailed than HDTV.
Some estimates say this digital fire hose will transmit 50 times more science data than any mission in NASA history.
In our Sun Lab, you’ll see images from two of SDO’s three main instruments.
The first is AIA, a battery of four telescopes that can look at ten different wavelengths of light. These range from the Sun’s surface up to the highest reaches of the super-hot corona, the key to modeling space weather.
The second SDO instrument featured in the lab is called HMI. The “H” stands for “helioseismology,” because it uses sound waves moving through the surface to model changing magnetic fields generated in the convective zone below. By mapping these complex fields across the entire Sun, HMI helps scientists more quickly spot the conditions that can lead to solar storms.
Of course, these are not the only solar missions, and there are more in the planning stages – including one that will fly right into the Sun’s corona.
So keep looking – not at the Sun itself, but at the images coming from our solar telescopes. Not only is it safer for your eyes, but you’ll see a lot more. |
Quality of content: content is very accurate. The ABG Online Learning Activity used appropriate language and examples to explain arterial blood gases. Understanding ABGs can be difficult for many nursing students; this activity makes a difficult topic more easily understood. This activity can reinforce and review in-class content but should not be used as a sole teaching device. This activity is appropriate for the junior/senior level nursing student.
The activity is set up on individual slides similar to a Power Point presentation. The slides are easy to read and "click" through. Basic ABG information is presented in the first half of the activity with case studies presented in the last half. The case studies are useful as in class review material. The instructor can pull up the case studies, allow the students time to read the study, and then ask the students the associated quiz questions.
This activity is well-designed for use in the fully online classroom as well. This activity will be easy to use by all instructors/students who are familiar with "Power Point" formats. |
The world is a cooler, wetter place because of flowering plants, according to new climate simulation results published in the journal Proceedings of the Royal Society B. The effect is especially pronounced in the Amazon basin, where replacing flowering plants with non-flowering varieties would result in an 80 percent decrease in the area covered by ever-wet rainforest.
The simulations demonstrate the importance of flowering-plant physiology to climate regulation in ever-wet rainforest, regions where the dry season is short or non-existent, and where biodiversity is greatest.
"The vein density of leaves within the flowering plants is much, much higher than all other plants," said the study's lead author, C. Kevin Boyce, Associate Professor in Geophysical Sciences at the University of Chicago. "That actually matters physiologically for both taking in carbon dioxide from the atmosphere for photosynthesis and also the loss of water, which is transpiration. The two necessarily go together. You can't take in CO2 without losing water."
This higher vein density in the leaves means that flowering plants are highly efficient at transpiring water from the soil back into the sky, where it can return to Earth as rain.
"That whole recycling process is dependent upon transpiration, and transpiration would have been much, much lower in the absence of flowering plants," Boyce said. "We can know that because no leaves throughout the fossil record approach the vein densities seen in flowering plant leaves."
For most of biological history there were no flowering plants, known scientifically as angiosperms. They evolved about 120 million years ago, during the Cretaceous Period, and took another 20 million years to become prevalent. Flowering species were latecomers to the world of vascular plants, a group that includes ferns, club mosses and conifers. But angiosperms now enjoy a position of world domination among plants.
"They're basically everywhere and everything, unless you're talking about high altitudes and very high latitudes," Boyce said.
Dinosaurs walked the Earth when flowering plants evolved, and various studies have attempted to link the dinosaurs' extinction or at least their evolutionary paths to flowering plant evolution. "Those efforts are always very fuzzy, and none have gained much traction," Boyce said.
Boyce and Lee are, nevertheless, working toward simulating the climatic impact of flowering plant evolution in the prehistoric world. But simulating the Cretaceous Earth would be a complex undertaking because the planet was warmer, the continents sat in different alignments and carbon- dioxide concentrations were different.
"The world now is really very different from the world 120 million years ago," Boyce said.
Building the Supercomputer Simulation
So as a first step, Boyce and co-author Jung-Eun Lee, Postdoctoral Scholar in Geophysical Sciences at UChicago, examined the role of flowering plants in the modern world. Lee, an atmospheric scientist, adapted the National Center for Atmospheric Research Community Climate Model for the study.
Driven by more than one million lines of code, the simulations computed air motion over the entire globe at a resolution of 300 square kilometers (approximately 116 square miles). Lee ran the simulations on a supercomputer at the National Energy Research Scientific Computing Center in Berkeley, Calif.
"The motion of air is dependent on temperature distribution, and the temperature distribution is dependent on how heat is distributed," Lee said. "Evapo-transpiration is very important to solve this equation. That's why we have plants in the model."
The simulations showed the importance of flowering plants to water recycling. Rain falls, plants drink it up and pass most of it out of their leaves and back into the sky.
In the simulations, replacing flowering plants with non-flowering plants in eastern North America reduced rainfall by up to 40 percent. The same replacement in the Amazon basin delayed onset of the monsoon from Oct. 26 to Jan. 10.
"Rainforest deforestation has long been shown to have a somewhat similar effect," Boyce said. Transpiration drops along with loss of rainforest, "and you actually lose rainfall because of it."
Studies in recent decades have linked the diversity of organisms of all types, flowering plants included, to the abundance of rainfall and the vastness of tropical forests. Flowering plants, it seems, foster and perpetuate their own diversity, and simultaneously bolster the diversity of animals and other plants generally. Indeed, multiple lineages of plants and animals flourished shortly after flowering plants began dominating tropical ecosystems.
The climate-altering physiology of flowering plants might partly explain this phenomenon, Boyce said. "There would have been rainforests before flowering plants existed, but they would have been much smaller," he said.
It is commonly believed that destroying trees will influence the climate of a region. But scientific evidence to support that deforestation and afforestation influence local climate - affecting temperature and rainfall - has only just started emerging.
A new study, led by Borbála Gálos of the Max Planck Institute for Meteorology, found that planting trees - or afforestation - in areas in Europe where there have previously been no trees can reduce the effect of climate change by cooling temperate regions. Using a computer-generated regional climate model, the study showed that afforestation in the northern part of central Europe and Ukraine could reduce temperatures by 0.3-0.5°C and increase rainfall by 10 to 15 percent during summers by 2071-2090.
While the study was specific to the temperate regions, Gálos told IRIN that, in some regions, forests could be effectively used for climate-change mitigation. These studies gain more importance as drought-affected countries like Niger plan a massive afforestation campaign that will regenerate five million hectares of dry degraded land. Additionally, the UN Food and Agriculture Organization (FAO) recently published a policy guide to show that combining tree planting with crop or livestock production could not only stem climate change but also create incomes.
Senior scientist Gordon Bonan of the US-based National Center for Atmospheric Research, a leading authority on the influence of forests on climate change and a contributing author to Intergovernmental Panel on Climate Change (IPCC) assessments, spoke to IRIN about the status of the research in this area.
Q: What is the scientific history of linking climate and forests? There are studies that show conserving our existing forest cover is imperative for keeping carbon emissions low, but the impact of reforestation/afforestation on local climate has not been studied at length or at a regional scale, right?
A: Scientific interest in this problem goes back several centuries, with the European settlement of North America, India and Australia and subsequent widespread deforestation during land clearing. There was a common view that deforestation was altering climate - primarily temperature and rainfall - but the scientific tools to study this were not available until the past 30 years or so, with global and regional climate models and satellite observations.
The most prominent example has been studies of tropical deforestation in the Amazon. Most climate model simulations show that large-scale conversion of tropical rainforest to pastureland creates a warmer, drier climate. This model result is generally accepted. However, the observational evidence for this is lacking, primarily because the model simulations cut down all the rainforest while the observational record is based on much less extensive deforestation. So one cannot compare the observations with the models. Model simulations of less extensive deforestation show a smaller effect on climate.
There was a very interesting observational study [Spracklen et al 2012 Nature 489:282-286] that analysed satellite remote-sensing data of tropical precipitation and vegetation. They found that for more than 60 percent of the tropical land surface, air that has passed over extensive vegetation in the preceding few days produces at least twice as much rain as air that has passed over little vegetation.
Another region of extensive research is the Sahel of northern Africa: Most modelling studies show a warmer, drier climate because of loss of vegetation. Northern Africa was much wetter and supported lush vegetation 6,000 years ago. Studies find that the more productive vegetation at that time led to enhanced rainfall.
The newer research is focused on temperate latitudes in North America and Europe, to investigate how historical deforestation altered climate and how proposed reforestation or afforestation might affect climate.
This work is still uncertain, and one can find modelling studies showing cooling because of deforestation or warming because of deforestation. The response depends on the simulated change in surface albedo [the fraction of solar energy reflected back from the Earth]: Do forests reflect less solar radiation than cropland or grassland? If so, the greater absorption of solar radiation by forests heats the land surface. [Response also depends on] the hydrologic cycle: Do forests evaporate more water than cropland or grassland? If so, greater evaporation by forests cools the surface.
Q: When considering afforestation to mitigate climate change, what should one take into account? The growth period of the trees, the height, the area covered? What kind of difference it would make to temperatures?
A: Deforestation, afforestation, reforestation and other land-use practices are regional in scale. The effect that they have on temperature and precipitation is seen in regional climate. Greenhouse gas warming is a global phenomenon, well seen in the global temperature record and also in regional temperature records. When one compares the simulated regional effects of land use on climate with the simulated regional greenhouse gas effect, one can find that the land use signal is of similar magnitude.
There is much scientific uncertainty in how afforestation affects climate - again related to albedo and evaporation. Different types of trees or other vegetation do differ in growth rates, height, etc., all of which affect albedo and evaporation.
A big research question is the net effect of afforestation on climate. Forest ecosystems store carbon [reducing greenhouse gas warming], but do they warm or cool climate because of changes in albedo and evaporation? And how do these compare with the climate [influences] from carbon storage? These are all very active research areas. |
Definition - What does Welding Respirator mean?
A welding respirator is a piece of protective equipment that protects welders from fumes. Various designs are available, including half-mask respirators, powered air-purifying respirators, and supplied-air respirators, which are used depending on the level of fumes in the workplace. OSHA requires that workers not be exposed to hexavalent chromium and zinc fumes, pollutants produced in the welding process, beyond permissible exposure limits (PELs). Particulate pollution and other fumes can also present hazards. Ways of eliminating the danger, such as material substitution and improved ventilation, are considered first. If it is not possible to reduce exposure through these measures, personal protective equipment is used. |
Our planet is an amazing place! It’s full of plants, animals, geographic anomalies and thousands of other variables that combine to create a perfect space for life.
Even with all this amazing uniqueness, people have begun to notice patterns. Many times, these patterns are helpful when predicting the weather. You can look to clouds, rainbows, colors, animals or even your salt shaker for help with predicting the weather.
While high-pressure and low-pressure systems can create storms, local geography can have a major effect too. Most weather conditions in the United States move west to east. Keep an eye on what is happening to the west of you.
Large bodies of water can have an effect on the weather. Oceans and lakes can keep the temperature more constant. You’ll notice that coastline will be hotter and cool down as you move inland. Many times, lakes can hold in cooler temperatures and you’ll notice cooler air as you get closer to the lakes.
Hills and mountains can also change the weather patterns. For example, the Sierra Nevadas stop most moist clouds from reaching the east side of the slope. Most of the moisture falls on the western side.
While you’re imagining what those clouds in the sky resemble, take a moment to notice what type of clouds you’re looking at.
If you see long streamer-like clouds (cirrus clouds) or scale clouds (altocumulus clouds) that typically means that a storm is on its way. Expect a storm within 36 hours.
If you see a lot of cloud cover at night, it typically means that you’ll have a warm night. Heat radiation will be forced to stay underneath the clouds and warm the atmosphere during the night.
If clouds are going two different directions, that typically means that there is a storm coming your way. You’ll notice that one layer of clouds is going right and another layer is going left. This is typical before a hail storm.
Most storms in the United States travel from west to east. Therefore, a rainbow in the west means that moisture is coming your way. If a rainbow is in the east, it typically means that a storm is leaving your area and you can expect some sun.
Remember the adage, “Rainbow in the morning, need for a warning.”
Speaking of adages, most of us have heard “Red sky at night, sailors delight; Red sky in the morning, sailor’s warning.”
If you see a red sky during sunset (when the sun is in the west), that means that there is a high-pressure system stirring up dust particles in that area. Since prevailing front movements typically move from west to east, that means that the dry air is moving your direction.
If the sky is red in the morning (when the sun is in the east), it means that the high-pressure system has already moved past you. This typically means that a low-pressure system is following close behind and that typically means a storm is coming.
Look for rings around the moon. These rings are caused from light shining through cirrostratus clouds that are typical of warm fronts. These clouds mean that rain is probable within three days.
If the moon is a pale or red hue, it means there is a lot of dust in the air. Sometimes these colors mean there is a lot of pollution in the air too. However, if the moon looks more sharp and brighter than normal, it typically means that there is a low-pressure system moving through the area that has cleared out the dust and dirt. Low-pressure systems are also associated with rain.
Humid air typically means that there is a heavy rain on its way. You can notice humidity from people’s hair. People’s hair will typically curl up or get frizzy.
Pine cones are a great way to determine humidity too. If a pine cone’s scales remain closed it means that the humidity is high. If they are open, the air is dry.
Wood usually swells when it gets humid too. You’ll notice that wood doors will get a little tighter and won’t open as easily.
Scents are typically stronger in moist air. You’ll notice a compost smell that plants release in the atmosphere. Swamps typically release gases just before a storm too. If smells are stronger, it typically means that there is a low-pressure system in place and that leads to rainy weather.
You’ll notice that birds won’t go out as much. If the weather is good, you’ll see birds flying high in the sky. If the pressure is dropping, a lot of birds will be on power lines. If you live on the coast, you’ll notice that seagulls tend to take refuge right before a storm.
Cows are another animal to look to. Cows will typically lay down before a thunderstorm. They will also gather close together to protect themselves. If you notice the cows doing this, a storm might be on its way.
You can even look to insects for help. Ants will usually build their hills a little steeper just before it rains. This helps against the corroding effects of the rain water.
Trees. Deciduous trees often show the undersides of their leaves when there are unusual winds. This is supposedly because they grow their leaves to face right-side up during typical winds. If you see the wind blowing the underside of deciduous tree’s leaves, you know that something is different.
You can also look at the leaves of an oak or maple tree. Their leaves tend to curl when there is high humidity.
Campfires. Watch the smoke from your campfire. If the smoke swirling or is being pushed down, it means there is a low-pressure system in place. If it rises steadily, you should be fine.
Dew. In the morning, check to see if there is dew. If the grass is dry, it typically means that there are clouds and strong breezes. If there is dew on the leaves in the morning, it probably won’t rain that day.
So we’ve covered a few points here, but what do you think? Let us know your superb outdoor tricks to predicting the weather! |
More than nine in ten cancer-related deaths occur because of metastasis, the spread of cancer cells from a primary tumor to other parts of the body. While primary tumors can often be treated with radiation or surgery, the spread of cancer throughout the body limits treatment options. This situation could change if work done by Michael King and his colleagues at Cornell University delivers on its promises, as he has developed a way of hunting and killing metastatic cancer cells.
When diagnosed with cancer, the best news can be that the tumor is small and restricted to one area. Many treatments, including non-selective ones such as radiation therapy, can be used to get rid of such tumors. But if a tumor remains untreated for too long, it starts to spread. It may do so by invading nearby healthy tissue or by entering the bloodstream. At that point, a doctor’s job becomes much more difficult.
Cancer is the unrestricted growth of the body's own cells, which occurs because mutations in a normal cell cause it to bypass a key mechanism called apoptosis (or programmed cell death) that the body uses to clear old cells. However, since the 1990s, researchers have been studying a protein called TRAIL, which on binding to the cell can reactivate apoptosis. But so far, using TRAIL as a treatment of metastatic cancer hasn’t worked, because cancer cells suppress TRAIL receptors.
When attempting to develop a treatment for metastases, King faced two problems: targeting moving cancer cells and ensuring cell death could be activated once they were located. To handle both issues, he built fat-based nanoparticles that were one thousand times smaller than a human hair and attached two proteins to them. One is E-selectin, which selectively binds to white blood cells, and the other is TRAIL.
He chose to stick the nanoparticles to white blood cells because it would keep the body from excreting them easily. This means the nanoparticles, made from fat molecules, remain in the blood longer and thus have a greater chance of bumping into freely moving cancer cells.
There is an added advantage. Red blood cells tend to travel in the center of a blood vessel, and white blood cells stick to the edges. This is because red blood cells are lower density and can be easily deformed to slide around obstacles. Cancer cells have a similar density to white blood cells and remain close to the walls, too. As a result, these nanoparticles are more likely to bump into cancer cells and bind their TRAIL receptors.
King, with help from Chris Schaffer, also at Cornell University, tested these nanoparticles in mice. They first injected healthy mice with cancer cells; after a 30-minute delay, they injected the nanoparticles. These treated mice developed far fewer cancers compared to a control group that did not receive the nanoparticles.
“Previous attempts have not succeeded, probably because they couldn’t get the response that was needed to reactivate apoptosis. With multiple TRAIL molecules attached on the nanoparticle, we are able to achieve this,” Schaffer said.
The work has been published in the Proceedings of the National Academy of Sciences. While these are exciting results, the research is at an early stage. Schaffer said that the next step would be to test mice that already have a primary tumor.
“While this is an exciting and novel strategy,” according to Sue Eccles at London’s Institute of Cancer Research, “it would be important to show that cancer cells already resident in distant organs (the usual clinical reality) could be accessed and destroyed by this approach. Preventing cancer cells from getting out of the blood in the first place may only have limited clinical utility.”
But there is hope for cancers that spend a lot of time in blood circulation, such as blood, bone marrow, and lymph node cancers. As Schaffer said, any attempt to control the spreading of cancer is bound to help. It remains one of the most exciting areas of research and future cancer treatment.
This article was originally published at The Conversation. |
In this activity, you will learn the difference between one-way and two-way communication by creating a cooperative drawing.
You will need Adobe Reader to open the PDF file. On the right under the page image, click on the link that says "In English." Print pages 17-18.
Read the definitions in the "Definition" section. Then, follow the directions in the "Teaching Strategies" section. In a homeschool setting, work with a parent, sibling, or friend. Use paper and pencil rather than a whiteboard, chalkboard, or overhead.
After completing this exercise, discuss whether the article about the zoo and the cartoon are one-way or two-way methods of communicating.
You will need the following supplies: |
Stepping towards two more network layer protocols: ICMP and IPsec.
Internet Control Message Protocol (ICMP)
Every ICMP message carries an ICMP type and a corresponding code. Let's have a look at some of these types and codes.
| ICMP Type | Code | Description |
| --- | --- | --- |
| 0 | 0 | Echo Reply |
| 3 | 0 | Destination Network Unreachable |
| 3 | 1 | Destination Host Unreachable |
| 3 | 2 | Destination Protocol Unreachable |
| 3 | 3 | Destination Port Unreachable |
| 3 | 6 | Destination Network Unknown |
| 3 | 7 | Destination Host Unknown |
| 4 | 0 | Source Quench (Congestion Control) |
| 8 | 0 | Echo Request |
| 9 | 0 | Router Advertisement |
| 10 | 0 | Router Discovery |
| 11 | 0 | TTL Expired |
| 12 | 0 | Bad IP header |
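As a rough illustration, the table above can be loaded into a small lookup structure for decoding received messages. The following is only a Python sketch with our own helper names, assuming the raw bytes of an ICMP message are already in hand:

```python
import struct

# (type, code) -> description, copied from the table above
ICMP_TYPES = {
    (0, 0): "Echo Reply",
    (3, 0): "Destination Network Unreachable",
    (3, 1): "Destination Host Unreachable",
    (3, 2): "Destination Protocol Unreachable",
    (3, 3): "Destination Port Unreachable",
    (3, 6): "Destination Network Unknown",
    (3, 7): "Destination Host Unknown",
    (4, 0): "Source Quench (Congestion Control)",
    (8, 0): "Echo Request",
    (9, 0): "Router Advertisement",
    (10, 0): "Router Discovery",
    (11, 0): "TTL Expired",
    (12, 0): "Bad IP header",
}

def describe_icmp(message: bytes) -> str:
    # Every ICMP message starts with: type (1 byte), code (1 byte), checksum (2 bytes).
    icmp_type, icmp_code, checksum = struct.unpack("!BBH", message[:4])
    name = ICMP_TYPES.get((icmp_type, icmp_code), "Unknown")
    return f"type={icmp_type} code={icmp_code} ({name}), checksum=0x{checksum:04x}"

# Example: the first bytes of an Echo Request (the checksum value here is illustrative only)
print(describe_icmp(bytes([8, 0, 0xF7, 0xFB, 0x12, 0x34, 0x00, 0x01])))
```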
Let's have an example. You must be familiar with the ping operation. In the ping operation, the sender sends an ICMP type 8 code 0 message to the receiving host. The destination, on receiving this message, replies with an ICMP type 0 code 0 message.
There is another message, ICMP type 4 code 0, i.e. the source quench message. This message was originally intended for congestion control, letting a congested router ask a sender to slow down, but it is rarely used now because TCP does this job without the ICMP source quench message.
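To make the ping example concrete, here is a minimal Python sketch (the function names are ours, not from any standard library) that builds the ICMP Echo Request (type 8, code 0) byte for byte, including the Internet checksum a ping client computes before handing the packet to a raw socket:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by ICMP."""
    if len(data) % 2:
        data += b"\x00"                       # pad to an even number of bytes
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    icmp_type, icmp_code = 8, 0               # Echo Request
    # Header with the checksum field set to 0, then recomputed over header + payload.
    header = struct.pack("!BBHHH", icmp_type, icmp_code, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", icmp_type, icmp_code, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=0x1234, sequence=1)
print(packet.hex())
# Actually sending it needs a raw socket (root/administrator privileges are usually required):
# socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
```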
IP Security Protocol (IPsec)

To gain the benefits of IPsec, the whole Internet stack does not need to be transformed. IPsec only needs to be available on the two hosts that want to communicate securely over IP; all the other routers and end systems can keep running plain IPv4.
For example: Suppose there is a company selling computer products, with sales offices in 6 countries and employees travelling to different cities around the globe. Every employee has a company-provided laptop. Now suppose the employees want to share confidential information among themselves, such as pricing information and product information. What should be done to exchange this information securely? Yes, what you are thinking is right. The company will install the IPsec version on all its employees' laptops and on the server at the company headquarters. In this way, all the employees can communicate securely.
Services of IPsec:
1. Cryptographic Agreement:
The sending and receiving host agreed on the cryptographic algorithms and keys.
2. Data Integrity:
The communicating hosts are ensured that the data is not modified during its transmission through different routers and intermediate switches.
3. Encryption / Decryption:
The Data is encrypted using a certain algorithm on which the sender and receiver agreed. Then the data is only decrypted by the receiving IPsec host.
4. Origin Authentication:

IPsec enables the communicating hosts to verify each other’s identity in order to provide data transmission between trusted hosts only.
Note: When two end users communicate over IPsec, all the TCP or UDP packets they exchange are encrypted and authenticated. Thus IPsec provides a layer of security for all the network applications running between the communicating hosts.
Modes of IPsec
There are basically 2 modes of IPsec. These are:
1. Transport Mode:
In transport mode, only the payload of the IP datagram is encrypted and authenticated; the original IP header stays in place, so the datagram is not wrapped inside a new one.
2. Tunnel Mode:
In this mode, the whole original IP datagram is encrypted and then encapsulated inside a new IP datagram with a new header. This is a slightly more involved process, but it protects even the original IP header, so for full security between gateways this has to be done.
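The contrast between the two modes can be sketched in a few lines of Python. This is only a conceptual sketch, not real ESP: the "header" strings are placeholders, and the AES-GCM call (from the third-party cryptography package) simply stands in for whatever cipher the two hosts agreed on. It shows which bytes each mode protects:

```python
# pip install cryptography   (assumed; this illustrates the idea, not the IPsec wire format)
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

def protect(data: bytes) -> bytes:
    """Encrypt + authenticate, standing in for the agreed IPsec algorithms."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, data, None)

original_header = b"IP-HDR[src=10.0.0.1, dst=10.0.0.2]"   # placeholder for a real IP header
payload = b"TCP segment carrying application data"

# Transport mode: the original IP header stays in the clear; only the payload is protected.
transport_mode_packet = original_header + protect(payload)

# Tunnel mode: the whole original datagram is protected and wrapped in a new IP header.
new_header = b"IP-HDR[src=gateway1, dst=gateway2]"
tunnel_mode_packet = new_header + protect(original_header + payload)

print(len(transport_mode_packet), len(tunnel_mode_packet))
```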
This is all from us on ICMP and IPsec. We hope you enjoyed it.
Iron is a necessary mineral we need in order for our body to produce red blood cells. If you don’t get enough iron, you can’t produce enough red blood cells to keep you healthy. This is called iron deficiency anemia.
Most people will consume enough iron in their diet to meet the recommended daily allowance (RDA), however some people may need to take additional amounts for their specific needs. Things like bleeding problems, burns, hemodialysis, intestinal disease, stomach problems, stomach removal, and certain medications may increase one’s need for iron beyond the typical amount.
Symptoms that may present with low iron include fatigue/tiredness, shortness of breath, decreased physical performance, and learning problems in either children or adults. It is also believed that too little iron could increase your chance of getting an infection.
Iron can be consumed in two ways: 1. Heme iron and 2. Nonheme iron. The best dietary source of absorbable (heme) iron is lean red meat. Chicken, turkey, and fish are also sources of iron, but less so than red meat. Poorly absorbed (nonheme) iron can be found in cereals, beans, and some vegetables. |
Montessori’s approach is based on the simple concept of tailoring education to match children’s natural tendencies instead of imposing arbitrary rules from the adult world. Montessori observed that children are extraordinarily curious and experiential, and she sought to respect these attributes instead of condemning them.
Montessori realized that children learn best through discovery, personal intentionality, and experimentation. These observations led her to develop materials that concretely illustrated abstract principles of mathematics, language, and science. Her interactive materials were the first ever developed for children and contained a built-in control of error to allow the child to self-correct rather than remain dependent on the adult.
The Pre-Primary day includes independent and small group activities that develop gross and fine motor skills, cognitive, social and language skills. Art, music and free play round out their busy day. Since children in this age group love to be “on the go,” ample opportunity is provided to explore the outdoor environment, engage in outdoor activities or just run in the sun for a part of every day. |
Sustainable road development can balance economic growth with environmental protection and social inclusion.
More than 400 million people in Asia and the Pacific lack all-weather access to markets and essential services. Moreover, the region’s emerging economies have some of the lowest per capita road lengths in the world. Road transport that is safe, inclusive, low-carbon, and climate-resilient is crucial for reducing poverty and promoting green, inclusive, and sustainable growth.
To foster development and ensure accessibility for all, these countries need to invest substantially in road and other transport infrastructure and enhance transport efficiency. The Asian Transport Outlook estimates that the region will need to build 8 million kilometers of roads by 2030. Furthermore, the costs of maintaining and operating road networks in the region are expected to surpass the costs of new road construction by 2030.
However, roads also have significant environmental and social impacts that need to be addressed.
For instance, road transport is a major source (18%) of global energy-related CO2 emissions and has been leading the increase in carbon emissions in recent decades. Across the world, growth in transport sector emissions is highest in Asia, driven largely by growing demand for road passenger and freight transport.
Road development can also alter the natural hydrology of landscapes, causing springs to dry up in mountain areas, waterlogging to increase in coastal areas, and floods to be amplified as roads disrupt the natural drainage. Roads are also estimated to increase erosion in catchments by 12-40%, which affects soil fertility and water quality.
Road development can fragment and degrade habitats of wildlife, plants, and insects. Roads are among the top three causes of animal mortality in many countries, and also facilitate the spread of invasive species and diseases.
Road construction and operation also have impacts on the well-being of people. For example, dust and vehicle emissions can have major impacts on air quality. The Asian Transport Outlook estimates that 76% of global deaths related to breathing particulate matter happen in Asia and the Pacific. Roads also contribute to urban heat islands, which raise temperatures in cities and affect human health and comfort.
Building and rehabilitating roads consumes a large amount of construction materials, which account for 30-40% of all the materials used in construction projects. The demand for these materials has grown much faster in Asia than in the rest of the world. In the last decade, Asia’s demand has increased by 64%, while the global increase was only 17%. This creates challenges for the sustainability of natural resources and the management of waste and pollution.
The huge impacts of road development require a new approach to how roads are planned, constructed, and managed. Roads should not only serve transportation needs, but also support other objectives such as enabling climate and disaster resilience, improving quality of life, promoting sustainable land and water use, reducing disaster risk, strengthening ecosystems, minimizing pollution, sourcing materials responsibly, and enhancing inclusive economies.
To balance economic, social, and environmental objectives and make roads in Asia and the Pacific greener, a holistic and integrated approach to road planning, development, and maintenance and operations is needed. Green Roads can leverage high demand for road investments to create a win-win situation for both development and environment.
To achieve this, the following actions are needed:
- Protect biodiversity and ecosystems by avoiding or minimizing road impacts on natural habitats and wildlife and creating road features that allow animal crossings and biodiversity conservation, such as verges, underpasses, or bridges.
- Enhance climate and disaster resilience by designing roads and managing road networks that can withstand extreme weather events, minimize accumulation of heat, and reduce disaster risks.
- Contribute to the transition to low-carbon economies by adding and provisioning for more charging stations for electric vehicles to support e-mobility, promoting modal shifts to less carbon-intensive travel modes such as walking and cycling, and using recycled and innovative materials that lower carbon emissions and energy consumption to minimize the carbon footprint of road construction.
- Use sustainable procurement and circular economy practices including selecting lower carbon construction materials and equipment, reuse and recycle waste, and prolong the life span of roads.
- Improve water management by using roads to collect, store, and distribute water for irrigation and other needs.
- Enhance quality of life by minimizing road pollution and improving access to markets and social services.
Green roads can enhance both development and environment by minimizing environmental impacts of roads whilst fostering sustainable growth. |
Most of the stars in the Milky Way Galaxy move at speeds that depend on how far they are from the galaxy’s heart. But some stars strike out on their own. They can zip along many times faster than the others — and in all different directions.
Most of these stars move at up to a few hundred thousand miles per hour relative to the stars around them. The fastest tops out at about four million miles per hour. Some of these stars are moving fast enough to leave the Milky Way behind — and streak into the space between galaxies.
Astronomers have identified at least three ways to give a star such a strong kick.
One is an encounter with the supermassive black hole at the center of the Milky Way. It’s four million times the mass of the Sun, with a powerful gravitational pull. As a binary star system approaches it, the black hole may grab one of the stars and give the other a boost away from it.
A second way is a bit of gravitational acrobatics among three or more stars. That can juggle the orbits of some of the stars, but shove one of them away from the others.
And the third way involves a stellar explosion. When a massive star explodes as a supernova, it loses most of its mass. That means its gravity isn’t as strong. So a companion star could zip off in a straight line — away from the explosion. And the shockwave from the explosion could add to the kick — perhaps punting the companion out of the galaxy.
Script by Damond Benningfield |
Eating the Rainbow! Tackling fussy eating with Toddlers!
Tackling Fussy Eaters
It can be extremely difficult getting your little ones (especially demanding toddlers) to eat a variety of fruit and vegetables each day. Kids often eat foods for taste and satisfaction and not the health benefits. Whilst it’s important to educate your children on the importance of nourishing their body with nutritious foods, be sure to incorporate enjoyable techniques to get them exploring with their taste buds!
A great way to introduce a variety of fruit and vegetables to your children is to entice them with the “colours of the rainbow”. Rainbows are colourful, appealing and create a sense of happiness within a person, so why not incorporate them into your toddler’s meal time?
A simple, yet effective activity you can include in your daily routine at home, is a rainbow food chart. Create a table with the colours of the rainbow along one side, and the days of the week along the top. When your child has eaten a fruit or vegetable of a particular colour on a certain day, get your child to place a sticker in the corresponding box. This allows your child to see their progress across a week and can give them a sense of independence, satisfaction and ownership in their development.
Download our free Eat A Rainbow sheet here!
Some other helpful hints to get your toddler eating their fruit and vegetables include:
Lead by example: if your child observes you eating a variety of fruit and veggies, this can be enough for them to do the same.
Involve your child in food planning and prepping: allow your child to choose fruit and veggies to include in your meals, allow them to wash them and get them to recognise the differences among the variety of the fruit and vegetables that you’re working with. With vegetables, acknowledge your child’s preference on the way they are cooked.
Explore our shop for tools you can use to help involve your little ones in the kitchen:
Positive reinforcement: be aware of the way you talk to your child in regards to eating fruit and veggies. Try to educate them on the positives of eating each piece, not so much the negatives.
For example: Replace “If you don’t eat this you are going to catch a cold” with “Green fruit and vegetables are awesome for keeping you healthy and strong to fight off any colds”
You can try incorporating play games for encouragement and education such as Fruit and Vegetable Puzzles and Flash Cards
Presentation: serve up an assortment of fruit and/or vegetables in an appealing way for your child. For example, use shape cutters to cut the fruit and vegetables (where possible) into different shapes or create a face or object on their plate with the fruit and veggies. See our range of fun kids plates and shape cutters.
Be persistent and resilient! Improvements will happen over time. Small changes can go a long way!
Healthy choices and an array of opportunities to taste, feel and play with food can lead to easier, more successful meal times and a healthier lifestyle. |
Effective Strategies for Teaching Reading
Reading is necessary for more than just enjoying books. Being able to read is necessary to learn academic content in school, function in everyday life and eventually enter the adult world of work. That said, only 35 percent of fourth-graders and 36 percent of eighth-graders could read at a proficient or above level in 2013, according to the National Assessment of Education Progress. Teaching students to read isn't a one-technique process. There are a variety of strategies that effectively teach print awareness, phonics, fluency and more.
1 Using Phonics to Teach
Phonics uses the alphabetic principle to help children learn how to read, according to the website Reading Rockets. Letters and combined letter patterns represent sounds in spoken language, and the phonics strategy encourages beginning readers to recognize the relationships between letters in print and the sounds they make. For example, "th" makes the same first sound in "think" or "thank." Students start phonics instruction by learning individual sounds that the written letters make. This provides a basis for putting sounds together and reading words.
2 Building Language Fluency
Fluency refers to the ability to read in a continuous stream, without stopping or stumbling. Fluent readers are able to put letters together into words while understanding the content behind them. One strategy that is effective when working with young children is to model fluent reading, according to the Scholastic Teachers website. For example, select books from a few different age-appropriate genres, such as fantasy stories, informational texts or poetry. Instruct the children to watch and listen to you as they follow along with your read-aloud. Ask them to pay attention to how you read the words and how easy it is for them to follow what you're saying. Another technique that builds fluency is re-reading. By going over sentences and passages of text, students are able to practice forming the words and gain confidence in their reading skills.
3 Using Existing Knowledge for Comprehension
Understanding the meaning behind the words is necessary in order to read, and using existing knowledge to figure out unknown vocabulary or content can help students to build comprehension skills. For example, if a sentence contains a vocabulary word the student doesn't recognize, he can use his knowledge of the rest of the words to piece together the meaning. Predicting based on prior knowledge also works for comprehending styles of texts. Fill-in-the-blank sentences provide a way to help beginning students use this type of strategy. They can read the sentence and judge the meaning, then add the missing word based on what already is there. If the student knows the conventions of a specific style, he may have the ability to comprehend unknown text.
4 Previewing the Text First
Teaching students to preview the text first can help them more effectively make meaning of the words and improve comprehension. Instead of diving deep into the text, previewing involves looking at section headings, titles, pictures or captions. For example, students can page through their first chapter book and preview the title and chapter names. This gives them an idea what the book is about and what to expect when they read it.
- 1 National Center for Learning Disabilities: The NICHD Research Program in Reading Development, Disorders and Instruction
- 2 The Nation's Report Card: What Level of Knowledge and Skills Have the Nation's Students Achieved?
- 3 Reading Rockets: Classroom Strategies
- 4 Reading Rocket: Phonics
- 5 Scholastic Teachers: 5 Surefire Strategies for Developing Reading Fluency
- 6 National Capital Language Resource Center: Strategies for Developing Reading Skills |
Adults and children living with diabetes, especially those treated with insulin, are at risk for experiencing hypoglycemia, or low blood sugar. Hypoglycemia is a state where blood glucose (blood sugar) levels fall below a normal range, usually less than 70 milligrams per deciliter (mg/dL). You may also hear it referred to as insulin shock or insulin reaction. It is important to speak with your healthcare provider to determine individual blood sugar targets.
Common, milder symptoms of hypoglycemia are headaches, shakiness, dizziness, weakness, anxiety and irritability. More severe symptoms can be life-threatening and include confusion, slurred speech, loss of consciousness and seizures. It is important to know the signs and be prepared for lows in order to avoid progressing to the severe symptoms of hypoglycemia. If you are experiencing symptoms of hypoglycemia, you should check your blood sugar level. If you are unable to check your blood sugar, treat yourself as if you have hypoglycemia and test as soon as possible.
Hypoglycemia can happen very quickly and each person’s reaction is different, so it is important to learn to recognize your own signs and symptoms. It is also important to know how to correct it and always be prepared to correct it. If you have a child with diabetes, collaboration with family, school personnel, daycare providers and healthcare providers is imperative for knowing how to recognize and treat hypoglycemia.
Symptoms of hypoglycemia may be mild, such as loss of energy and shakiness, but if not corrected quickly, blood sugar levels may continue to plummet and lead to more severe symptoms such as seizures and loss of consciousness.
Most people find that they have their unique set of signs and symptoms to alert them to low blood sugar. It is important to become familiar with your signs and symptoms so that you know when to treat hypoglycemia. Keep in mind that your unique symptoms may change over time as well. The list below identifies signs and symptoms of low blood sugar, beginning with mild and progressing to severe symptoms.
Sometimes it may not be possible for a child to communicate that he or she is experiencing symptoms of low blood sugar. If you care for a child with diabetes, take time to learn the signs of hypoglycemia and check the child’s blood sugar levels if you are suspicious.
Signs and symptoms:
Low blood sugar can occur for several reasons, including:
It is not always easy to identify why your blood sugar went too low. What is important is correcting it right away. Later, take some time to reflect on what may have caused it. This may help lower the chance of low blood sugar from happening often. And do not forget, always be prepared!
Not everyone with diabetes is at the same level of risk for low blood sugar. People with type 1 diabetes and those being treated with insulin or other diabetes medications may have a higher risk of experiencing hypoglycemia.
| Type 1 Diabetes | Type 2 Diabetes |
| --- | --- |
| May experience, on average, 43 symptomatic hypoglycemic episodes annually. Severe hypoglycemic episodes may occur twice annually. | If using insulin, may experience an average of 16 symptomatic hypoglycemic episodes annually. Severe hypoglycemic episodes may occur once in five years. |
Learn the actions of your medications from your healthcare provider and know your risks for low blood sugar episodes
Perlmuter LC, Flanagan BP, Shah PH, Singh SP. Glycemic control and hypoglycemia: is the loser the winner? Diabetes Care 2008;31:2072-2076.
How to Correct a Low Blood Sugar Episode
Step 1: Check Your Blood Sugar — If you feel symptoms of low blood sugar, check your blood sugar. If your blood sugar is below 70 mg/dL or at a level designated by your healthcare provider to be too low, correct it. If you are not able to check your blood sugar, but you feel that it is too low, correct it.
Step 2: Consume 15 to 20 grams* of glucose or simple carbohydrates — Pure glucose is the preferred treatment for low blood sugar. Convenient sources of 15 to 20 grams of pure glucose are glucose tablets, liquids and gels.
If you do not have one of these sources of pure glucose, correct low blood sugar by consuming 15 to 20 grams* of a food or beverage containing carbohydrate such as juice, regular soda, skim milk or hard candy. Avoid consuming snacks that are high in fat because the fat may delay carbohydrate absorption. Also, do not consume too much carbohydrate in order to avoid excess calories and rebound hypoglycemia.
Step 3: Wait 15 minutes and then check your blood sugar again — If your blood sugar has risen to within your target range, it is likely that you can resume your activities. If your blood sugar is still too low, consume another 15 to 20 grams* of pure glucose or simple carbohydrate and check again in 15 minutes.
Step 4: Consider eating a snack if your next meal is one to two hours away — If your next meal is more than an hour away you may need to eat a snack to help keep your blood sugar stable until your next meal.
It is important to talk to your healthcare provider about what is best for you when it comes to correcting low blood sugar. It is best to plan ahead and always be prepared!
*Younger children with Type 1 diabetes may need less glucose. Discuss with a healthcare professional.
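To make the check-treat-wait cycle in Steps 1 to 4 easier to visualize, here is a toy sketch of that loop in Python. It is for illustration only, not medical guidance: the `check_blood_sugar` and `consume_glucose_g` callables are hypothetical placeholders, and the 70 mg/dL threshold, 15 to 20 gram dose and 15-minute wait are simply the figures quoted above; always follow the targets agreed with your healthcare provider.

```python
import time

LOW_THRESHOLD_MG_DL = 70   # or the level your provider designates as too low
GLUCOSE_DOSE_G = 15        # Step 2: 15 to 20 grams of glucose or simple carbohydrate
WAIT_MINUTES = 15          # Step 3: wait before re-checking

def correct_low(check_blood_sugar, consume_glucose_g):
    """Repeat the check-treat-wait cycle until blood sugar is back above the threshold."""
    reading = check_blood_sugar()              # Step 1: check your blood sugar
    while reading < LOW_THRESHOLD_MG_DL:
        consume_glucose_g(GLUCOSE_DOSE_G)      # Step 2: consume glucose
        time.sleep(WAIT_MINUTES * 60)          # Step 3: wait 15 minutes...
        reading = check_blood_sugar()          # ...then check your blood sugar again
    return reading                             # Step 4: consider a snack if a meal is far off
```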
Severe hypoglycemia, or very low blood sugar, is dangerous and potentially life-threatening. Symptoms of severe low blood sugar include seizures and loss of consciousness. These conditions hinder the ability to consume a source of glucose or carbohydrate, so the person must rely upon someone else to help correct the low blood sugar and regain consciousness. Therefore, it is important to inform trusted family, friends and co-workers about hypoglycemia and how they can assist you during a severe low.
If you have a severe low, you will want to have a glucagon kit on hand. This kit provides an injection of glucagon, a hormone normally made in the pancreas that raises blood sugar quickly. You will need a prescription for a glucagon kit, so consult your healthcare provider about whether or not you should have one.
A glucagon kit contains a syringe filled with a liquid, a vial of glucagon powder and step-by-step instructions on how to prepare the injection. Learn how to use the kit. Your healthcare provider or diabetes educator may have a practice kit to show you. Also, consider having a dress rehearsal to teach your family, friends and co-workers how to use the kit. Remember, it is possible that you may not be able to use the kit yourself if you have severe low blood sugar.
Store your glucagon kit where you can easily get to it and let others know where it is. Remember that glucagon kits have an expiration date. Check your kit on a regular basis to make sure that it has not expired.
Finally, make sure that your trusted family, friends and co-workers know how to contact your healthcare provider and let everyone you spend time with know how to recognize the symptoms of severe low blood sugar. Also, let them know that they might have to seek immediate medical assistance if they are not comfortable using the glucagon kit or they feel you need emergency help.
After having diabetes for many years, it is possible to experience what is called hypoglycemia unawareness. This is when you have low blood sugar but do not experience any early signs and symptoms. By the time you are aware that you have low blood sugar, you may be in a state of severe hypoglycemia.
It is suggested that people who have hypoglycemia unawareness check their blood glucose level more frequently so they become more aware of when low blood sugar occurs. Also, your healthcare provider may advise you to make adjustments in your diabetes management plan. If you experience hypoglycemia unawareness or severe hypoglycemia you should discuss this with your healthcare provider.
Hypoglycemia Unawareness Tip: Let others know that they should never try to give you liquid or solid food if you are having a seizure or are unconscious due to low blood sugar because this could cause you to choke. After using a glucagon kit, they should immediately seek medical assistance.
Low Blood Sugar Do's
Low Blood Sugar Don'ts
Products formulated with pure glucose are preferred for the treatment of hypoglycemia. It has been demonstrated that glucose tablets raise blood sugar more quickly, without subsequent hyperglycemia, compared to other blood glucose-raising foods. Pure glucose products offer many advantages that make them more convenient for you to always be prepared to correct low blood sugar.
Tablets, liquids and gels specially formulated with glucose:
Foods for Low Blood Sugar
Fruit juice, milk and hard candies are foods that can help raise low blood sugar. However, the carbohydrate in these foods is both glucose and fructose. Glucose raises blood sugar more quickly than fructose, so pure glucose may help correct low blood sugar faster than a food that contains a mix of glucose and fructose.
While products formulated with pure glucose are preferred for correcting low blood sugar, there are times when you may not have a pure glucose product with you when you need it. The following foods are also good choices to raise your blood sugar level:
Remember: It is recommended that you check and correct your blood sugar as soon as you begin to feel symptoms of hypoglycemia. If you are unable to check your blood sugar level, correct it as if it is low. Try not to over-correct low blood sugar with too much glucose or food, as this could cause further complications. Consult your healthcare professional to determine what the best course of action is for you.
Brodows RG, Williams C, Amatruda JM. Treatment of insulin reactions in diabetics. JAMA 1984;252:3378-3381.
American Diabetes Association. Glycemic targets. Sec. 6. In Standards of Medical Care in Diabetes—2015. Diabetes Care 2015;38(Suppl. 1):S33–S40 |
New, young volcano discovered in the Pacific
Researchers from Tohoku University have discovered a new petit-spot volcano at the oldest section of the Pacific Plate. The research team, led by Associate Professor Naoto Hirano of the Center for Northeast Asian Studies, published their discovery in the journal Deep-Sea Research Part I.
Petit-spot volcanoes are a relatively new phenomenon on Earth. They are young, small volcanoes that come about along fissures from the base of tectonic plates. As the tectonic plates sink deeper into the Earth’s upper mantle, fissures occur where the plate begins to bend, causing small volcanoes to erupt. The first discovery of petit-spot volcanoes was made in 2006 near the Japan Trench, located to the northeast of Japan.
Rock samples collected from previous studies of petit-spot volcanoes signify that the magma emitted stems directly from the asthenosphere — the uppermost part of Earth’s mantle which drives the movement of tectonic plates. Studying petit-spot volcanoes provides a window into the largely unknown asthenosphere giving scientists a greater understanding of plate tectonics, the kind of rocks existing there, and the melting process undergone below the tectonic plates.
The volcano was discovered in the western part of the Pacific Ocean, near Minamitorishima Island, Japan’s easternmost point, also known as Marcus Island. The volcano is thought to have erupted less than 3 million years ago due to the subduction of the Pacific Plate deeper into the mantle at the Mariana Trench. Previously, this area was thought to have contained only seamounts and islands formed 70-140 million years ago.
The research team initially suspected the presence of a small volcano after observing bathymetric data collected by the Japan Coast Guard. They then analyzed rock samples collected by the Shinkai 6500, a manned submersible that can dive to depths of 6,500 meters, which confirmed the presence of the volcano.
“The discovery of this new volcano provides an exciting opportunity for us to explore this area further, and hopefully reveal further petit-spot volcanoes,” says Professor Hirano. He adds, “This will tell us more about the true nature of the asthenosphere.” Professor Hirano and his team will continue to explore the site for similar volcanoes since mapping data demonstrates that the discovered volcano is part of a cluster.
Autumn Leaf Suncatcher
Fall leaves are everywhere this time of year! When you head outside, see how the sunlight shines off the bright colors! Today, you’ll make a beautiful suncatcher with fall leaves. Why do leaves change color in the fall? Scientists ask a lot of questions like this, as the first step to learning more about the world. How many questions can you ask while you explore outdoors?
- Clear contact paper, cut into squares (4″ x 4″ or larger)
- Clear tape (optional)
Connect with Nature:
- Head outside and collect colored fall leaves that you find on the ground. The more colors the better!
- Take two squares of clear contact paper, and peel the paper backing off of one square.
- Choose the leaves that you want to include in your suncatcher and place them on the sticky side of the contact paper, making a cool design!
- Use scissors to cut the leaves into other shapes, or a hole punch to make small leaf circles to add to your design. (optional)
- When your design is finished, peel the backing off the other square of contact paper and lay its sticky side on top of your leaves. This seals the two sheets together with the leaves in between.
- Leave your suncatcher as a square, or cut it into a fun shape.
- Tape your suncatcher to a sunny window and enjoy the bright colors!
- Why do the colors of leaves change in the fall? Learn more about this amazing science at https://bit.ly/33EmCyb
- Track how the leaf colors change in your suncatcher over the next few weeks!
Before You Explore:
Phenology is the science of the relationship between climate and seasonal biological events (such as bird migration and plant flowering). Phenology is nature’s calendar!
- Phenology wheel template
- Colored pencils, markers, or crayons
Connect with Nature:
- Make your own phenology wheel using the example here. (Or search for ‘printable phenology wheel’ online to see other examples.)
- Collect a few things to write and draw with.
- In the slice of the wheel under each month, draw a picture of something important in nature that you observe in that month.
- In the center slices of the wheel, add something special that happened to you that month. Maybe a birthday, holiday or unforgettable weather!
- At the end of each year, you’ll have an awesome record of the natural world around you.
- Enjoy observing and tracking nature!
- Did you know that YOU can be a phenologist? Visit https://budburst.org/ to get started today being a Bud Burst Observer and reporting on events in nature in your neighborhood.
- Challenge yourself to create a daily phenology wheel for one month! Track the phases of the moon and daily weather. Find a one-month template here: https://bit.ly/2GDsC1p |
Do you have a little person in your life that is hard to understand? At Peninsula Speech Pathology Services we have a dedicated and experienced team to help children achieve their articulation goals!
What is articulation?
Articulation refers to the production of individual sounds required for speech. In the English language we have 44 sounds in total – far more than the letters of our alphabet! In order to make these sounds we use our ‘articulators’, which are our teeth, lips, tongue, palate and vocal cords.
Amongst our 44 sounds, we have 24 consonant sounds and 20 vowel sounds.
Consonant sounds require a lot of coordinated input from our articulators, with each sound requiring the movement of one or more of our teeth, lips, tongue and palate.
- We use our lips to produce the ‘p’ and ‘b’ sounds.
- We use our tongue and palate to produce our ‘k’ and ‘g’ sounds.
We also have ‘voiceless’ sounds, where our vocal cords are not used, and ‘voiced’ sounds where our vocal cords are used.
- ‘p’ is a voiceless sound where the vocal cords are not used.
- ‘b’ is a voiced sound where the vocal cords are used.
Vowel sounds require our mouth to move into precise positions to achieve an accurate sound.
- ‘ee’ as in ‘tree’ is produced with our mouth in a smiling position.
- ‘oo’ as in ‘too’ is produced with our mouth in a rounded position.
With so many sounds to learn as well as language, it is common for children to have errors as they are learning to talk!
What is a speech sound disorder?
Children are learning so many new words in the early years of childhood. It is common for them to make mistakes as they learn these new words, which can make them hard to understand. A speech sound disorder occurs when a child continues to make these mistakes past a certain age; there are age ranges for when children are developmentally expected to say certain sounds accurately. There are different types of disorders depending on the type and level of the sound breakdown.
Articulation disorder: An articulation disorder involves difficulty using our ‘articulators’ to make individual sounds. As a result, sounds can be substituted, left off, added or changed.
- A child saying “nana” for “banana”
- A child saying “wabbit” for “rabbit”
Phonological disorder: A phonological process disorder involves difficulty with a pattern of sounds. It is considered to be part of normal development; however, it is deemed a disorder if it persists past a certain age. It involves sounds produced in one area of the mouth being systematically replaced by sounds made in another area. For example:
- A child replacing all sounds made at the back of the mouth, like "k" and "g", with those made at the front of the mouth, like "t" and "d"
- A child saying “tup” for “cup” and “dirl” for “girl”.
There are also much more severe speech sound disorders known as motor speech disorders. This is due to the brain having difficulty performing the planning and movements required for speech. The child knows what they want to say, however their brain has difficulty coordinating the movements required to say the words accurately. In children, this is called Childhood Apraxia of Speech and is far less common than the aforementioned speech sound disorders.
How do I know if my child has a speech sound disorder?
A Speech Pathologist will be able to assess your child using a range of assessments to evaluate your child’s speech. They will be able to determine if there is a speech sound disorder present and will be able to develop goals guided by your family and assessment results.
What will the therapy involve?
After having assessed your child’s speech and establishing goals with the family, therapy begins! Sounds are usually treated in order of their acquisition. The sounds treated first would be the early developing lip sounds usually established by age 3 (e.g. m, p, b), then moving on to the palate sounds usually established by age 5, and finally the sounds that require much more complex movements, usually established by age 7.
When treating sounds in therapy, we work through a hierarchy and ensure the sound is established at each level before moving on:
- Isolation – ‘s’
- Syllables – ‘see’
- Words – ‘sun’
- Phrase – ‘my sock’
- Sentence – ‘I have a seat’
- Conversation – ‘I like to play in the sand at the beach when it is sunny.’
- Generalisation – Sound has generalised across a range of contexts and environments.
We may begin working on the sound at the beginning of words, middle of words or at the end of the word depending on whether your child can produce the sound in some positions but not others. The aim of therapy is to ensure they are able to say the sound in all positions of words and to have this generalised across a range of contexts and environments! |
Help! My child is very emotional and has lots of fits and/or tantrums. I’m tired of the screaming, fits, pouting and rage.
In order to better control himself and to interact with others, a child needs to be aware of what he is feeling and then be able to act on those feelings. There are a number of ways to teach these two skills.
Finally, you can teach your child to understand his real feelings by attaching appropriate language to what is happening. If a child says he hates his grandmother, it is probably because he does not have the language development to say something like “I am really frustrated at grandmother for making me wear these itchy pants”. The adult needs to label the feelings for the child. Since a parent cannot fully know what a child is feeling, you can guess: “I am wondering if you might be feeling frustrated about not being able to pull your socks on”. Sometimes, adults teach children the names of just the basic feelings: sad, happy, and angry. We should also teach the nuances of feelings such as rage, anger, frustration and annoyance to develop even more self-awareness in children.
Effector Functions of Antibodies
Antibodies, also known as immunoglobulins, are secreted by plasma cells and B lymphocytes from the bone marrow and the lymphoid organs. The effector functions of antibodies are determined by the constant regions of the heavy chain. There are five different isotypes known in mammals, each performing different roles and directing a specific immune response for the antigen encountered. The binding of antigens to the variable regions will trigger the effector functions. Antibodies are only able to perform their functions upon entering the blood and the peripheral sites of infection. They prevent the entry of potential microbes through the epithelia. Antibodies are produced as early as the first week of vaccination or infection. Vaccination aims to produce long-lived plasma cells and memory cells.
During the primary response to a microbe, plasma cells help to secrete small amounts of antibodies for a long period of time. If there is a repeated attack by the antigen, there will be a more effective defence against the infection as memory cells differentiate into antibody-producing cells. Antibodies have both antigen-binding (Fab) regions and Fc regions to carry out different functions. The Fab region binds to the microbe and toxins to block the harmful effects. The Fc regions consist of heavy chain constant regions and bind to phagocytes and complement; however, this requires antigen recognition by the Fab region. Isotype switching and affinity maturation occur in antibodies produced by antigen-stimulated B lymphocytes in response to protein antigens. Isotype switching causes the production of antibodies with distinct Fc regions with different effector functions. IgG carries out neutralization of toxins and microbes, activation of the classical pathway of the complement, and opsonisation of antigens for phagocytosis.
It is the only class of immunoglobulin involved in neonatal immunity, where it crosses the placenta and the gut to transfer the maternal antibodies. There are four subclasses, which vary in hinge-region flexibility and glycosylation sites. Some, such as IgG2 and IgG4, respond to T-independent antigens. IgM acts as the best complement-activating antibody and the first antibody to be made in response to the antigen. It activates the classical pathway of the complement and is expressed on the surface of mature B cells. IgA is expressed in mucosal immunity, such as gastric fluid and tears, where it is secreted into the lumens of the respiratory and gastrointestinal tracts. It is involved in neutralization of microbes and toxins. It is a monomer in the serum; on mucosal surfaces, it acts as a dimer consisting of two four-chain units linked by the joining (J) chain. IgD does not have significant functional properties; however, it is expressed on the surface of mature B cells and has a similar antigen-specificity to IgM.
IgE antibodies are essential to destroy helminthic parasites and are involved in diseases associated with allergies. The IgE antibody binds to the worms, and FceRI, the high-affinity Fc receptor for IgE, aids in the attachment of eosinophils. Eosinophils kill helminths by releasing their granule contents after being activated. Mast cells may also be activated to secrete cytokines to attract leukocytes to destroy the worms. Antibodies help to neutralize microbial infectivity and block the interaction of toxins with host cells. Neutralization prevents microbes from gaining entry and infecting the host. It also prevents the spread of infection when microbes manage to enter host cells and infect neighbouring cells. Endotoxins or exotoxins bind to specific receptors and are responsible for the harmful effects. Opsonisation is the process where antibodies coat microbes for phagocytosis. Opsonins are the molecules that promote phagocytosis. For certain isotypes such as IgG1 and IgG3, the Fc regions of antibodies will bind to CD64, which is expressed on macrophages and neutrophils. The binding activates phagocytes.
Activated neutrophils and macrophages contain substances, such as nitric oxide and the proteolytic enzymes in their lysosomes, that kill the ingested microbe. Antibody-dependent cellular cytotoxicity is a process whereby natural killer cells, after their CD16 Fc receptors are engaged, are activated to discharge their granules and kill the antibody-coated cells. The complement system consists of cell membrane and circulating proteins that contribute to the antimicrobial activity of antibodies in defending the body. It may be activated in both innate and adaptive immunity.
It is a series of cascading enzymatic events which lead to opsonisation, phagocytosis and lysis of the microbe, and which stimulate inflammation. After activation of complement, complement proteins are cleaved and C3b attaches to microbial surfaces. The cytolytic membrane attack complex forms in one of the last stages of complement activation. Cell surface and circulating regulatory proteins are expressed by mammals to prevent inappropriate complement activation. Microbes have several mechanisms to evade humoral immunity.
Mutations of antigenic surface molecules allow many such bacteria and viruses to evade antibodies in infections. An example is antigenic variation in viruses such as human immunodeficiency virus (HIV). Because of the many variants of the major antigenic surface glycoprotein, gp120, present in HIV, vaccines have so far failed to protect people against HIV infection: antibodies raised against one isolate cannot protect against other HIV isolates. Bacteria such as Escherichia coli can alter the antigens in their pili to evade antibodies. Other ways in which microbes evade humoral immunity are by resisting phagocytosis and inhibiting complement activation.
Working Muscles Suffer And Anaerobic Modes Kick In
When blood volume is split among competing interests during exercise in heat, the next victim is active muscle.
Muscles engaged in activity suffer because they aren’t getting as much oxygen from the blood. For endurance athletes oxygen is gold; it’s the fuel that allows us to sustain exercise for longer durations, and without it we’re forced to rely more on pain-inducing anaerobic (without oxygen) modes of producing energy.
Increased anaerobic energy production affects exercise at all intensities and causes a slew of issues including higher total energy expenditure and blood lactate accumulation. Also, carbohydrates are used for energy more than lipids (fat), and since carbohydrate fuel stores are extremely limited in the body, exhaustion is reached much sooner.
In the end, this shift from aerobic to anaerobic modes will generally result in a faster onset of muscular fatigue. |
Capacitors are passive devices that are used in almost all electrical circuits for rectification, coupling and tuning. Also known as condensers, a capacitor is simply two electrical conductors separated by an insulating layer called a dielectric. The conductors are usually thin layers of aluminum foil, while the dielectric can be made up of many materials including paper, mylar, polypropylene, ceramic, mica, and even air. Electrolytic capacitors have a dielectric of aluminum oxide which is formed through the application of voltage after the capacitor is assembled. Characteristics of different capacitors are determined by not only the material used for the conductors and dielectric, but also by the thickness and physical spacing of the components.
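As a rough illustration of how plate area, spacing, and dielectric material set capacitance, here is a minimal sketch of the ideal parallel-plate approximation C = eps0 * eps_r * A / d. The example values (a polyester-like dielectric with relative permittivity around 3.2, 1 cm² plates, 10 µm spacing) are assumptions chosen purely for illustration, not the specifications of any part listed below.

```python
EPS0 = 8.854e-12  # permittivity of free space, in farads per meter

def parallel_plate_capacitance(area_m2, spacing_m, relative_permittivity):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d, in farads."""
    return EPS0 * relative_permittivity * area_m2 / spacing_m

# Hypothetical example: 1 cm^2 plates, 10 micrometer dielectric, eps_r ~ 3.2 (polyester)
c = parallel_plate_capacitance(area_m2=1e-4, spacing_m=10e-6, relative_permittivity=3.2)
print(f"{c * 1e12:.0f} pF")  # roughly 283 pF
```

The formula makes the trade-offs visible: thinner dielectric layers and higher-permittivity materials raise capacitance, which is why the choice of dielectric and its thickness largely defines a capacitor family.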
Orange Drop capacitors have set the standard for "modern" capacitors in appearance and performance since the 1960s.
Capacitors - Orange Drop, 600V, Polyester
Rated for 85°C operation. Polyester film capacitor with radial leads. ± 10% tolerance. Features a long history of proven reliability, a low dissipation factor, excellent stability, and a virtually linear temperature coefficient.
The US Department of Education defines the difference in the following way:
Differentiation refers to instruction that is tailored to the learning preferences of different learners. Learning goals are the same for all students, but the method or approach of instruction varies according to the preferences of each student or what research has found works best for students like them.
Personalization refers to instruction that is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary.
As we have gone beyond differentiation and are looking at the individual child, the importance of using data collected about that child is of paramount importance. Last week's staff meeting was devoted to working in groups to look at data we have and to ask what more we need. Next week many of the teachers are participating in a 3-day workshop with trainers from the Data Wise group at Harvard University's Graduate School of Education.
The approach to assessment is also very different. Differentiated assessment is assessment for learning. It is formative with the aim of aligning further instruction with helping students meet the pre-determined learning outcomes. It provides the teacher with information about what the students know and can do in order to help them to design the next steps. Personalized assessment regards assessment as learning. It is the learners who self-assess and reflect on their progress and come up with their own new goals. Because assessment is different, reporting is different too. In personalized learning it is the learner's responsibility to identify their strengths, communicate their learning to others and to plan how they can develop further.
The differences between individualized, differentiated and personalized learning are explained very clearly in this chart by Barbara Bray and Kathleen McClaskey.
Photo Credit: by Ken Owen, 2011 |
One of the most important parts of the special education process is to create an educational plan for a student. That plan is called the Individual Education Program (IEP). The IEP is a written plan of the educational program designed to meet the individual needs of the student. Each student who receives special education services should have an IEP.
Your child's IEP should establish reasonable educational goals for your child, and provide detail about the services that the school district should offer for your child. For this reason, it is a very important document in your child's education — and you have the right to participate as a member of the team that develops it. The law is very clear regarding this right. In fact, the parents' contributions are invaluable. You know your child better than anyone, and the school needs to hear your insights and concerns.
The IEP is developed in a meeting with the members of the IEP team. It is important that you go to this meeting and share your ideas about your child's needs and potential. Afterwards, the school should implement the IEP as it was written.
When will the IEP meeting take place? It depends. There are different reasons that IEP meetings are held. A meeting should be scheduled with you at least once a year, to review your child's progress and to develop the next IEP of your child. A meeting should also be scheduled when:
- the plan is developed for the first time
- there is concern about the services your child is receiving
- it's necessary to review your child's progress
- there is a behavioral problem
- your child's placement is changing
- it's time to discuss the transition to adulthood
You can request this meeting when you feel it is necessary, and it is very important that you attend it as well.
You can learn more about the development, content, and revision of the IEP, and the importance of parent participation from the following websites:
The Individuals with Disabilities Education Act (IDEA) is the federal law that authorizes special education for children with disabilities in the United States. The law was amended in 2004. To read more about IDEA, take a look at the following websites: |
On this day in 1965, President Lyndon Baines Johnson signs the Voting Rights Act, guaranteeing African Americans the right to vote. The bill made it illegal to impose restrictions on federal, state and local elections that were designed to deny the vote to blacks.
Johnson assumed the presidency in November 1963 upon the assassination of President John F. Kennedy. In the presidential race of 1964, Johnson was officially elected in a landslide victory and used this mandate to push for legislation he believed would improve the American way of life, which included stronger voting-rights laws. A recent march in Alabama in support of voting rights, during which blacks were beaten by state troopers, shamed Congress and the president into passing the law, which was meant to enforce the 15th Amendment of the Constitution, ratified in 1870.
In a speech to Congress on March 15, 1965, Johnson had outlined the devious ways in which election officials denied African-American citizens the vote. Blacks attempting to vote were often told by election officials that they had gotten the date, time or polling place wrong, that the officials were late or absent, that they possessed insufficient literacy skills or had filled out an application incorrectly. Often African Americans, whose population suffered a high rate of illiteracy due to centuries of oppression and poverty, would be forced to take literacy tests, which they inevitably failed. Johnson also told Congress that voting officials, primarily in southern states, had been known to force black voters to “recite the entire constitution or explain the most complex provisions of state laws” – a task most white voters would have been hard-pressed to accomplish. In some cases, even blacks with college degrees were turned away from the polls.
Although the Voting Rights Act passed, state and local enforcement of the law was weak and it was often outright ignored, mainly in the South and in areas where the proportion of blacks in the population was high and their vote threatened the political status quo. Still, the Voting Rights Act gave African-American voters the legal means to challenge voting restrictions and vastly improved voter turnout. In Mississippi alone, voter turnout among blacks increased from 6 percent in 1964 to 59 percent in 1969. In 1970, President Richard Nixon extended the provisions of the Voting Rights Act and lowered the eligible voting age for all voters to 18. |
Metacognitive skills are arguably the most important set of skills we need for our journey through life as they orchestrate every cognitive skill involved in problem-solving, decision-making and self-monitoring (both cognitive and socio-affective). We start acquiring them at a very early age at home, in school, in the playground and in any other social context an individual interacts with other human beings. But what are metacognitive skills?
What is metacognition?
I often refer to metacognition as ‘the voice inside your head’ which helps you solve problems in life by asking you questions like:
- What is the problem here?
- Based on what I know already about this task, how can I solve this problem?
- Is this correct?
- How is this coming along?
- If I carry on like this where am I going to get?
- What resources should I use to carry out this task?
- What should come first? What should come after?
- How should I pace myself? What should I do by when?
- Based on the criteria I am going to be evaluated against, how am I doing?
The challenge is not only to develop our students’ ability to ask themselves these questions, but also, and more importantly, to enable them to do this at the right time, in the right context and to respond to those questions promptly, confidently and effectively by applying adequate cognitive and social strategies.
How does one become highly ‘metacognizant’?
Let us look at two subjects from an old study of mine, student A and student B, in the examples below. The reader should note that the data below were elicited through a technique called concurrent think-aloud protocol (i.e. the two students were reflecting on the errors in their essays whilst verbalizing their thoughts).
Self-questioning by student A:
Question: What is the problem here?
- Too many spelling mistakes
- I must check my essay more carefully with the help of the dictionary
- I also need to go through it more times than I currently do, I think
Self-questioning by student B:
Question 1: What is the problem in my essay?
- There are too many spelling mistakes
- I need to check my essay more thoroughly
- I rarely use the dictionary; I usually trust my instinct
- I also need to go through it three or four times
Question 2: What are my most common spelling mistakes?
- Cognates, I get confused
- Longer words, I struggle with those, too
- I usually make most of my mistakes toward the end of the essay
- I also make mistakes in longer sentences
Question 3: But why in longer sentences?
- Maybe because I tend to focus on verbs and agreement more than I do on spelling
Both students identify the same problems with the accuracy in their essays. They both start with the same identical question, but Student B investigates it further through more self-questioning. In my study, which investigated metacognitive strategies, most of my informants tended to be more like student A; very few went spontaneously, without any prompt from me, as far as student B, in terms of metacognitive self-exploration.
How did student B become so highly metacognizant? Research indicates that, apart from genetic factors (which must not be discounted), the reason why some people become more highly metacognizant than others is because that behavior is modelled to them; in other words, caregivers, siblings, people in their entourage have regularly asked those questions in their presence and have used those questions many a time to guide them in problem solving or self-reflection. I cannot forget how my father kept doing that to me, day in day out since a very early age: ‘why do you think it is like this?’, ‘how could we fix this?’, ‘why do you think this statement is superficial?’, ‘how can you write this introduction better?’ – he would ask. I used to hate that, frankly, as I would have preferred to just get on with reading my favourite comics or watching tv; but it paid off. The intellectual curiosity, the habit of looking at different angles of the same phenomenon, the constant quest for self-improvement that I eventually acquired were ultimately modelled by those questions.
This is what a good teacher should do: spark off that process, by constantly modelling those questions, day in day out, in every single lesson, so as to get students to become more and more aware of themselves as language learners: what works for them and what doesn’t; what their strengths and weaknesses are and what they can do to best address them; how they can effectively tackle specific tasks; what cognitive or affective obstacles stand in the way of their learning; how they can motivate themselves; how they can best use the environment, the people around them, internet resources, etc. in a way that best suits them.
Twelve easy steps to effective modelling of metacognitive-enhancing questioning
But how do we start, model and sustain that process? There are several approaches that one can undertake in isolation, or, synergistically. The most effective is Explicit Strategy Instruction, whereby the teacher presents to the students one or more strategies (e.g. using a mental checklist of one’s most common mistakes in editing one’s essay); tells the students why it/they can be useful in improving their performance (reduce grammatical, lexical and spelling errors); scaffolds it for weeks or months (e.g. asks them to create a written list of their most common mistakes to use every time they check an essay produced during the scaffolding period); then phases out the scaffolding and leaves the students to their own devices for a while; at the end of the training cycle, through various means, the teacher checks if the target strategy has been learnt or not.
The problem is, with two hours’ teacher contact time a week, doing the above properly is a very tall order, and the learning gains in terms of language proficiency may not justify the hassle. I implemented a Strategy Instruction program as part of my PhD study; it was as effective as it was time-consuming, and I could afford it because I was a lecturer on a 14-hour timetable. Would I recommend it to a full-time secondary teacher in a busy UK secondary school? Not sure… So what can we do to promote metacognitive skills in the classroom?
There are small and useful steps we can take on a daily basis which can help, without massively adding to our already heavy workload. They involve more or less explicit ways of modelling metacognitive or metacognitive-enhancing self-questioning. Here are some of the 41 strategies I have brainstormed before writing this article.
- At the beginning of each lesson, after stating the learning intentions, ask the students how what they are going to learn may be useful/relevant to them (e.g. ‘Why are we learning this?’, ‘How is this going to help you be better speakers of French?’)
- Before starting a new activity ask the students how they believe it is related to the learning intentions; what and how they are going to learn from that activity (e.g. ‘Why are we doing this?’);
- On introducing a task, give an example of how you would carry out that activity yourself (whilst displaying it on the interactive whiteboard/screen) and take them through your thought processes. This is called ‘think-aloud’ in that you are verbalizing your thought processes, including the key questions that trigger them (e.g.: I want to guess the meaning of the word ‘chère’ in the sentence “C’est une voiture chère”. I ask myself: is it a noun, an adjective,…? It is an adjective because it comes after the word ‘voiture’ which is a noun. Is it positive or negative? It must be positive because I cannot see ‘pas’ here. Does it look like any English word I know? No, it doesn’t… but I have seen this word at the beginning of a letter as in ‘Chère Marie’… so it can mean ‘dear’… How can a car be ‘dear’? Oh I get it: it means expensive. It is an expensive car!)
- At the end of a task, ask students to self-evaluate with the help of another student (functioning as a moderator, rather than a peer assessor) using a checklist of questions, the use of which you would have modelled through think-aloud beforehand. For the evaluation of a GCSE-like conversation this could include: Where the answers always pertinent? Was there a lot of hesitation? Was there a good balance of nouns, adjectives and verbs? Were there enough opinions? Were there many mistakes with verbs? Etc.
- Encourage student-generated metacognitive questioning by engaging students in group-work problem-solving activities. The rationale for working in a group on this kind of activity is that at least one or two of the students in the group (if not all of them) will ask metacognition-promoting questions and, by so doing, model them to the rest of the group. If this type of activity becomes daily practice (in all lessons, not just MFL ones), the questions they generate might, in the long term, become incorporated into one’s repertoire of thinking skills. Such activities may include: (1) inductive grammar tasks, where students are given examples of a challenging grammar structure and have to figure out how the rules governing that structure work (see my activity on French negatives: https://www.tes.co.uk/teaching-resource/inductive-task-on-negatives-6316942); (2) inferring the meaning of unfamiliar words in context; (3) real-life problem-solving tasks: planning a holiday and having to reserve tickets online, find a hotel that suits a pre-defined budget, etc.
- Get students, after completing a challenging task, to ask themselves questions like: “what did I find difficult about it?”; “Why? ”; “What did I not know?”, “What will I need to know next time?”.
- On giving students back their corrected essays, scaffold self-monitoring skills by getting them to ask themselves: “Which ones of the mistakes I made in this essay do I make all the time?”, “Why?”, “What can I do to avoid them in the future?”
- Every now and then (do not overdo this), at key moments in the term, get the students to ask themselves questions about the way they learn. For example, after telling them, concisely and using a fancy diagram (e.g. the curve of forgetting by Ebbinghaus), how and when forgetting occurs, ask them to reflect on what distracts them in class or at home and what one can do to eliminate those distracting factors;
- At the beginning of each school year, to get them into a reflective mood and to gain a valuable insight into their learning habits and issues, ask them to keep a concise reflective journal to write at end of each week with a few retrospective questions about their learning that week. Avoid questions like: “What have I learnt this week?” Focus on questions aimed at eliciting problems about their learning and what they or you can do to address them.
- Ask them, whilst writing an essay, to review the final draft and ask themselves the question: “What is it that I am not sure about?”, and to highlight every single item in the essay evoked by that question.
- Ask them, at the end of a lesson, to fill in a google form or just write on a piece of paper to hand in to you the answer to the questions: “What activity benefitted me the most today? Why?”
- Ask your students to think about the ways they reduce their anxiety in times of stress (e.g. the run-up to the French end-of-year exams?); do they always work? Are there any other techniques they can think of to keep stress at bay? Are there any other techniques ‘out there’ (e.g. on the Internet) that might work better? I have done this with a year 8 class of mine and I was truly amazed at the amount of effort they put into researching (at home, of course) self-relaxation techniques and at the quality of their findings (which they shared with their classmates).
It goes without saying that there are classes with which one would be able to do all of the above and others where one will be lucky if one can use one or two of the above strategies. It is also important to keep in mind that by over-intellectualizing language learning in the classroom you may lose some of the students; hence one should use those strategies regularly but judiciously and, most importantly, to serve language learning – not to hijack the focus of the lesson away from it. The most important thing is that the students are exposed to them on a daily basis until they are learnt ‘by osmosis’, so to speak.
Metacognitive literacy and explicit instruction
Ideally, the modelling and fostering of metacognitive self-questioning will be but the beginning of a more explicit and conscious process on the part of the teacher, to, once s/he believes the students have reached the maturity necessary to do so, impart on them a metacognitive literacy program. By this I mean that, just as we assign a name, in literacy instruction, to each part of speech or word class (e.g. adjective, noun, etc.), we should also acquaint them with what each metacognitive strategy is called, what purpose it serves and which of the questions modelled to them over the months or years it relates to. Sharing a common language is crucial in any kind of learning, especially when dealing with higher-order thinking skills. After all, as Wittgenstein said: “The limits of my language are the limits of my world”.
Once that common language is well-established in the classroom, the implicit metacognitive modelling that the teacher has embedded regularly in his/her lesson can be made explicit and strategy training can be implemented using the framework that I have already outlined above and that I reserve to discuss at greater length in a future post:
1. Strategies are named and presented
2. Strategies are modelled
3. Strategies are practised with scaffolding
4. Strategies are used without scaffolding
5. Strategies uptake is verified by test and/or verbal report |
The cocoa tree – theobroma cocoa – thrives in warm and humid regions near the equator. Although its origins have not been conclusively proven, cocoa was probably already of great importance to ancient Latin American civilizations 2000 years before the birth of Christ. Historians have found the oldest cups and plates for eating and drinking cocoa in the small village of Ulúa in Honduras. These utensils were presumably used exclusively for preparing and enjoying Xocoatl, the original cocoa drink. Today cocoa is cultivated primarily in the tropical rain forests of West Africa, Asia and Latin America. The major cocoa-producing countries are Ivory Coast, Ghana and Indonesia.
There are three distinct varieties of cocoa trees along with numerous subspecies that have been created through cross-breeding:
- The Criollo tree is a rare variety and produces aromatic and very fine cocoa beans. These trees mainly grow in Latin America. They are very vulnerable to meteorological changes and often have low yields.
- The high-yielding Forastero tree, which is grown on almost every cocoa plantation in West Africa, supplies about 70% of the world’s cocoa crop.
- The Trinitario is a hybrid created by cross-breeding the two varieties above, combining the high yield of the Forastero with the delicate flavor of the Criollo beans.
The unique, distinctive taste of cocoa beans varies according to the region, the weather and the soil conditions where they are cultivated.
Beans from different countries are mixed in most chocolate recipes to ensure uniform taste and quality. There is also pure chocolate called origin chocolate, i.e. unblended chocolate made of cocoa beans that were grown in one particular region. This means every single-origin chocolate has a characteristic flavor, a fact that is deliberately cultivated to the delight of chocolate connoisseurs worldwide.
# Lesson Date
18 January 2019
# Stage Focus
KS2
# Curriculum Area
Maths
# Whole Class/Group
Whole Class
# Learning Intentions
Improve multiplication
# Shared Success Criteria
Everyone understands even multipliers always produce even numbers
# Teaching Strategy(ies)
Use questions to guide class to noticing the pattern.
# Assessment Method(s)
Q&A
# Main Opportunities for Developing Cross-Curricular Links
1. Link into work on Egypt, specifically architecture.
2. Potential to link into science lesson next week.
# Differentiation
None.
# Resources
- Times Tables Rockstars (Online)
- Standard workbooks.
- Classroom wall charts.
# Opening
Mental arithmetic
# Opening: Teacher Role
Provide questions
# Opening: Timing
5 minutes
# Development
Exercises, group and individual. Group work on computer; individual work on counters.
# Development: Teacher Role
Set tasks. Provide support.
# Development: Timing
30 minutes
# Close
Group review
# Close: Teacher Role
Provide questions and guide discussion.
# Close: Timing
10 minutes
# Which children did well – what and why?
John Smith. He spotted the pattern without being prompted.
# Which children did not do well – what and why?
Fred Jones. He was disruptive to the other children in his group.
# What incidental learning, if any, occurred?
Some discussion about multiplication of two odd numbers and whether the result was odd, even or potentially either.
# What might be the next steps for children’s learning?
Look at same topic, but from a division perspective.
The kelvin (K) is the base unit of thermodynamic temperature in the SI system of units. It is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. One kelvin corresponds to an interval of one degree on the Celsius (centigrade) scale, so that the freezing and boiling points of water, at standard pressure, are about 273 K and 373 K, respectively. The temperature of absolute zero is 0 K (−273.15°C).
The unit is named after the Scottish physicist Lord Kelvin (William Thomson).
[Figure: Comparison of the Fahrenheit, Celsius, and Kelvin scales]
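As a quick illustration of how the scales relate, the following Python sketch (not part of the original definition; the function names are ours) converts between kelvins, degrees Celsius, and degrees Fahrenheit using the standard offsets.

```python
# Hedged illustration: simple conversions between the Kelvin, Celsius,
# and Fahrenheit scales. A step of one kelvin equals a step of one
# degree Celsius; the two scales differ only by a fixed offset.

def kelvin_to_celsius(kelvin: float) -> float:
    return kelvin - 273.15

def celsius_to_kelvin(celsius: float) -> float:
    return celsius + 273.15

def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9.0 / 5.0 + 32.0

if __name__ == "__main__":
    # Absolute zero, ice point, triple point of water, steam point.
    for k in (0.0, 273.15, 273.16, 373.15):
        c = kelvin_to_celsius(k)
        print(f"{k:7.2f} K = {c:8.2f} °C = {celsius_to_fahrenheit(c):8.2f} °F")
```

Running it prints absolute zero, the ice point, the triple point of water, and the steam point on all three scales.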
Species Status and Fact Sheet
DESCRIPTION: The whooping crane is the tallest North American bird. Males, which may approach 1.5 meters in height, are larger than females. Adults are snowy white except for black primary feathers on the wings and a bare red face and crown. The bill is a dark olive-gray, which becomes lighter during the breeding season. The eyes are yellow and the legs and feet are gray-black. Immature cranes are a reddish cinnamon color that results in a mottled appearance as the white feather bases extend. The juvenile plumage is gradually replaced through the winter months and becomes predominantly white by the following spring as the dark red crown and face appear. Yearlings achieve the typical adult appearance by late in their second summer or fall. The life span is estimated to be 22 to 24 years in the wild. Whooping cranes are omnivorous feeders. They feed on insects, frogs, rodents, small birds, minnows, and berries in the summer. In the winter, they focus predominantly on animal foods, especially blue crabs and clams. They also forage for acorns, snails, crayfish and insects in upland areas.
REPRODUCTION AND DEVELOPMENT: Whooping cranes are monogamous and form life-long pair bonds but will remate following the death of a mate. Whooping cranes return to the same breeding territory in Wood Buffalo National Park, Canada, in April and nest in the same general area each year. They construct nests of bulrush and lay one to three eggs (usually two) in late April and early May. The incubation period is about 29 to 31 days. Whooping cranes will renest if the first clutch is lost or destroyed before mid-incubation. Both sexes share incubation and brood-rearing duties. Despite the fact that most pairs lay two eggs, seldom does more than one chick reach fledging. Autumn migration begins in mid-September, and most birds arrive on the wintering grounds of Aransas National Wildlife Refuge on the Texas Gulf Coast by late-October to mid-November. Whooping cranes migrate singly, in pairs, in family groups or in small flocks, and are sometimes accompanied by sandhill cranes. They are diurnal migrants, stopping regularly to rest and feed, and use traditional migration staging areas. On the wintering grounds, pairs and family groups occupy and defend territories. Subadults and unpaired adult whooping cranes form separate flocks that use the same habitat but remain outside occupied territories. Subadults tend to winter in the area where they were raised their first year, and paired cranes often locate their first winter territories near their parents' winter territory. Spring migration is preceded by dancing, unison calling, and frequent flying. Family groups and pairs are the first to leave the refuge in late-March to mid-April.
Juveniles and subadults return to summer in the vicinity of their natal area, but are chased away by the adults during migration or shortly after arrival on the breeding grounds. Only one out of four hatched chicks survive to reach the wintering grounds. Whooping cranes generally do not produce fertile eggs until age 4.
RANGE AND POPULATION LEVEL:
Historic: The historic range of the whooping crane once extended from the Arctic coast south to central Mexico, and from Utah east to New Jersey, into South Carolina, Georgia, and Florida. The historic breeding range once extended across the north-central United States and in the Canadian provinces, Manitoba, Saskatchewan, and Alberta. A separate non-migratory breeding population occurred in southwestern Louisiana.
Aransas/Wood Buffalo Population: The current nesting range of the self-sustaining natural wild population is restricted to Wood Buffalo National Park in Alberta and the Northwest Territories, Canada, and the current wintering grounds of this population are restricted to the Texas Gulf Coast at Aransas National Wildlife Refuge and vicinity. The population is experiencing a gradual positive trend overall, although some years exhibit stationary or negative results. In January 2000, there were 187 individuals in the flock, including 51 nesting pairs.
Rocky Mountain Experiment: In 1975, an effort to establish a second, self-sustaining migratory flock was initiated by transferring wild whooping crane eggs from Wood Buffalo National Park to the nests of greater sandhill cranes at Grays Lake National Wildlife Refuge in Idaho. This Rocky Mountain population peaked at only 33 birds in 1985. The experiment terminated in 1989 because the birds were not pairing and the mortality rate was too high to establish a self-sustaining population. In 1997, the remaining birds in the population were designated as experimental, non-essential to allow for greater management flexibility and to begin pilot studies on developing future reintroduction methods. In 2001, there were only two remaining whooping cranes in this population.
Captive Populations: As of March 2001, there were 120 captive whooping cranes held at six facilities. Four of these facilities (Patuxent Wildlife Research Center, the International Crane Foundation, the Calgary Zoo, and the San Antonio Zoo) have successful breeding programs. The remaining facilities, the Lowry Park Zoo and the Audubon Institute, currently house cranes for rehabilitative and educational purposes. Chicks produced at the captive facilities either remain in captivity to maintain the health and genetic diversity of the captive flock or are reared for release to the wild in the experimental reintroduction programs.
Florida Experimental Nonessential Population: An experimental reintroduction of whooping cranes in Florida was initiated in 1993 to establish a non-migratory population at Kissimmee Prairie. A nonmigratory population avoids the hazards of migration, and by inhabiting a more geographically limited area than migratory cranes, individuals can more easily find compatible mates. Since 1993, 233 isolation-reared whooping cranes have been released in the area. In Spring 2000, there were 65 individuals in the project area with 10 pairs defending territories and evidence of the first successful hatching of chicks. Annual releases of chicks are expected to continue to augment this new experimental population.
Eastern Migratory Population: A second experimental non-essential population is currently being reintroduced to eastern North America. The intent is to establish a migratory flock which would summer and breed in central Wisconsin, migrate across the seven states and winter in west-central Florida. The birds are taught the migration route after being conditioned to follow costumed pilots in ultralight aircraft. Initial experiments using sandhill cranes, completed in the Fall of 2000, successfully led 11 cranes 1,250 miles from Necedah National Wildlife Refuge in Wisconsin to Chassahowitzka National Wildlife Refuge in Florida. The birds winter in Florida and then migrate back to Wisconsin on their own in the Spring.
Following this success, the first attempt to lead whooping cranes was made in 2001. Seven birds made it to Florida and the five that survived the winter returned to central Wisconsin the following spring. An additional 16 birds were successfully reintroduced to the flyway in 2002. To date, 20 of the original 24 whooping cranes reintroduced have survived and adapted to the wild. Updated information on this project is available online at http://www.bringbackthecranes.org
HABITAT: The nesting area in Wood Buffalo National Park is a poorly drained region interspersed with numerous potholes. Bulrush is the dominant emergent in the potholes used for nesting. On the wintering grounds at Aransas National Wildlife Refuge in Texas, whooping cranes use the salt marshes that are dominated by salt grass, saltwort, smooth cordgrass, glasswort, and sea ox-eye. They also forage in the interior portions of the refuge, which are gently rolling, sandy, and are characterized by oak brush, grassland, swales, and ponds. Typical plants include live oak, redbay, Bermuda grass, and bluestem. The non-migratory, Florida release site at Kissimmee Prairie includes flat, open palmetto prairie interspersed with shallow wetlands and lakes. The primary release site has shallow wetlands characterized by pickerel weed, nupher, and maiden cane. Other habitats include dry prairie and flatwoods with saw palmetto, various grasses, scattered slash pine, and scattered strands of cypress. Areas selected for the proposed eastern migratory experimental population closely mimic habitat of the naturally occurring wild population in Canada and Texas.
REASONS FOR CURRENT STATUS: The whooping crane population, estimated at 500 to 700 individuals in 1870 declined to only 16 individuals in the migratory population by 1941 as a consequence of hunting and specimen collection, human disturbance, and conversion of the primary nesting habitat to hay, pastureland, and grain production. The main threat to whooping cranes in the wild is the potential of a hurricane or contaminant spill destroying their wintering habitat on the Texas coast. Collisions with power lines and fences are known hazards to wild whooping cranes. The primary threats to captive birds are disease and parasites. Bobcat predation has been the main cause of mortality in the Florida experimental population.
MANAGEMENT AND PROTECTION: The self-sustaining wild population is protected on public lands in the nesting area at Wood Buffalo National Park in Canada and on the principal wintering area at Aransas National Wildlife Refuge in Texas. A major traditional migratory stopover is at Salt Plains National Wildlife Refuge in Oklahoma. This population is closely monitored throughout the nesting season, on the wintering grounds, and during migration. The Canadian Wildlife Service and the U.S. Fish and Wildlife Service are involved in recovery efforts under a 1990 Memorandum of Understanding (MOU), "Conservation of the Whooping Crane Related to Coordinated Management Activities." All cranes within the Rocky Mountain, Florida non-migratory, and proposed eastern migratory non-essential, experimental population areas are fully protected as a threatened species (instead of endangered), but other provisions of the Endangered Species Act are relaxed to allow for greater management flexibility as well as positive public support.
For more information please contact:Mr. Tom Stehn Whooping Crane Coordinator Aransas National Wildlife Refuge P.O. Box 100 Austwell, TX 77950 (361) 286-3559 Email: [email protected] |
There was a medieval tradition according to which the Greek philosopher Parmenides (5th century BCE) invented logic while living on a rock in Egypt. The story is pure legend, but it does reflect the fact that Parmenides was the first philosopher to use an extended argument for his views, rather than merely proposing a vision of reality. But using arguments is not the same as studying them, and Parmenides never systematically formulated or studied principles of argumentation in their own right. Indeed, there is no evidence that he was even aware of the implicit rules of inference used in presenting his doctrine.
Perhaps Parmenides’ use of argument was inspired by the practice of early Greek mathematics among the Pythagoreans. Thus, it is significant that Parmenides is reported to have had a Pythagorean teacher. But the history of Pythagoreanism in this early period is shrouded in mystery, and it is hard to separate fact from legend.
If Parmenides was not aware of general rules underlying his arguments, the same perhaps is not true for his disciple Zeno of Elea (5th century BCE). Zeno was the author of many arguments, known collectively as “Zeno’s Paradoxes,” purporting to infer impossible consequences from a non-Parmenidean view of things and so to refute such a view and indirectly to establish Parmenides’ monist position. The logical strategy of establishing a claim by showing that its opposite leads to absurd consequences is known as reductio ad absurdum. The fact that Zeno’s arguments were all of this form suggests that he recognized and reflected on the general pattern.
Other authors too contributed to a growing Greek interest in inference and proof. Early rhetoricians and Sophists—e.g., Gorgias, Hippias, Prodicus, and Protagoras (all 5th century BCE)—cultivated the art of defending or attacking a thesis by means of argument. This concern for the techniques of argument on occasion merely led to verbal displays of debating skills, what Plato called “eristic.” But it is also true that the Sophists were instrumental in bringing argumentation to the central position it came uniquely to hold in Greek thought. The Sophists were, for example, among the first people anywhere to demand that moral claims be justified by reasons.
Certain particular teachings of the Sophists and rhetoricians are significant for the early history of logic. For example, Protagoras is reported to have been the first to distinguish different kinds of sentences: questions, answers, prayers, and injunctions. Prodicus appears to have maintained that no two words can mean exactly the same thing. Accordingly, he devoted much attention to carefully distinguishing and defining the meanings of apparent synonyms, including many ethical terms.
Socrates (c. 470–399 BCE) is said to have attended Prodicus’s lectures. Like Prodicus, he pursued the definitions of things, particularly in the realm of ethics and values. These investigations, conducted by means of debate and argument as portrayed in the writings of Plato (428/427–348/347 BCE), reinforced Greek interest in argumentation and emphasized the importance of care and rigour in the use of language.
Plato continued the work begun by the Sophists and by Socrates. In the Sophist, he distinguished affirmation from negation and made the important distinction between verbs and names (including both nouns and adjectives). He remarked that a complete statement (logos) cannot consist of either a name or a verb alone but requires at least one of each. This observation indicates that the analysis of language had developed to the point of investigating the internal structures of statements, in addition to the relations of statements as a whole to one another. This new development would be raised to a high art by Plato’s pupil Aristotle (384–322 BCE).
There are passages in Plato’s writings where he suggests that the practice of argument in the form of dialogue (Platonic “dialectic”) has a larger significance beyond its occasional use to investigate a particular problem. The suggestion is that dialectic is a science in its own right, or perhaps a general method for arriving at scientific conclusions in other fields. These seminal but inconclusive remarks indicate a new level of generality in Greek speculation about reasoning.
The logical work of all these men, important as it was, must be regarded as piecemeal and fragmentary. None of them was engaged in the systematic, sustained investigation of inference in its own right; the systematic study of logic seems to have been undertaken first by Aristotle. Although Plato used dialectic as both a method of reasoning and a means of philosophical training, Aristotle established a system of rules and strategies for such reasoning. At the end of his Sophistic Refutations, he acknowledges the novelty of his enterprise. In most cases, he says, new discoveries rely on previous labours by others, so that, while those others’ achievements may be small, they are seminal. But then he adds:
Of the present inquiry, on the other hand, it was not the case that part of the work had been thoroughly done before, while part had not. Nothing existed at all. . . . [O]n the subject of deduction we had absolutely nothing else of an earlier date to mention, but were kept at work for a long time in experimental researches.
(From The Complete Works of Aristotle: The Revised Oxford Translation, ed. Jonathan Barnes, 1984, by permission of Oxford University Press.)
Aristotle’s logical writings comprise six works, known collectively as the Organon (“Tool”). The significance of the name is that logic, for Aristotle, was not one of the theoretical sciences. These were physics, mathematics, and metaphysics. Instead, logic was a tool used by all the sciences. (To say that logic is not a science in this sense is in no way to deny it is a rigorous discipline. The notion of a science was a very special one for Aristotle, most fully developed in his Posterior Analytics.)
Aristotle’s logical works, in their traditional but not chronological order, are:
- Categories, which discusses Aristotle’s 10 basic kinds of entities: substance, quantity, quality, relation, place, time, position, state, action, and passion. Although the Categories is always included in the Organon, it has little to do with logic in the modern sense.
- De interpretatione (On Interpretation), which includes a statement of Aristotle’s semantics, along with a study of the structure of certain basic kinds of propositions and their interrelations.
- Prior Analytics (two books), containing the theory of syllogistic (described below).
- Posterior Analytics (two books), presenting Aristotle’s theory of “scientific demonstration” in his special sense. This is Aristotle’s account of the philosophy of science or scientific methodology.
- Topics (eight books), an early work, which contains a study of nondemonstrative reasoning. It is a miscellany of how to conduct a good argument.
- Sophistic Refutations, a discussion of various kinds of fallacies. It was originally intended as a ninth book of the Topics.
The last two of these works present Aristotle’s theory of interrogative techniques as a universal method of knowledge seeking. The practice of such techniques in Aristotle’s day was actually competitive, and Aristotle was especially interested in strategies that could be used to “win” such “games.” Naturally, the ability to predict the “answer” that a certain line of questioning would yield represented an important advantage in such competitions. Aristotle noticed that in some cases the answer is completely predictable—viz., when it is (in modern terminology) a logical consequence of earlier answers. Thus, he was led from the study of interrogative techniques to the study of the subject matter of logic in the narrow sense—that is, of relations of logical consequence. These relations are the subject matter of the four other books of the Organon. Aristotle nevertheless continued to conceive of logical reasoning as being conducted within an interrogative framework.
This background helps to explain why for Aristotle logical inferences are psychologically necessary. According to him, when the premises of an inference are such as to “form a single opinion,” “the soul must…affirm the conclusion.” The mind of the reasoner, in other words, cannot help but adopt the conclusion of the argument. This conception distinguishes Aristotle’s logic sharply from modern logic, in which rules of inference are thought of as permitting the reasoner to draw a certain conclusion but not as psychologically compelling him to do so.
Aristotle’s logic was a term logic, in the following sense. Consider the schema: “If every β is an α and every γ is a β, then every γ is an α.” The “α,” “β,” and “γ” are variables—i.e., placeholders. Any argument that fits this pattern is a valid syllogism and, in fact, a syllogism in the form known as Barbara (on this terminology, see below Syllogisms).
The variables here serve as placeholders for terms or names. Thus, replacing “α” by “substance,” “β” by “animal,” and “γ” by “dog” in the schema yields: “If every animal is a substance and every dog is an animal, then every dog is a substance,” a syllogism in Barbara. Aristotle’s logic was a term logic in the sense that it focused on logical relations among such terms in valid inferences.
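One modern way to make the worked example concrete is to read each term as a set and “every β is an α” as set inclusion. The short Python sketch below does this; the particular set members are invented purely for illustration, and the set reading itself is a modern gloss rather than Aristotle’s own formalism.

```python
# A minimal sketch (modern gloss, not Aristotle's formalism): terms are
# modelled as finite sets and "every B is an A" as set inclusion.
# The set members below are invented solely for illustration.

substances = {"rex", "felix", "socrates", "this_stone"}
animals = {"rex", "felix", "socrates"}
dogs = {"rex"}

def every(b: set, a: set) -> bool:
    """Universal affirmative: every b is an a (b is a subset of a)."""
    return b <= a

# Premises of the syllogism in Barbara from the text:
assert every(animals, substances)  # Every animal is a substance.
assert every(dogs, animals)        # Every dog is an animal.

# Conclusion: it follows because subset inclusion is transitive.
assert every(dogs, substances)     # Every dog is a substance.
print("Barbara holds in this model.")
```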
Aristotle was the first logician to use variables. This innovation was tremendously important, since without them it would have been impossible for him to reach the level of generality and abstraction that he did.
Most of Aristotle’s logic was concerned with certain kinds of propositions that can be analyzed as consisting of (1) usually a quantifier (“every,” “some,” or the universal negative quantifier “no”), (2) a subject, (3) a copula, (4) perhaps a negation (“not”), (5) a predicate. Propositions analyzable in this way were later called categorical propositions and fall into one or another of the following forms:
- Universal affirmative: “Every β is an α.”
- Universal negative: “Every β is not an α,” or equivalently “No β is an α.”
- Particular affirmative: “Some β is an α.”
- Particular negative: “Some β is not an α.”
- Indefinite affirmative: “β is an α.”
- Indefinite negative: “β is not an α.”
- Singular affirmative: “x is an α,” where “x” refers to only one individual (e.g., “Socrates is an animal”).
- Singular negative: “x is not an α,” with “x” as before.
Sometimes, and very often in the Prior Analytics, Aristotle adopted alternative but equivalent formulations. Instead of saying, for example, “Every β is an α,” he would say, “α belongs to every β” or “α is predicated of every β.”
In syllogistic, singular propositions (affirmative or negative) were generally ignored, and indefinite affirmatives and negatives were treated as equivalent to the corresponding particular affirmatives and negatives. In the Middle Ages, propositions of types 1–4 were said to be of forms A, E, I, and O, respectively. This notation will be used below.
In the De interpretatione Aristotle discussed ways in which affirmative and negative propositions with the same subjects and predicates can be opposed to one another. He observed that when two such propositions are related as forms A and E, they cannot be true together but can be false together. Such pairs Aristotle called contraries. When the two propositions are related as forms A and O or as forms E and I or as affirmative and negative singular propositions, then it must be that one is true and the other false. These Aristotle called contradictories. He had no special term for pairs related as forms I and O, although they were later called subcontraries. Subcontraries cannot be false together, although, as Aristotle remarked, they may be true together. The same holds for indefinite affirmatives and negatives, construed as equivalent to the corresponding particular forms. Note that if a universal proposition (affirmative or negative) is true, its contradictory is false, and so the subcontrary of that contradictory is true. Thus, propositions of form A imply the corresponding propositions of form I, and those of form E imply those of form O. These last relations were later called subalternation, and the particular propositions (affirmative or negative) were said to be subalternate to the corresponding universal propositions.
Near the beginning of the Prior Analytics, Aristotle formulated several rules later known collectively as the theory of conversion. To “convert” a proposition in this sense is to interchange its subject and predicate. Aristotle observed that propositions of forms E and I can be validly converted in this way: if no β is an α, then so too no α is a β, and if some β is an α, then so too some α is a β. In later terminology, such propositions were said to be converted “simply” (simpliciter). But propositions of form A cannot be converted in this way; if every β is an α, it does not follow that every α is a β. It does follow, however, that some α is a β. Such propositions, which can be converted provided that not only are their subjects and predicates interchanged but also the universal quantifier is weakened to a particular quantifier “some,” were later said to be converted “accidentally” (per accidens). Propositions of form O cannot be converted at all; from the fact that some animal is not a dog, it does not follow that some dog is not an animal. Aristotle used these laws of conversion in later chapters of the Prior Analytics to reduce other syllogisms to syllogisms in the first figure, as described below.
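Under the same modern set reading (a gloss, not Aristotle’s own apparatus), the conversion laws can be checked mechanically over all pairs of subsets of a small universe. Note that per accidens conversion of an A proposition requires the subject term to be nonempty (existential import), which the sketch below makes explicit.

```python
# Sketch: checking the conversion laws over every pair of subsets of a
# small universe, under a set reading of the categorical forms.
# A-conversion per accidens assumes a nonempty subject term (existential
# import), which is stated explicitly in the test below.
from itertools import combinations, product

UNIVERSE = (0, 1, 2)
subsets = [set(c) for r in range(len(UNIVERSE) + 1)
           for c in combinations(UNIVERSE, r)]

def A(b, a): return b <= a            # Every b is an a
def E(b, a): return not (b & a)       # No b is an a
def I(b, a): return bool(b & a)       # Some b is an a
def O(b, a): return bool(b - a)       # Some b is not an a

for b, a in product(subsets, repeat=2):
    if E(b, a):
        assert E(a, b)                # E converts simply
    if I(b, a):
        assert I(a, b)                # I converts simply
    if A(b, a) and b:
        assert I(a, b)                # A converts per accidens
    # O does not convert: some animal is not a dog, yet it is false
    # that some dog is not an animal.
print("Conversion laws hold in every model over", UNIVERSE)
```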
Aristotle defined a syllogism as “discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so” (from The Complete Works of Aristotle: The Revised Oxford Translation, ed. Jonathan Barnes, 1984, by permission of Oxford University Press). But in practice he confined the term to arguments containing two premises and a conclusion, each of which is a categorical proposition. The subject and predicate of the conclusion each occur in one of the premises, together with a third term (the middle) that is found in both premises but not in the conclusion. A syllogism thus argues that because α and γ are related in certain ways to β (the middle) in the premises, they are related in a certain way to one another in the conclusion.
The predicate of the conclusion is called the major term, and the premise in which it occurs is called the major premise. The subject of the conclusion is called the minor term and the premise in which it occurs is called the minor premise. This way of describing major and minor terms conforms to Aristotle’s actual practice and was proposed as a definition by the 6th-century Greek commentator John Philoponus. But in one passage Aristotle put it differently: the minor term is said to be “included” in the middle and the middle “included” in the major term. This remark, which appears to have been intended to apply only to the first figure (see below), has caused much confusion among some of Aristotle’s commentators, who interpreted it as applying to all three figures.
Aristotle distinguished three different figures of syllogisms, according to how the middle is related to the other two terms in the premises. In one passage, he says that if one wants to prove α of γ syllogistically, one finds a middle β such that either α is predicated of β and β of γ (first figure), or β is predicated of both α and γ (second figure), or else both α and γ are predicated of β (third figure). All syllogisms must fall into one or another of these figures.
But there is plainly a fourth possibility, that β is predicated of α and γ of β. Many later logicians recognized such syllogisms as belonging to a separate, fourth figure. Aristotle explicitly mentioned such syllogisms but did not group them under a separate figure; his failure to do so has prompted much speculation among commentators and historians. Other logicians included these syllogisms under the first figure. The earliest to do this was Theophrastus (see below Theophrastus of Eresus), who reinterpreted the first figure in so doing.
Four figures, each with three propositions in one of four forms (A, E, I, O), yield a total of 256 possible syllogistic patterns. Each pattern is called a mood. Only 24 moods are valid, 6 in each figure. Some valid moods may be derived from others by subalternation—that is, if premises validly yield a conclusion of form A, the same premises will yield the corresponding conclusion of form I. So too with forms E and O. Such derived moods were not discussed by Aristotle; they seem to have been first recognized by Ariston of Alexandria (c. 50 BCE). In the Middle Ages they were called “subalternate” moods. Disregarding them, there are 4 valid moods in each of the first two figures, 6 in the third figure, and 5 in the fourth. Aristotle recognized all 19 of them.
Following are the valid moods, including subalternate ones, under their medieval mnemonic names (subalternate moods are marked with an asterisk):
First figure: Barbara, Celarent, Darii, Ferio, *Barbari, *Celaront
Second figure: Cesare, Camestres, Festino, Baroco, *Cesaro, *Camestros
Third figure: Darapti, Disamis, Datisi, Felapton, Bocardo, Ferison
Fourth figure: Bramantip, Camenes, Dimaris, Fesapo, Fresison, *Camenos
The sequence of vowels in each name indicates the sequence of categorical propositions in the mood in the order: major, minor, conclusion. Thus, for example, Celarent is a first-figure syllogism with an E-form major, A-form minor, and E-form conclusion.
If one assumes the nonsubalternate moods of the first figure, then, with two exceptions, all valid moods in the other figures can be proved by “reducing” them to one of those “axiomatic” first-figure moods. This reduction shows that, if the premises of the reducible mood are true, then it follows, by rules of conversion and one of the axiomatic moods, that the conclusion is true. The procedure is encoded in the medieval names:
- The initial letter is the initial letter of the first-figure mood to which the given mood is reducible. Thus, Felapton is reducible to Ferio.
- When it is not the final letter, s after a vowel means “Convert the sentence simply,” and p there means “Convert the sentence per accidens.”
- When s or p is the final letter, the conclusion of the first-figure syllogism to which the mood is reduced must be converted simply or per accidens, respectively.
- The letter m means “Change the order of the premises.”
- When it is not the first letter, c means that the syllogism cannot be directly reduced to the first figure but must be proved by reductio ad absurdum. (There are two such moods; see below.)
- The letters b and d (except as initial letters) and l, n, t, and r serve only to facilitate pronunciation.
Thus, the premises of Felapton (third figure) are “No β is an α” and “Every β is a γ.” Convert the minor premise per accidens to “Some γ is a β,” as instructed by the “p” after the second vowel. This new proposition and the major premise of Felapton form the premises of a syllogism in Ferio (first figure), the conclusion of which is “Some γ is not an α,” which is also the conclusion of Felapton. Hence, given Ferio and the rule of per accidens conversion, the premises of Felapton validly imply its conclusion. In this sense, Felapton has been “reduced” to Ferio.
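The encoding just described is mechanical enough to be captured in a few lines of code. The following Python sketch (an illustrative reconstruction, not a standard library) decodes a medieval mood name into its sequence of categorical forms, the first-figure mood it reduces toward, and the conversion or transposition steps its letters prescribe; as noted below, the figure itself is not encoded in the name.

```python
# Illustrative reconstruction (not a standard library): decoding the
# information carried by a medieval mood name. The figure of the mood
# is not encoded in the name, so it is not reported here.

def decode_mood(name: str) -> dict:
    letters = name.lower()
    forms, steps = [], []
    for i, ch in enumerate(letters):
        if ch in "aeio":
            forms.append(ch.upper())          # A, E, I, or O proposition
        elif ch in "sp" and forms:            # s/p following a vowel
            if i == len(letters) - 1:
                target = "conclusion"
            elif len(forms) == 1:
                target = "major premise"
            else:
                target = "minor premise"
            kind = "simply" if ch == "s" else "per accidens"
            steps.append(f"convert the {target} {kind}")
        elif ch == "m":
            steps.append("change the order of the premises")
        elif ch == "c" and i > 0:
            steps.append("prove indirectly (reductio ad absurdum)")
    return {
        "forms (major, minor, conclusion)": forms,
        "reduces toward the first-figure mood beginning with": name[0].upper(),
        "steps": steps,
    }

print(decode_mood("Felapton"))
# Forms E, A, O; reduces toward Ferio; convert the minor premise per accidens.
```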
The two exceptional cases, which must be proved indirectly by reductio ad absurdum, are Baroco and Bocardo. Both are reducible indirectly to Barbara in the first figure as follows: Assume the A-form premise (the major in Baroco, the minor in Bocardo). Assume the contradictory of the conclusion. These yield a syllogism in Barbara, the conclusion of which contradicts the O-form premise of the syllogism to be reduced. Thus, given Barbara as axiomatic, and given the premises of the reducible syllogism, the contradictory of its conclusion is false, so that the original conclusion is true.
Reduction and indirect proof together suffice to prove all moods not in the first figure. This fact, which Aristotle himself showed, makes his syllogistic the first deductive system in the history of logic.
Aristotle sometimes used yet another method of showing the validity of a syllogistic mood. Known as ekthesis (sometimes translated as “exposition”), it consists of choosing a particular object to represent a term—e.g., choosing one particular triangle to represent all triangles in geometric reasoning. The method of ekthesis is of great historical interest, in part because it amounts to the use of instantiation rules (rules that allow the introduction of an arbitrary individual having a certain property), which are the mainstay of modern logic. The same method was used under the same name also in Greek mathematics. Although Aristotle seems to have avoided the use of ekthesis as much as possible in his syllogistic theory, he did not manage to eliminate it completely. The likely reason for his aversion is that the method involved considering particulars and not merely general concepts. This was foreign to Aristotle’s way of thinking, according to which particulars can be grasped by sense perception but not by pure thought.
While the medieval names of the moods contain a great deal of information, they provide no way by themselves to determine to which figure a mood belongs, and so no way to reconstruct the actual form of the syllogism. Mnemonic verses were developed in the Middle Ages for this purpose.
Categorical propositions in which α is merely said to belong (or not) to some or every β are called assertoric categorical propositions; syllogisms composed solely of such categoricals are called assertoric syllogisms. Aristotle was also interested in categoricals in which α is said to belong (or not) necessarily or possibly to some or every β. Such categoricals are called modal categoricals, and syllogisms in which the component categoricals are modal are called modal syllogisms (they are sometimes called “mixed” if only one of the premises is modal).
Aristotle discussed two notions of the “possible”: (1) as what is not impossible (i.e., the opposite of which is not necessary) and (2) as what is neither necessary nor impossible (i.e., the contingent). In his modal syllogistic, the term “possible” (or “contingent”) is always used in sense 2 in syllogistic premises, but it is sometimes used in sense 1 in syllogistic conclusions if a conclusion in sense 2 would be incorrect.
Aristotle’s procedure in his modal syllogistic is to survey each valid mood of the assertoric syllogistic and then to test the several modal syllogisms that can be formed from an assertoric mood by changing one or more of its component categoricals into a modal categorical. The interpretation of this part of Aristotle’s logic, and the correctness of his arguments, have been disputed since antiquity.
Although Aristotle did not develop a full theory of propositions in tenses other than the present, there is a famous passage in the De interpretatione that was influential in later developments in this area. In chapter 9 of that work, Aristotle discussed the assertion “There will be a sea battle tomorrow.” The discussion assumes that as of now the question is still unsettled. Although there are different interpretations of the passage, Aristotle seems there to have been maintaining that although now, before the fact, it is neither true nor false that there will be a sea battle tomorrow, nevertheless it is true even now, before the fact, that there either will or will not be a sea battle tomorrow. In short, Aristotle appears to have affirmed the law of excluded middle (for any proposition replacing “p,” it is true that either p or not-p), but to have denied the principle of bivalence (that every proposition is either true or false) in the case of future contingent propositions.
Aristotle’s logic presupposes several principles that he did not explicitly formulate about logical relations among any propositions whatever, independent of the propositions’ internal analyses into categorical or any other form. For example, it presupposes that the principle “If p then q; but p; therefore q” (where p and q are replaced by any propositions) is valid. Such patterns of inference belong to what is called the logic of propositions. Aristotle’s logic is, by contrast, a logic of terms in the sense described above. A sustained study of the logic of propositions came only after Aristotle.
Aristotle’s approach to logic differs from the modern one in various ways. Perhaps the most general difference is that Aristotle did not consider verbs for being, such as einai, as ambiguous between the senses of identity (“Coriscus is Socrates”), predication (“Socrates is mortal”), existence (“Socrates is”), and subsumption (“Socrates is a man”), which in modern logic are expressed by means of different symbols or symbol combinations. In the Metaphysics, Aristotle wrote:
One man and a man are the same thing and existent man and a man are the same thing, and the doubling of words in “one man” and “one existent man” does not give any new meaning (it is clear that they are not separated either in coming to be or in ceasing to be); and similarly with “one.”
Aristotle’s refusal to recognize distinct senses of being led him into difficulties. In some cases the trouble lay in the fact that the verbs of different senses behave differently. Thus, whereas being in the sense of identity is always transitive, being in the sense of predication sometimes is not. If A is identical to B and B is identical to C, it follows that A is identical to C. But if Socrates is human and humanity is numerous, it does not follow that Socrates is numerous. In order to cope with these problems, Aristotle was forced to conclude that on different occasions some senses of einai may be absent, depending on the context. In a syllogistic premise, the context includes the two terms occurring in it. Thus, whether “every B is A” has the force “every B is an existent A” (or, “every B is an A and A exists”) depends on what A is and what can be known about it. Thus, existence was not a distinct predicate for Aristotle, though it could be part of the force of the predicate term.
In a chain of syllogisms, existential force, or the presumption of existence, flows “downward” from wider and more general terms to narrower ones. Hence, in any syllogistically organized science, it is necessary to assume the existence of only the widest term (the generic term) by which the field of the science is delineated. For all other terms of the science, existence can be proved syllogistically.
Aristotle’s treatment of existence illustrates the sense in which his logic is a logic of terms. Even existential force is carried not by the quantifiers alone but also, in the context of a syllogistically organized science, by the predicate terms contained in the syllogistic premises.
Another distinctive feature of Aristotle’s way of thinking about logical matters is that for him the typical sentences to which logical rules are supposed to apply are temporally indefinite. A sentence such as “Socrates is sitting,” for example, involves an implicit reference to the moment of utterance (“Socrates is now sitting”), so the same sentence can be both true at one moment and false at another, depending on what Socrates happens to be doing at the time in question. This variability in truth or falsehood is not found in sentences that make explicit reference to an absolute chronology, as does “Socrates is sitting at 12 noon on June 1, 400 BCE.”
Aristotle’s conception of logical sentences as temporally indefinite helps explain the intriguing discussion in chapter 9 of De interpretatione concerning whether true statements about the future—e.g., “There will be a sea battle tomorrow”—are necessarily true (because all events in the world are determined by a series of efficient causes). Aristotle’s answer has been interpreted in many ways, but the simplest interpretation is to take him to be saying that, understood as a temporally indefinite statement about the future, “there will be a sea battle tomorrow,” even if true at a certain time of utterance, is not necessary, because at some other time of utterance it might have been false. However, understood as a temporally definite statement—e.g., as equivalent to “there will be a sea battle on June 1, 400 BCE”—it is necessarily true if it is true at all, because the battle, like all events in the history of the universe, was causally determined to occur at that particular time. As Aristotle expressed the point, “What is, necessarily is when it is; but that is not to say that what is, necessarily is without qualification [haplos].”
Aristotle’s successor as head of his school at Athens was Theophrastus of Eresus (c. 371–c. 286 BCE). All Theophrastus’s logical writings are now lost, and much of what was said about his logical views by late ancient authors was attributed to both Theophrastus and his colleague Eudemus, so that it is difficult to isolate their respective contributions.
Theophrastus is reported to have added to the first figure of the syllogism the five moods that others later classified under a fourth figure. These moods were then called indirect moods of the first figure. In order to accommodate them, he had in effect to redefine the first figure as that in which the middle is the subject in one premise and the predicate in the other, not necessarily the subject in the major premise and the predicate in the minor, as Aristotle had it.
Theophrastus’s most significant departure from Aristotle’s doctrine occurred in modal syllogistic. He abandoned Aristotle’s notion of the possible as neither necessary nor impossible and adopted Aristotle’s alternative notion of the possible as simply what is not impossible. This allowed him to effect a considerable simplification in Aristotle’s modal theory. Thus, his conversion laws for modal categoricals were exact parallels to the corresponding laws for assertoric categoricals. In particular, for Theophrastus “problematic” universal negatives (“No β is possibly an α”) can be simply converted. Aristotle had denied this.
In addition, Theophrastus adopted a rule that the conclusion of a valid modal syllogism can be no stronger than its weakest premise. (Necessity is stronger than possibility, and an assertoric claim without any modal qualification is intermediate between the two). This rule simplifies modal syllogistic and eliminates several moods that Aristotle had accepted. Yet Theophrastus himself allowed certain modal moods that, combined with the principle of indirect proof (which he likewise accepted), yield results that perhaps violate this rule.
Theophrastus also developed a theory of inferences involving premises of the form “α is universally predicated of everything of which γ is universally predicated” and of related forms. Such propositions he called prosleptic propositions, and inferences involving them were termed prosleptic syllogisms. Greek proslepsis can mean “something taken in addition,” and Theophrastus claimed that propositions like these implicitly contain a third, indefinite term, in addition to the two definite terms (“α” and “γ” in the example).
The term prosleptic proposition appears to have originated with Theophrastus, although Aristotle discussed such propositions briefly in his Prior Analytics without exploring their logic in detail. The implicit third term in a prosleptic proposition Theophrastus called the middle. After an analogy with syllogistic for categorical propositions, he distinguished three “figures” for prosleptic propositions and syllogisms, based on the position of the implicit middle. The prosleptic proposition “α is universally predicated of everything that is universally predicated of γ” belongs to the first figure and can be a premise in a first-figure prosleptic syllogism. “Everything predicated universally of α is predicated universally of γ” belongs to the second figure and can be a premise in a second-figure syllogism, and so too “α is universally predicated of everything of which γ is universally predicated” for the third figure. Thus, for example, the following is a prosleptic syllogism in the third figure: “α is universally affirmed of everything of which γ is universally affirmed; γ is universally affirmed of β; therefore, α is universally affirmed of β.”
Theophrastus observed that certain prosleptic propositions are equivalent to categoricals and differ from them only “potentially” or “verbally.” Some late ancient authors claimed that this made prosleptic syllogisms superfluous. But in fact not all prosleptic propositions are equivalent to categoricals.
Theophrastus is also credited with investigations into hypothetical syllogisms. A hypothetical proposition, for Theophrastus, is a proposition made up of two or more component propositions (e.g., “p or q,” or “if p then q”), and a hypothetical syllogism is an inference containing at least one hypothetical proposition as a premise. The extent of Theophrastus’s work in this area is uncertain, but it appears that he investigated a class of inferences called totally hypothetical syllogisms, in which both premises and the conclusion are conditionals. This class would include, for example, syllogisms such as “If α then β; if β then γ; therefore, if α then γ,” or “if α then β; if not α then γ; therefore, if not β then γ.” As with his prosleptic syllogisms, Theophrastus divided these totally hypothetical syllogisms into three “figures,” after an analogy with categorical syllogistic.
Theophrastus was the first person in the history of logic known to have examined the logic of propositions seriously. Still, there was no sustained investigation in this area until the period of the Stoics.
Throughout the ancient world, the logic of Aristotle and his followers was one main stream. But there was also a second tradition of logic, that of the Megarians and the Stoics.
The Megarians were followers of Euclid (or Euclides) of Megara (c. 430–c. 360 BCE), a pupil of Socrates. In logic the most important Megarians were Diodorus Cronus (4th century BCE) and his pupil Philo of Megara. The Stoics were followers of Zeno of Citium (c. 336–c. 265 BCE). By far the most important Stoic logician was Chrysippus (c. 279–206 BCE). The influence of Megarian on Stoic logic is indisputable, but many details are uncertain, since all but fragments of the writings of both groups are lost.
The Megarians were interested in logical puzzles. Many paradoxes have been attributed to them, including the “liar paradox” (someone says that he is lying; is his statement true or false?), the discovery of which has sometimes been credited to Eubulides of Miletus, a pupil of Euclid of Megara. The Megarians also discussed how to define various modal notions and debated the interpretation of conditional propositions.
Diodorus Cronus originated a mysterious argument called the Master Argument. It claimed that the following three propositions are jointly inconsistent, so that at least one of them is false:
1. Everything true about the past is now necessary. (That is, the past is now settled, and there is nothing to be done about it.)
2. The impossible does not follow from the possible.
3. There is something that is possible, and yet neither is nor will be true. (That is, there are possibilities that will never be realized.)
It is unclear exactly what inconsistency Diodorus saw among these propositions. Whatever it was, Diodorus was unwilling to give up 1 or 2, and so rejected 3. That is, he accepted the opposite of 3, namely: Whatever is possible either is or will be true. In short, there are no possibilities that are not realized now or in the future. It has been suggested that the Master Argument was directed against Aristotle’s discussion of the sea battle tomorrow in the De interpretatione.
Diodorus also proposed an interpretation of conditional propositions. He held that the proposition “If p, then q” is true if and only if it neither is nor ever was possible for the antecedent p to be true and the consequent q to be false simultaneously. Given Diodorus’s notion of possibility, this means that a true conditional is one that at no time (past, present, or future) has a true antecedent and a false consequent. Thus, for Diodorus a conditional does not change its truth value; if it is ever true, it is always true. But Philo of Megara had a different interpretation. For him, a conditional is true if and only if it does not now have a true antecedent and a false consequent. This is exactly the modern notion of material implication. In Philo’s view, unlike Diodorus’s, conditionals may change their truth value over time.
These and other theories of modality and conditionals were discussed not only by the Megarians but by the Stoics as well. Stoic logicians, like the Megarians, were not especially interested in scientific demonstration in Aristotle’s special sense. They were more concerned with logical issues arising from debate and disputation: fallacies, paradoxes, forms of refutation. Aristotle had also written about such things, but his interests gradually shifted to his special notion of science. The Stoics kept their interest focused on disputation and developed their studies in this area to a high degree.
Unlike the Aristotelians, the Stoics developed propositional logic to the neglect of term logic. They did not produce a system of logical laws arising from the internal structure of simple propositions, as Aristotle had done with his account of opposition, conversion, and syllogistic for categorical propositions. Instead, they concentrated on inferences from hypothetical propositions as premises. Theophrastus had already taken some steps in this area, but his work had little influence on the Stoics.
Stoic logicians studied the logical properties and defining features of words used to combine simpler propositions into more complex ones. In addition to the conditional, which had already been explored by the Megarians, they investigated disjunction (“or”) and conjunction (“and”), along with words such as “since” and “because.” Some of these they defined truth-functionally (i.e., solely in terms of the truth or falsehood of the propositions they combined). For example, they defined a disjunction as true if and only if exactly one disjunct is true (the modern “exclusive” disjunction). They also knew “inclusive” disjunction (defined as true when at least one disjunct is true), but this was not widely used. More important, the Stoics seem to have been the first to show how some of these truth-functional words may be defined in terms of others.
Unlike Aristotle, who typically formulated his syllogisms as conditional propositions, the Stoics regularly presented principles of logical inference in the form of schematic arguments. While Aristotle had used Greek letters as variables replacing terms, the Stoics used ordinal numerals as variables replacing whole propositions. Thus: “Either the first or the second; but not the second; therefore, the first.” Here the expressions “the first” and “the second” are variables or placeholders for propositions, not terms.
Chrysippus regarded five valid inference schemata as basic or indemonstrable. They are:
1. If the first, then the second; but the first; therefore, the second.
2. If the first, then the second; but not the second; therefore, not the first.
3. Not both the first and the second; but the first; therefore, not the second.
4. Either the first or the second; but the first; therefore, not the second.
5. Either the first or the second; but not the second; therefore, the first.
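Read truth-functionally in the modern way, with Philo’s material conditional for “if…then” and the Stoics’ exclusive disjunction for “either…or,” the five indemonstrables can be verified by brute-force truth tables. The Python sketch below is such a modern reconstruction, not the Stoics’ own presentation.

```python
# Modern reconstruction (not the Stoics' own presentation): brute-force
# truth-table check of the five indemonstrables, reading "if ... then"
# as Philo's material conditional and "either ... or" as the Stoics'
# exclusive disjunction.
from itertools import product

def implies(p, q): return (not p) or q     # material conditional
def xor(p, q): return p != q               # exclusive disjunction

SCHEMATA = {
    "1: if p then q; p; therefore q":          lambda p, q: (implies(p, q), p, q),
    "2: if p then q; not q; therefore not p":  lambda p, q: (implies(p, q), not q, not p),
    "3: not both p and q; p; therefore not q": lambda p, q: (not (p and q), p, not q),
    "4: either p or q; p; therefore not q":    lambda p, q: (xor(p, q), p, not q),
    "5: either p or q; not q; therefore p":    lambda p, q: (xor(p, q), not q, p),
}

for name, schema in SCHEMATA.items():
    valid = all(conclusion
                for p, q in product((True, False), repeat=2)
                for first, second, conclusion in [schema(p, q)]
                if first and second)
    print(name, "->", "valid" if valid else "invalid")
```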
Using these five “indemonstrables,” Chrysippus proved the validity of many further inference schemata. Indeed, the Stoics claimed (falsely, it seems) that all valid inference schemata could be derived from the five indemonstrables.
The differences between Aristotelian and Stoic logic were ones of emphasis, not substantive theoretical disagreements. At the time, however, it appeared otherwise. Perhaps because of their real disputes in other areas, Aristotelians and Stoics at first saw themselves as holding incompatible theories in logic as well. But by the late 1st century BCE, an eclectic movement had begun to weaken these hostilities. Thereafter, the two traditions were combined in commentaries and handbooks for general education.
After Chrysippus, little important logical work was done in Greek. But the commentaries and handbooks that were written did serve to consolidate the previous traditions and in some cases are the only extant sources for the doctrines of earlier writers. Among late authors, Galen the physician (129–c. 199 CE) wrote several commentaries, now lost, and an extant Introduction to Dialectic. Galen observed that the study of mathematics and logic was important to a medical education, a view that had considerable influence in the later history of logic, particularly in the Arab world. Tradition has credited Galen with “discovering” the fourth figure of the Aristotelian syllogism, although in fact he explicitly rejected it.
Alexander of Aphrodisias (fl. c. 200 CE) wrote extremely important commentaries on Aristotle’s writings, including the logical works. Other important commentators include Porphyry of Tyre (c. 232–before 306), Ammonius Hermeiou (5th century), Simplicius (6th century), and John Philoponus (6th century). Sextus Empiricus (late 2nd–early 3rd century) and Diogenes Laërtius (probably early 3rd century) are also important sources for earlier writers. Significant contributions to logic were not made again in Europe until the 12th century.
As the Greco-Roman world disintegrated and gave way to the Middle Ages, knowledge of Greek declined in the West. Nevertheless, several authors served as transmitters of Greek learning to the Latin world. Among the earliest of them, Cicero (106–43 BCE) introduced Latin translations for technical Greek terms. Although his translations were not always finally adopted by later authors, he did make it possible to discuss logic in a language that had not previously had any precise vocabulary for it. In addition, he preserved much information about the Stoics. In the 2nd century CE Lucius Apuleius passed on some knowledge of Greek logic in his De philosophia rationali (“On Rational Philosophy”).
In the 4th century Marius Victorinus produced Latin translations of Aristotle’s Categories and De interpretatione and of Porphyry of Tyre’s Isagoge (“Introduction,” on Aristotle’s Categories), although these translations were not very influential. He also wrote logical treatises of his own. A short De dialectica (“On Dialectic”), doubtfully attributed to St. Augustine (354–430), shows evidence of Stoic influence, although it had little influence of its own. The pseudo-Augustinian Decem categoriae (“Ten Categories”) is a late 4th-century Latin paraphrase of a Greek compendium of the Categories. In the late 5th century Martianus Capella’s allegorical De nuptiis Philologiae et Mercurii (The Marriage of Philology and Mercury) contains “On the Art of Dialectic” as book IV.
The first truly important figure in medieval logic was Boethius (480–524/525). Like Victorinus, he translated Aristotle’s Categories and De interpretatione and Porphyry’s Isagoge, but his translations were much more influential. He also seems to have translated the rest of Aristotle’s Organon, except for the Posterior Analytics, but the history of those translations and their circulation in Europe is much more complicated; they did not come into widespread use until the first half of the 12th century. In addition, Boethius wrote commentaries and other logical works that were of tremendous importance throughout the Latin Middle Ages. Until the 12th century his writings and translations were the main sources for medieval Europe’s knowledge of logic. In the 12th century they were known collectively as the Logica vetus (“Old Logic”).
Between the time of the Stoics and the revival of logic in 12th-century Europe, the most important logical work was done in the Arab world. Arabic interest in logic lasted from the 9th to the 16th century, although the most important writings were done well before 1300.
Syrian Christian authors in the late 8th century were among the first to introduce Alexandrian scholarship to the Arab world. Through Galen’s influence, these authors regarded logic as important to the study of medicine. (This link with medicine continued throughout the history of Arabic logic and, to some extent, later in medieval Europe.) By about 850, at least Porphyry’s Isagoge and Aristotle’s Categories, De interpretatione, and Prior Analytics had been translated via Syriac into Arabic. Between 830 and 870 the philosopher and scientist al-Kindī (c. 805–873) produced in Baghdad what seem to have been the first Arabic writings on logic that were not translations. But these writings, now lost, were probably mere summaries of others’ work.
By the late 9th century, the school of Baghdad was the focus of logic studies in the Arab world. Most of the members of this school were Nestorian or Jacobite Christians, but the Muslim al-Fārābī (c. 873–950) wrote important commentaries and other logical works there that influenced all later Arabic logicians. Many of these writings are now lost, but among the topics al-Fārābī discussed were future contingents (in the context of Aristotle’s De interpretatione, chapter 9), the number and relation of the categories, the relation between logic and grammar, and non-Aristotelian forms of inference. This last topic showed the influence of the Stoics. Al-Fārābī, along with Avicenna and Averroës, was among the best logicians the Arab world produced.
By 1050 the school of Baghdad had declined. The 11th century saw very few Arabic logicians, with one distinguished exception: the Persian Ibn Sīnā, or Avicenna (980–1037), perhaps the most original and important of all Arabic logicians. Avicenna abandoned the practice of writing on logic in commentaries on the works of Aristotle and instead produced independent treatises. He sharply criticized the school of Baghdad for what he regarded as their slavish devotion to Aristotle. Among the topics Avicenna investigated were quantification of the predicates of categorical propositions, the theory of definition and classification, and an original theory of “temporally modalized” syllogistic, in which premises include such modifiers as “at all times,” “at most times,” and “at some time.”
The Persian mystic and theologian al-Ghazālī, or Algazel (1058–1111), followed Avicenna’s logic, although he differed sharply from Avicenna in other areas. Al-Ghazālī was not a significant logician but is important nonetheless because of his influential defense of the use of logic in theology.
In the 12th century the most important Arab logician was Ibn Rushd, or Averroës (1126–98). Unlike the Persian followers of Avicenna, Averroës worked in Moorish Spain, where he revived the tradition of al-Fārābī and the school of Baghdad by writing penetrating commentaries on Aristotle’s works, including the logical ones. Such was the stature of these excellent commentaries that, when they were translated into Latin in the 1220s or 1230s, Averroës was often referred to simply as “the Commentator.”
After Averroës, logic declined in western Islām because of the antagonism felt to exist between logic and philosophy on the one hand and Muslim orthodoxy on the other. But in eastern Islām, in part because of the work of al-Ghazālī, logic was not regarded as being so closely linked with philosophy. Instead, it was viewed as a tool that could be profitably used in any field of study, even (as al-Ghazālī had done) on behalf of theology against the philosophers. Thus, the logical tradition continued in Persia long after it died out in Spain. The 13th century produced a large number of logical writings, but these were mostly unoriginal textbooks and handbooks. After about 1300, logical study was reduced to producing commentaries on these earlier, already derivative handbooks.
Except in the Arabic world, there was little activity in logic between the time of Boethius and the 12th century. Certainly Byzantium produced nothing of note. In Latin Europe there were a few authors, including Alcuin of York (c. 730–804) and Garland the Computist (flourished c. 1040). But it was not until late in the 11th century that serious interest in logic revived. St. Anselm of Canterbury (1033–1109) discussed semantical questions in his De grammatico, and investigated the notions of possibility and necessity in surviving fragments, but these texts did not have much influence. More important was Anselm’s general method of using logical techniques in theology. His example set the tone for much that was to follow.
The first important Latin logician after Boethius was Peter Abelard (1079–1142). He wrote three sets of commentaries and glosses on Porphyry’s Isagoge and Aristotle’s Categories and De interpretatione; these were the Introductiones parvulorum (also containing glosses on some writings of Boethius), Logica “Ingredientibus,” and Logica “Nostrorum petitioni sociorum” (on the Isagoge only), together with the independent treatise Dialectica (extant in part). These works show a familiarity with Boethius but go far beyond him. Among the topics discussed insightfully by Abelard are the role of the copula in categorical propositions, the effects of different positions of the negation sign in categorical propositions, modal notions such as “possibility,” future contingents (as treated, for example, in chapter 9 of Aristotle’s De interpretatione), and conditional propositions or “consequences.”
Abelard’s fertile investigations raised logical study in medieval Europe to a new level. His achievement is all the more remarkable, since the sources at his disposal were the same ones that had been available in Europe for the preceding 600 years: Aristotle’s Categories and De interpretatione and Porphyry’s Isagoge, together with the commentaries and independent treatises by Boethius.
Even in Abelard’s lifetime, however, things were changing. After about 1120, Boethius’s translations of Aristotle’s Prior Analytics, Topics, and Sophistic Refutations began to circulate. Sometime in the second quarter of the 12th century, James of Venice translated the Posterior Analytics from Greek, thus making the whole of the Organon available in Latin. These newly available Aristotelian works were known collectively as the Logica nova (“New Logic”). In a flurry of activity, others in the 12th and 13th centuries produced additional translations of these works and of Greek and Arabic commentaries on them, along with many other philosophical writings and other works from Greek and Arabic sources.
The Sophistic Refutations proved an important catalyst in the development of medieval logic. It is a little catalog of fallacies, how to avoid them, and how to trap others into committing them. The work is very sketchy. Many kinds of fallacies are not discussed, and those that are could have been treated differently. Unlike the Posterior Analytics, the Sophistic Refutations was relatively easy to understand. And unlike the Prior Analytics—where, except for modal syllogistic, Aristotle had left little to be done—there was obviously still much to be investigated about fallacies. Moreover, the discovery of fallacies was especially important in theology, particularly in the doctrines of the Trinity and the Incarnation. In short, the Sophistic Refutations was tailor-made to exercise the logical ingenuity of the 12th century. And that is exactly what happened.
The Sophistic Refutations, and the study of fallacy it generated, produced an entirely new logical literature. A genre of sophismata (“sophistical”) treatises developed that investigated fallacies in theology, physics, and logic. The theory of “supposition” (see below The theory of supposition) also developed out of the study of fallacies. Whole new kinds of treatises were written on what were called “the properties of terms,” semantic properties important in the study of fallacy. In addition, a new genre of logical writings developed on the topic of “syncategoremata”—expressions such as “only,” “inasmuch as,” “besides,” “except,” “lest,” and so on, which posed quite different logical problems than did the terms and logical particles in traditional categorical propositions or in the simpler kind of “hypothetical” propositions inherited from the Stoics. The study of valid inference generated a literature on “consequences” that went into far more detail than any previous studies. By the late 12th or early 13th century, special treatises were devoted to insolubilia (semantic paradoxes such as the liar paradox, “This sentence is false”) and to a kind of disputation called “obligationes,” the exact purpose of which is still in question.
All these treatises, and the logic contained in them, constitute the peculiarly medieval contribution to logic. It is primarily on these topics that medieval logicians exercised their best ingenuity. Such treatises, and their logic, were called the Logica moderna (“Modern Logic”), or “terminist” logic, because they laid so much emphasis on the “properties of terms.” These developments began in the mid-12th century and continued to the end of the Middle Ages.
In the 13th century the sophismata literature continued and deepened. In addition, several authors produced summary works that surveyed the whole field of logic, including the “Old” and “New” logic as well as the new developments in the Logica moderna. These compendia are often called “summulae” (“little summaries”), and their authors “summulists.” Among the most important of the summulists are: (1) Peter of Spain (also known as Petrus Hispanus; later Pope John XXI), who wrote a Tractatus more commonly known as Summulae logicales (“Little Summaries of Logic”) probably in the early 1230s; it was used as a textbook in some late medieval universities; (2) Lambert of Auxerre, who wrote a Logica sometime between 1253 and 1257; and (3) William of Sherwood, who produced Introductiones in logicam (Introduction to Logic) and other logical works sometime about the mid-century.
Despite his significance in other fields, Thomas Aquinas is of little importance in the history of logic. He did write a treatise on modal propositions and another one on fallacies. But there is nothing especially original in these works; they are early writings and are confined to passing on received doctrine. He also wrote an incomplete commentary on the De interpretatione, but it is of no great logical significance.
About the end of the 13th century, John Duns Scotus (c. 1266–1308) composed several works on logic. There also are some very interesting logical texts from the same period that have been falsely attributed to Scotus and were published in the 17th century among his authentic works. These are now referred to as the works of “the Pseudo-Scotus,” although they may not all be by the same author.
The first half of the 14th century saw the high point of medieval logic. Much of the best work was done by people associated with the University of Oxford. Among them were William of Ockham (c. 1285–1347), the author of an important Summa logicae (“Summary of Logic”) and other logical writings. Perhaps because of his importance in other areas of medieval thought, Ockham’s originality in logic has sometimes been exaggerated. But there is no doubt that he was one of the most important logicians of the century. Another Oxford logician was Walter Burley (or Burleigh), an older contemporary of Ockham. Burley was a bitter opponent of Ockham in metaphysics. He wrote a work De puritate artis logicae (“On the Purity of the Art of Logic”; in two versions), apparently in response and opposition to Ockham’s views, although on some points Ockham simply copied Burley almost verbatim.
Slightly later, on the Continent, Jean Buridan was a very important logician at the University of Paris. He wrote mainly during the 1330s and ’40s. In many areas of logic and philosophy, his views were close to Ockham’s, although the extent of Ockham’s influence on Buridan is not clear. Buridan’s Summulae de dialectica (“Little Summaries of Dialectic”), intended for instructional use at Paris, was largely an adaptation of Peter of Spain’s Summulae logicales. He appears to have been the first to use Peter of Spain’s text in this way. Originally meant as the last treatise of his Summulae de dialectica, Buridan’s extremely interesting Sophismata (published separately in early editions) discusses many issues in semantics and philosophy of logic. Among Buridan’s pupils was Albert of Saxony (died 1390), the author of a Perutilis logica (“A Very Useful Logic”) and later first rector of the University of Vienna. Albert was not an especially original logician, although his influence was by no means negligible.
Many of the characteristically medieval logical doctrines in the Logica moderna centred on the notion of “supposition” (suppositio). Already by the late 12th century, the theory of supposition had begun to form. In the 13th century, special treatises on the topic multiplied. The summulists all discussed it at length. Then, after about 1270, relatively little was heard about it. In France, supposition theory was replaced by a theory of “speculative grammar” or “modism” (so called because it appealed to “modes of signifying”). Modism was not so popular in England, but there too the theory of supposition was largely neglected in the late 13th century. In the early 14th century, the theory reemerged both in England and on the Continent. Burley wrote a treatise on the topic in about 1302, and Buridan revived the theory in France in the 1320s. Thereafter the theory remained the main vehicle for semantic analysis until the end of the Middle Ages.
Supposition theory, at least in its 14th-century form, is best viewed as two theories under one name. The first, sometimes called the theory of “supposition proper,” is a theory of reference and answers the question “To what does a given occurrence of a term refer in a given proposition?” In general (the details depend on the author), three main types of supposition were distinguished: (1) personal supposition (which, despite the name, need not have anything to do with persons), (2) simple supposition, and (3) material supposition. These types are illustrated, respectively, by the occurrences of the term horse in the statements “Every horse is an animal” (in which the term horse refers to individual horses), “Horse is a species” (in which the term refers to a universal), and “Horse is a monosyllable” (in which it refers to the spoken or written word). The theory was elaborated and refined by considering how reference may be broadened by tense and modal factors (for example, the term horse in “Every horse will die,” which may refer to future as well as present horses) or narrowed by adjectives or other factors (for example, horse in “Every horse in the race is less than two years old”).
The second part of supposition theory applies only to terms in personal supposition. It divides personal supposition into several types, including (again the details vary according to the author): (1) determinate (e.g., horse in “Some horse is running”), (2) confused and distributive (e.g., horse in “Every horse is an animal”), and (3) merely confused (e.g., animal in “Every horse is an animal”). These types were described in terms of a notion of “descent to (or ascent from) singulars.” For example, in the statement “Every horse is an animal,” one can “descend” under the term horse to: “This horse is an animal, and that horse is an animal, and so on,” but one cannot validly “ascend” from “This horse is an animal” to the original proposition. There are many refinements and complications.
The purpose of this second part of the theory of supposition has been disputed. Since the question of what it is to which a given occurrence of a term refers is already answered in the first part of supposition theory, the purpose of this second part must have been different. The main suggestions are (1) that it was devised to help detect and diagnose fallacies, (2) that it was intended as a theory of truth conditions for propositions or as a theory of analyzing the senses of propositions, and (3) that, like the first half of supposition theory, it originated as part of an account of reference, but, once its theoretical insufficiency for that task was recognized, it was gradually divorced from that first part of supposition theory and by the early 14th century was left as a conservative vestige that continued to be disputed but no longer had any question of its own to answer. There are difficulties with all of these suggestions. The theory of supposition survived beyond the Middle Ages and was frequently applied not only in logical discussions but also in theology and in the natural sciences.
In addition to supposition and its satellite theories, several logicians during the 14th century developed a sophisticated theory of “connotation” (connotatio or appellatio; in which the term black, for instance, not only refers to black things but also “connotes” the quality, blackness, that they possess) and a subtle theory of “mental language,” in which tools of semantic analysis were applied to epistemology and the philosophy of mind. Important treatises on insolubilia and obligationes, as well as on the theory of consequence or inference, continued to be produced in the 14th century, although the main developments there were completed by mid-century.
Medieval logicians continued the tradition of modal syllogistic inherited from Aristotle. In addition, modal factors were incorporated into the theory of supposition. But the most important developments in modal logic occurred in three other contexts: (1) whether propositions about future contingent events are now true or false (Aristotle had raised this question in De interpretatione, chapter 9), (2) whether a future contingent event can be known in advance, and (3) whether God (who, the tradition says, cannot be acted upon causally) can know future contingent events. All these issues link logical modality with time. Thus, Peter Aureoli (c. 1280–1322) held that if something is in fact ϕ (“ϕ” is some predicate) but can be not-ϕ, then it is capable of changing from being ϕ to being not-ϕ.
Duns Scotus in the late 13th century was the first to sever the link between time and modality. He proposed a notion of possibility that was not linked with time but based purely on the notion of semantic consistency. This radically new conception had a tremendous influence on later generations down to the 20th century. Shortly afterward, Ockham developed an influential theory of modality and time that reconciles the claim that every proposition is either true or false with the claim that certain propositions about the future are genuinely contingent.
Most of the main developments in medieval logic were in place by the mid-14th century. On the Continent, the disciples of Jean Buridan—Albert of Saxony (c. 1316–90), Marsilius of Inghen (died 1399), and others—continued and developed the work of their predecessors. In 1372 Pierre d’Ailly wrote an important work, Conceptus et insolubilia (Concepts and Insolubles), which appealed to a sophisticated theory of mental language in order to solve semantic paradoxes such as the liar paradox.
In England the second half of the 14th century produced several logicians who consolidated and elaborated earlier developments. Their work was not very original, although it was often extremely subtle. Many authors during this period compiled brief summaries of logical topics intended as textbooks. The doctrine in these little summaries is remarkably uniform, which makes it difficult to determine who their authors were. By the early 15th century, informal collections of these treatises had been gathered under the title Libelli sophistarum (“Little Books for Arguers”)—one collection for Oxford and a second for Cambridge; both were printed in early editions. Among the notable logicians of this period are Henry Hopton (flourished 1357), John Wycliffe (c. 1330–84), Richard Lavenham (died after 1399), Ralph Strode (flourished c. 1360), Richard Ferrybridge (or Feribrigge; flourished c. 1360s), and John Venator (also known as John Huntman or Hunter; flourished 1373).
Beginning in 1390, the Italian Paul of Venice studied for at least three years at Oxford and then returned to teach at Padua and elsewhere in Italy. Although English logic was studied in Italy even before Paul’s return, his own writings advanced this study greatly. Among Paul’s logical works were the very popular Logica parva (“Little Logic”), printed in several early editions, and possibly the huge Logica magna (“Big Logic”) that has sometimes been regarded as a kind of encyclopaedia of the whole of medieval logic.
After about 1400, serious logical study was dead in England. However, it continued to be pursued on the Continent until the end of the Middle Ages and afterward. |
What is a solar inverter?
A solar inverter has the important task of converting the direct current generated in the solar modules into alternating current and making it usable for the public power grid. It is therefore an indispensable part of the photovoltaic system: the operation of electronic devices with solar energy is only possible by means of current conversion.
Inverters have an input side for direct current (DC), which has one or more DC controllers with an MPP tracker controlled by a microprocessor. In the next stage, the energy is converted into alternating current (AC), which is subsequently passed on to the output side. From there, it is fed into the power grid. Inverters are available in various versions. Nowadays, there are models with or without a transformer; modern inverters often have no transformer and thus achieve a higher efficiency. In addition, a distinction is made between three different inverter types for photovoltaic plants. These are briefly described and explained in more detail below.
Solar inverters: models and versions
Inverters for photovoltaic plants must meet a number of requirements in order for the system to be economical in the long term. Modern models adapt quickly and flexibly to the amount of solar energy generated, e.g. during variations in cloud cover and weather changes. The solar inverter should achieve its highest efficiency at both high and low input voltages. Depending on the size and location of the photovoltaic system, the respective inverters therefore have different characteristics. The inverter should always be matched to the photovoltaic system in a manner appropriate to the needs of the customer.
Module inverters are connected directly to the respective solar module, so each solar module has its own inverter. They are used mainly for small solar systems or for solar systems with different orientations of the solar modules.
Several solar modules are interconnected in series and form a strand (or string). A string inverter thus connects a whole series of connected photovoltaic modules to the public power grid. This form is now widely used as it offers a wide range of applications and a good price / performance ratio. String inverters are suitable for small household systems as well as for large free-standing systems.
As a rule, a central inverter is only used for large and very large photovoltaic plants. A large number of solar modules can be connected to it. Central inverters have a high efficiency and are therefore particularly efficient. The disadvantage: if a fault occurs, the operation of all solar modules is affected and large losses of output occur.
Solar inverters and MPP tracking
If you are looking for a powerful solar inverter, do not overlook MPP tracking. MPP stands for "Maximum Power Point" and denotes the operating point at which a solar module delivers its highest power. This depends on the solar radiation, the temperature and the individual properties of the solar module. MPP tracking therefore means continually measuring the output of the photovoltaic module and adapting the operating point to the respective circumstances. The MPP tracker ensures that the maximum amount of energy is always produced. It is controlled by a microcontroller, in which a certain setpoint is pre-determined. Due to the importance of MPP tracking for photovoltaic systems, modern solar inverters often have more than one MPP tracker.
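To make the idea more concrete, here is a minimal sketch of the widely used perturb-and-observe hill-climbing approach to MPP tracking. It is an illustration only, not the algorithm of any particular inverter, and the panel_power() curve is an invented stand-in for real current/voltage sensor readings.

```python
def panel_power(v):
    """Toy stand-in for a measured P-V curve: a concave curve peaking near 32 V.
    In a real inverter this value would come from current/voltage sensors."""
    return max(0.0, 200.0 - 0.05 * (v - 32.0) ** 2)

def perturb_and_observe(v=30.0, step=0.5, iterations=200):
    """Minimal perturb-and-observe sketch: keep nudging the operating voltage
    in whichever direction increases the measured power."""
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:
            direction = -direction  # stepped past the maximum, so reverse
        p_prev = p
    return v

print(round(perturb_and_observe(), 1))  # settles around the 32 V maximum
```

Real controllers add refinements (variable step sizes, filtering against cloud transients), but the core loop is this simple compare-and-nudge cycle.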
The solar inverter and its efficiency
The efficiency of the solar inverter is of central importance for a photovoltaic system. It indicates how much of the generated energy is actually converted and is decisive for the yield of the entire plant and its associated profitability. The efficiency depends on many factors, e.g. the solar radiation, the location of the PV system and the system configuration. The solar inverter therefore does not always deliver its full performance, and the same device can have a different efficiency at different locations. In order to make devices comparable, the "European Efficiency" was introduced some years ago. This represents a weighted average of the efficiencies at different partial loads (5, 10, 20, 30, 50 and 100 per cent of the maximum power). The weighting takes into account average European temperature and weather fluctuations.
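As a sketch of how such a weighted average is formed, the snippet below combines partial-load efficiencies into a single figure. The weights shown are the commonly cited European Efficiency weighting factors, and the measured efficiencies are invented example values rather than figures from this article.

```python
# Commonly cited European Efficiency weights for the partial loads
# 5%, 10%, 20%, 30%, 50% and 100% of rated power (assumed, for illustration).
EU_WEIGHTS = {5: 0.03, 10: 0.06, 20: 0.13, 30: 0.10, 50: 0.48, 100: 0.20}

def european_efficiency(partial_efficiencies):
    """Weighted average of inverter efficiencies measured at partial loads."""
    return sum(EU_WEIGHTS[load] * eta for load, eta in partial_efficiencies.items())

# Hypothetical measured efficiencies at each partial load:
measured = {5: 0.90, 10: 0.94, 20: 0.96, 30: 0.965, 50: 0.97, 100: 0.965}
print(f"European Efficiency: {european_efficiency(measured):.3f}")
```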
Where should the solar inverter be connected?
Several factors play a role in choosing the correct placement of the solar inverter. In principle, it should be protected from wind and weather, which is why it is usually installed in the interior of the home, e.g. in the basement or the garage. Even then, there are further considerations: among other things, the ambient temperature is crucial for the installation of the solar inverter. The conversion from direct current to alternating current causes losses, which are given off to the surroundings as heat. In warm rooms, this heat cannot be dissipated properly through the air, and the high temperatures shorten the life of the device and its electrical components. Close proximity to the ceiling or to other inverters can also limit heat dissipation, so fixed minimum distances should be observed to ensure safe operation of the equipment. Another factor that should be taken into account during assembly is the noise generated during conversion. Although modern solar inverters usually operate very quietly, there may be occasional buzzing or clicking noises at high power. For this reason it is advisable not to install the unit in the immediate vicinity of the living rooms. Last but not least, the distance to the feed-in meter is of particular importance in the case of string inverters. In principle, installation of an inverter should only be carried out by people with professional knowledge.
In your inclusive classroom, you probably have some students who are struggling with reading skills—and having trouble generalizing the skills they learn to settings outside the classroom. Whether they’re English language learners, students with identified learning disabilities, or learners who just need extra help, the students in your class will need instruction that addresses their specific needs and supports their long-term success.
In their book Effective Instruction for Middle School Students with Reading Difficulties, Carolyn Denton et al. outline eight key lesson components that make a difference for struggling readers. Today’s post presents and defines these eight essentials. See which ones you’re already doing, and which steps you might need to strengthen.
State your objective (and stick to it). What do you want your students to learn? State the goal up front and then teach the class with that goal in mind. Give your students a step-by-step presentation of the new information you want them to learn. Teach only a few new ideas at once to ensure that struggling readers don’t fall behind, and connect the new material with your students’ prior knowledge.
Start with a daily review. Each day, quickly review the material you covered the day before. Not only will this give you a chance to see if all students have mastered the material, it will also give students a chance to “overlearn”—that is, learn to the point of automaticity, to ensure that they retain the material long-term. When you have your daily review, use visuals and teach explicitly so there’s no doubt about what you want students to recall from the previous lesson.
Explicitly model and teach. Struggling learners—and all students, really—learn better when you show them what you want them to do. When you teach a new skill or strategy, model or demonstrate it for them clearly. You can model a strategy effectively through a “think-aloud” process: Demonstrate each step of the strategy for your class (visuals are especially important for struggling readers!) while talking through your own thought processes out loud. Make sure your English language learners and students with limited oral vocabularies are closely watching you as you model, not trying to write at the same time. [Note: The sample lesson plans in Effective Instruction for Middle School Students with Reading Difficulties give you many examples of think-aloud modeling.]
Give guided practice. Your students should have many opportunities to demonstrate what they learn (with guidance from you). Walk around and give guidance while students work on assignments—don’t wait until they finish their work to check for accuracy. Giving helpful hints and clearing up misconceptions during classwork can help prevent mistakes from becoming bad habits that students will struggle with for years to come.
Before you move on to independent practice, make sure that all students have ample time—and scaffolding supports if needed—to understand new concepts. Some students might need multiple opportunities to practice with guidance from you before they can apply a new skill independently.
Set aside time for independent practice. Are your students consistently applying a skill or strategy correctly during guided practice? That means they’re ready to apply their knowledge independently. Giving students time for independent practice will reinforce the concepts you teach, help students learn new information on their own, and develop students’ automaticity, or mastery of a skill.
Teach for generalization. Once a new skill becomes a habit through independent practice, it will be easier for students to generalize their new knowledge to other contexts or settings. It’s important to note that struggling readers usually have trouble generalizing automatically. To support their generalization:
- Tell students explicitly that they should be applying their new reading skills and strategies outside the classroom. Ask questions like “Can you think of a time you might use this strategy outside of this class?”
- Plan instruction so your students have plenty of time to practice applying their new skills to a variety of texts.
- Include texts similar to the ones your students are reading in their language arts, math, social studies, and science classes.
- Lead class discussions that encourage students to verbalize ways they can generalize the strategies they’ve learned.
Monitor student learning. Are your students making progress toward the goals you stated up front? Keep track of their learning by using assessments to gather information regularly. The data you collect should be directly connected to each student’s specific instructional focus. If a student’s diagnostic assessments indicate that he needs to work on fluency or word recognition, then you should monitor that student’s growth through repeated assessments of oral reading fluency or word list reading. Establish a routine of regular progress monitoring, and use the data you gather to figure out when you need to reteach concepts or adapt your instruction.
Conduct periodic review. Quick daily reviews of what you taught the day before are important, but to ensure long-term retention of material, plan weekly and monthly cumulative reviews of key strategies and skills. Help students connect what they’ve learned in previous lessons to the information they’re learning in each new unit of study.
To learn more about each component covered in today’s post—and get more than 20 step-by-step sample lessons for strengthening fluency, comprehension, word recognition, and vocabulary—see Effective Instruction for Middle School Students with Reading Difficulties.
*This post was adapted from Chapter 5 of Effective Instruction for Middle School Students with Reading Difficulties, by Carolyn A. Denton, Ph.D., Sharon Vaughn, Ph.D., Jade Wexler, Ph.D., Deanna Bryan, & Deborah Reed, Ph.D. The chapter incorporates the research of:
Mastropieri, M.A., & Scruggs, T.E. (2002). Effective instruction for special education. Austin, TX: ProEd.
Swanson, H.L., & Deshler, D. (2003). Instructing adolescents with learning disabilities: Converting a meta-analysis to practice. Journal of Learning Disabilities, 36, 124–135.
The House That Jane Built: A Story about Jane Addams by Tanya Lee Stone, illustrated by Kathryn Brown
Jane Addams was a girl born into comfort and wealth, but even as a child she noticed that not everyone lived like that. In a time when most women were not educated, Addams went to seminary. When traveling with her friends in Europe she saw real poverty and then also saw a unique solution in London that she brought home with her. In Chicago, she started one of the first settlement houses in America, a huge house that worked to help the poor right in the most destitute part of town. Hull House helped the poor find jobs and offered them resources. Addams also created a public bath, which helped convince the city that more public baths were needed. She also found a way to have children play safely by creating one of the first public playgrounds. Children were often home alone as their parents worked long hours, so she created before and after school programs for them to attend and even had evening classes for older students who had to work during the day. By the 1920s, Hull House was serving 9,000 people a week! It had grown to several buildings and was the precursor to community centers.
Jane Addams was a remarkable woman. While this picture book biography looks specifically at Hull House, she also was active in the peace movement and labeled by the FBI as “the most dangerous woman in America.” In 1931, she became the first American woman to win the Nobel Peace Prize. She wrote hundreds of articles and eleven books, she worked for women’s suffrage, and was a founding member of both the ACLU and the NAACP. At the turn of the century she was one of the most famous women in the world. The beauty of her story is that she saw a need and met it with her own tenacity and resources. She asked others to contribute, but did not step back and just fund the efforts, instead keeping on working and living right in that part of Chicago. Her story is a message of hope and a tale of a life well lived in service to others.
Brown’s illustrations depict the neighborhood around Hull House in all of its gritty color. Laundry flies in the breeze, litter fills the alleys, and children are in patched clothes and often barefoot. Through both the illustrations and the text, readers will see the kindness of Jane Addams shining on the page. Her gentleness shows as does her determination to make a difference.
This biography is a glimpse of an incredible woman whose legacy lives on in the United States and will serve as inspiration for those children looking to make a difference in the world around them. Appropriate for ages 6-9.
Reviewed from copy received from Henry Holt and Co. |
March 3, 2012
BOSTON—At long last, one of the hairiest problems in modern physics has been solved. Researchers have devised a theoretical model to describe the shape of a ponytail.
A ponytail may look like a relatively simple object, but in truth it is a bundle of physical complexity. Multiple forces are in play. Each hair is elastic, with a random intrinsic curvature. And the average head of hair has 50,000 to 100,000 individual strands, according to Raymond Goldstein, a professor in the University of Cambridge’s department of applied mathematics and theoretical physics.
Goldstein presented his ponytail research here at this week’s meeting of the American Physical Society, and he and his colleagues at Unilever and the University of Warwick in the U.K. published a paper on their findings February 13 in Physical Review Letters.
Some of the major forces conspiring to shape a ponytail are elasticity, gravity, tension and pressure. That last property, Goldstein and his colleagues found, comes from the curvatures of individual fibers, from which the physicists derived a so-called equation of state for hair. A similar concept was applied more than 65 years ago in studies of the compressibility of wool.
In the numerical model that Goldstein and his colleagues devised, the push-and-pull of physical forces changes at various points along the ponytail. Near the base, swelling pressure and elasticity dominate. Beyond a few centimeters, the shape depends primarily on the pressure and the weight of the ponytail.
Beyond unveiling a theory of the ponytail, Goldstein and his colleagues also added a new term to the physics lexicon. They describe ponytail size by the “Rapunzel number,” a unit equal to the total length of the ponytail in centimeters divided by five. Five centimeters, Goldstein said, is about the length scale below which gravity does not bend the hairs much.
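As a trivial worked example of that definition (my own illustration, not from the paper):

```python
def rapunzel_number(length_cm):
    """Dimensionless Rapunzel number: ponytail length divided by the ~5 cm
    scale below which gravity barely bends individual hairs."""
    return length_cm / 5.0

print(rapunzel_number(25))  # a 25 cm ponytail has a Rapunzel number of 5
```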
To test their model, the researchers predicted ponytail shapes for various hair lengths and compared them to the real thing. They started with 25-centimeter ponytails and then cut off five centimeters at a time to measure how shape changes with length. “We take real ponytails and we trim them back,” Goldstein said. “We reduce the Rapunzel number.”
So how did the grand unified ponytail theory fare in its tonsorial test? “The answer is, we do a pretty good job,” Goldstein reported.
There’s no debate against the fact that wind energy provides one of the cleanest sources of power for both businesses and individuals. Wind turbines are often installed in coastal areas, in mountain gaps, or in areas where there’s a steady, consistent flow of wind. They can be built with either a vertical or a horizontal axis; most wind energy projects follow the horizontal-axis model. Although wind energy is one of the most advantageous sources of energy available, it also has its fair share of demerits. This article covers some of them.
- Noise Disturbances – one of the most outstanding qualities of wind energy is that it doesn’t pollute the environment (including both air and water). However, it still generates a lot of noise. For this reason alone, wind turbines should ideally not be installed near schools or residential premises. People who live near wind energy projects often complain about the noise. The visual pollution that this form of energy creates is another key reason why most people are hesitant about installing turbines in their backyard.
- Threat to Wildlife – large wind energy projects pose a threat to wildlife in remote locations. Most obviously, wind turbines themselves can be harmful to flying creatures such as birds. Studies also suggest that wild animals perceive wind turbines as a threat to their life. More so, since wind turbines need deep ground holes for installation, this could have an adverse effect on underground habitats.
- Unpredictability – wind is a renewable form of energy that cannot be depleted. Still, it has a major disadvantage since it can’t be predicted reliably. This is why companies invest a lot of money and time studying different areas to ascertain their suitability for wind energy projects. Extreme weather phenomena, such as tornadoes and deadly hurricanes, can harm wind turbine installations.
- Region Limitations – wind energy is mostly used in coastal areas where there’s strong enough wind to generate power. Countries that do not have such coastal areas (and other areas suitable for turbine installation for that matter) have little chance of taking advantage of this environmentally friendly form of energy.
- Safety Concerns – over the last few decades, the frequency of extreme weather phenomena such as hurricanes, tornadoes, and cyclones has increased significantly. Severe storms that can cause extensive damage to wind turbines and endanger people working on wind farms are common. Turbines could suffer massive damage, and workers on the farms could suffer serious injury or permanent disability.
- Visual Impact – there’s a conflicting debate around this demerit. A section of people believe that wind turbines have an undesirable appearance. There are numerous court petitions where people are looking to block wind turbine projects from happening in their immediate neighborhoods. But modern wind turbines happen to be very sleek and attractive – a far cry from those old, rusty windmills that prove to be an eyesore in the land within which they are located. So the question of visual impact really depends on who you ask. There’re both sides of the coin to this one!
These disadvantages are, however, not to say that this form of energy is undesirable. Massive-scale wind energy projects are helping cut down fossil fuel use and reduce global warming emissions. It’s just that not all areas will work with wind turbines.
According to research, children achieve higher academic success in any subject when parents actively support learning. Math skills are no different.
In today’s world of accelerated learning and technological advancements, it is more important than ever before for adults to help by engaging in the effort to learn.
Our world has a high demand for strong math skills, both in everyday life and the workforce. These demands will continue to increase over the course of our kids’ lifetimes.
One of the best ways to improve math skills, help them love math, and aid in their learning, is to incorporate it into daily life. Math is everywhere, and it can be fun.
“Mathematics expresses values that reflect the cosmos, including orderliness, balance, harmony, logic, and abstract beauty….” – Deepak Chopra
Tracking Time helps build Math Skills
As adults, it seems that we are always struggling to find time to do all that we need to do, or even want to do. Kids often experience the same issue.
To help your child learn how to identify ways that they are spending their time while building math skills, incorporate time tracking activities into their daily routines.
Not only will this help them identify methods for increasing their productivity, it introduces them to statistics, data analysis, improves critical thinking skills, and aids in allowing them to present their findings in a logical, organized manner.
Remember to incorporate watches, stopwatches, blank paper, graph paper, rulers, and other measuring products to enhance time tracking activities.
Grocery Store Weigh-Ins
Grocery shopping is a wonderful means for kids to optimize their skills in measurement, adding, subtracting, and estimation.
To incorporate mathematics into your next grocery store trip, focus on weighing the fruits and vegetables in the store. While in the produce section, inform your child that when you pay for the products in the department, the price is commonly based on weight.
Show your child the grocery scale, the signs, and allow them to pick products and weigh them. Then, encourage them to determine how much they would spend, based on the weights. Younger children could learn about shapes, which weighs more, and units of measurement in this activity.
All electronic devices – computers, smartphones, tablets – have calculator functions that are heavily utilized by people of all ages.
Children must learn to use special functions on calculators and enhance their knowledge of the operations available through calculators.
Simply encourage them to use a calculator in games, in counting, and to identify answers to various mathematical questions, and then ask them to explain their answer once they obtain it.
Let Us Help
Is your child struggling with math? Do they not enjoy the subject? Let us help! We here at Miracle Math approach mathematics in a fun, positive way that encourages kids to want to learn. Simply contact our team today to learn how we can help your child for the rest of their tomorrows. |
Linear Algebra Toolbox 2
In the previous part I covered a bunch of basics. Now let's continue with stuff that's a bit more fun. Small disclaimer: In this series, I'll be mostly talking about finite-dimensional, real vector spaces, and even more specifically $\mathbb{R}^n$ for some n. So assume that's the setting unless explicitly stated otherwise; I don't want to bog the text down with too many technicalities.
(Almost) every product can be written as a matrix product
In general, most of the functions we call "products" share some common properties: they're examples of "bilinear maps", that is vector-valued functions of two vector-valued arguments which are linear in both of them. The latter means that if you hold either of the two arguments constant, the function behaves like a linear function of the other argument. Now we know that any linear function can be written as a matrix product $Mx$ for some matrix M, provided we're willing to choose a basis.
Okay, now take one such product-like operation between vector spaces, let's call it $\otimes$. What the above sentence means is that for any $a$, there is a corresponding matrix $M_a$ such that $a \otimes b = M_a b$ (and also a $N_b$ such that $a \otimes b = N_b a$, but let's ignore that for a minute). Furthermore, since a product is linear in both arguments, $M_a$ itself (respectively $N_b$) is a linear function of a (respectively b) too.
This is all fairly abstract. Let's give an example: the standard dot product. The dot product of two vectors a and b is the number $a \cdot b = a_1 b_1 + a_2 b_2 + a_3 b_3$. This should be well known. Now let's say we want to find the matrix $M_a$ for some a. First, we have to figure out the correct dimensions. The dot product is a scalar-valued function of two vectors; so for fixed a, the matrix that represents "a-dot" maps a 3-vector to a scalar (1-vector); in other words, it's a 1×3 matrix. In fact, as you can verify easily, the matrix representing "a-dot" is just "a" written as a row vector – or written as a matrix expression, $a^T$. For the full dot product expression, we thus get $a \cdot b = a^T b = b^T a$ (because the dot product is symmetric, we can swap the positions of the two arguments). This works for any dimension of the vectors involved, provided they match of course. More importantly, it works the other way round too – a 1-row matrix represents a scalar-valued linear function (more concisely called a "linear functional"), and in case of the finite-dimensional spaces we're dealing with, all such functions can be written as a dot product with a fixed vector.
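As a quick numerical sanity check – my own illustration, not part of the original text – this identity is easy to verify with NumPy:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

# "a-dot" as a 1x3 matrix: a written as a row vector.
M_a = a.reshape(1, 3)

print(np.dot(a, b))      # the ordinary dot product
print((M_a @ b).item())  # the same value, written as a matrix product
```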
The same technique works for any given bilinear map. Especially if you already know a form that works on coordinate vectors, in which case you can instantly write down the matrix (same as in part 1, just check what happens to your basis vectors). To give a second example, take the cross product in three dimensions. The corresponding matrix looks like this:

$$[a]_\times = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}, \qquad a \times b = [a]_\times \, b$$

The $[a]_\times$ is standard notation for this construction. Note that in this case, because the cross product is vector-valued, we have a full 3×3 matrix – and not just any matrix: it's a skew-symmetric matrix, i.e. $M^T = -M$. I might come back to those later.
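Again, a quick numerical check of my own (assuming NumPy): build the matrix and compare it against a library cross product.

```python
import numpy as np

def cross_matrix(a):
    """Skew-symmetric matrix [a]x such that cross_matrix(a) @ b == np.cross(a, b)."""
    return np.array([
        [0.0,   -a[2],  a[1]],
        [a[2],   0.0,  -a[0]],
        [-a[1],  a[0],  0.0],
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

print(np.cross(a, b))
print(cross_matrix(a) @ b)                                # same result
print(np.allclose(cross_matrix(a).T, -cross_matrix(a)))   # skew-symmetric: True
```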
So what we have now is a systematic way to write any “product-like” function of a and b as a matrix product (with a matrix depending on one of the two arguments). This might seem like a needless complication, but there’s a purpose to it: being able to write everything in a common notation (namely, as a matrix expression) has two advantages: first, it allows us to manipulate fairly complex expressions using uniform rules (namely, the rules for matrix multiplication), and second, it allows us to go the other way – take a complicated-looked matrix expression and break it down into components that have obvious geometric meaning. And that turns out to be a fairly powerful tool.
Projections and reflections
Let's take a simple example: assume you have a unit vector $v$, and a second, arbitrary vector $x$. Then, as you hopefully know, the dot product $v \cdot x$ is a scalar representing the length of the projection of x onto v. Take that scalar and multiply it by v again, and you get a vector that represents the component of x that is parallel to v:

$$x_\parallel = (v \cdot x) \, v = v \, (v^T x) = (v v^T) \, x$$
See what happened there? Since it's all just matrix multiplication, which is associative (we can place parentheses however we want), we can instantly get the matrix $v v^T$ that represents parallel projection onto v. Similarly, we can get the matrix for the corresponding orthogonal component:

$$x_\perp = x - x_\parallel = I x - (v v^T) \, x = (I - v v^T) \, x$$
All it takes is the standard algebra trick of multiplying by 1 (or in this case, an identity matrix); after that, we just use linearity of matrix multiplication. You're probably more used to exploiting it when working with vectors (stuff like $M(x + y) = Mx + My$), but it works in both directions and with arbitrary matrices: $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$ – matrix multiplication is another bilinear map.
Anyway, with the two examples above, we get a third one for free: We've just separated $x$ into two components, $x = x_\parallel + x_\perp$. If we keep the orthogonal part but flip the parallel component, we get a reflection about the plane through the origin with normal $v$. This is just $x_\perp - x_\parallel$, which is again linear in x, and we can get the matrix for the whole by subtracting the two other matrices:

$$x_\perp - x_\parallel = (I - v v^T) \, x - (v v^T) \, x = (I - 2 v v^T) \, x$$
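Here is a small NumPy sketch of my own that verifies these three matrices behave as described (it assumes v has unit length):

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])       # unit normal (assumed normalized)
x = np.array([1.0, 2.0, 3.0])
I = np.eye(3)

P_par  = np.outer(v, v)              # v v^T : projection onto v
P_perp = I - P_par                   # component orthogonal to v
R      = I - 2.0 * np.outer(v, v)    # reflection about the plane with normal v

print(P_par @ x)    # [0. 0. 3.]  -> part of x parallel to v
print(P_perp @ x)   # [1. 2. 0.]  -> part of x in the plane
print(R @ x)        # [1. 2. -3.] -> x reflected about the plane
```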
None of this is particularly fancy (and most of it you should know already), so why am I going through this? Two reasons. First off, it’s worth knowing, since all three special types of matrices tend to show up in a lot of different places. And second, they give good examples for transforms that are constructed by adding something to (or subtracting from) the identity map; these tend to show up in all kinds of places. In the general case, it’s hard to mentally visualize what the sum (or difference) of two transforms does, but orthogonal complements and reflections come with a nice geometric interpretation.
I’ll end this part here. See you next time! |
Humans use lasers for everything from scanning barcodes and putting on light shows to performing delicate eye surgery and measuring the distances between objects in space.
Cats also like to chase lasers, but I wasn’t sure how they worked. I asked my friend Chris Keane, a physics professor at Washington State University. Keane came to WSU from the National Ignition Facility at Lawrence Livermore National Laboratory where he helped work on a laser as big as a football stadium.
The light we see
First, we have to know a bit about light. Whether it’s light from our sun or your flashlight, light travels in tiny bundles called photons. It normally radiates out in all directions from its source, like the Sun, for example.
It turns out we can also find light energy stored in the atoms, or building blocks, that make up materials inside a laser pointer. There are different materials we can use in lasers, but some popular ones are gases like neon and helium. You may have seen neon atoms at work in a bright, glowing sign. You may also have filled up a balloon with helium atoms to make it float.
Atoms like these are sometimes really excited and other times they are at rest, or at their ground state. One way we can make some of these atoms really excited is to give them a source of energy, something like a really strong flash of light or a jolt of electricity from the battery in a laser pointer.
Keane explained that under just the right conditions, you can get more excited atoms than resting atoms inside the tube of your laser pointer. When scientists were experimenting with different kinds of laser materials, they made excited helium atoms collide with resting neon atoms.
Atoms will normally emit photons when they transition from a particular excited state to a resting state. When there are more excited atoms than resting atoms, the first atom to emit light will trigger a kind of chain reaction and a lot of light will build up inside the pointer.
A chain reaction
There are also two mirrors in a laser pointer that help keep our chain reaction going. It’s a different process, but in a way it reminds me of how we plug a guitar into an amplifier to increase its volume. But with lasers, instead of amplifying sound, we amplify light. LASER actually stands for Light Amplification by Stimulated Emission of Radiation.
The opening on one end of the laser is the light’s way out. It doesn’t radiate in all directions, but builds up in one very straight, focused beam that we usually see as a bright red dot.
Lighting up our universe
We don’t find lasers in nature. We have to make them in factories or labs. But there are naturally occurring “light amplifiers” in our universe. These are similar to our lasers, except they don’t have any mirrors. We usually find them out in big clouds of gas where there are more excited atoms than resting atoms, which results in some brilliant light. |
Climb up Word Ladders to build reading, spelling, vocabulary, and phonics skills! Designed to take approximately 10 minutes, students begin with one word and then make a series of other words by changing or rearranging the letters in the word before; clues are provided along the side to help kids know what words to write. 112 reproducible pages, softcover. Answer key included. Grades 4-6. |
Who are the Reef Fishers?
Traditionally, Indo-Pacific coral reefs were fished by nearby coastal communities who often owned them under customary laws and practices. Customary tenure arrangements vary enormously from place to place, with ownership of certain areas or species groups vested in different clans, families, tribes or other social units. By and large, however, coastal marine resources were not considered to be common property until after European contact. In many locations, especially remote ones, customary tenure is still very strong. Close to market outlets and urban centres, however, customary tenure of marine resources is increasingly ignored in the face of commercial pressures and opportunities.
Customary tenure was often accompanied by strict controls on which members of society could use which fishing methods, and where. Certain techniques were restricted to resource-owning clans or tribes, while, for example, women were commonly prohibited from using nets or boats. Traditional fishing techniques typically included locally-made nets, traps, fish fences and corrals, fibre lines carrying one or more baited hooks, spears and arrows, and traditional poisons, operated from the shore, on foot, or from paddling or sailing canoes. These gears have now been supplemented or replaced by technologically more advanced gears, including synthetic nets and lines, mechanised fishing vessels, trawls and dredges, modern poisons, and explosives. Some of these fishing methods are destructive and cause damage to the reef environment, and to non-target organisms.
Fishers today are largely the same as those in the past – coastal communities, including men, women and children, harvesting resources for food and as a source of cash income. However, with the progressive demise of customary marine tenure arrangements, and increasing commercial pressures on coral reef fisheries, greater numbers of ‘outsiders’ – people with no traditional connection to the resources in question – are increasingly involved in harvesting them. In many situations this leads to conflicts with the traditional resource users, who still consider themselves the owners, even though this may not be articulated through modern laws. Competition for resources also leads to the ‘tragedy of the commons’, in which open access results in over-exploitation because users have no vested interest in conserving the resource for the long term.
Source: Gary Preston (written for ReefBase) |
What is Diabetes?
Diabetes is a disease where your blood sugar levels are above normal because your cells are unable to absorb glucose, so the sugar stays in your blood. Many people do not even know they have diabetes. This disease causes serious complications to your health, including heart disease, blindness, kidney failure, and lower extremity amputations. Diabetes is the 6th leading cause of death in the US, and a majority of patients develop heart disease.
There is always an underlying reason why your body cannot use glucose for energy, which causes the glucose level in your blood to rise above normal. For glucose to be used properly, three things must work. First, the cells in your body that use glucose must be able to absorb it from your blood effectively and use it for energy. Second, the insulin made by your pancreas must be available, because insulin acts as the vessel that lets the sugar enter the cell. Third, glucose itself must be supplied, either from the breakdown of food or from a storage form in the muscles and liver called glycogen.
In one form of diabetes, the body stops making insulin altogether, so your cells cannot get glucose from your blood. Sometimes the body does not make as much insulin as it needs. Other times the cells simply won’t open up: even if you have enough insulin, the cells do not respond to it, so they cannot receive glucose for energy. This is called insulin resistance.
Type of Diabetes
Type I diabetes is usually diagnosed in children and young adults. In type 1 diabetes, the pancreas doesn’t make any insulin at all.
Type II diabetes is the most common form of the disease. It accounts for 90-95% of all the cases of diabetes. In type 2 diabetes, either your body doesn’t make enough insulin or the cells in your body ignore the insulin so they can’t utilize glucose like they are supposed to. When your cells ignore the insulin, it is often referred to as insulin resistance.
Other types of diabetes which only account for a small number of the cases of diabetes include gestational diabetes, which is a type of diabetes that only pregnant women get. If not treated, it can cause problems for mothers and babies and usually disappears when the pregnancy is over. Other types of diabetes resulting from specific genetic syndromes, surgery, drugs, malnutrition, infections, and other illnesses may account for 1% to 2% of all cases of diabetes.
How do you get diabetes?
Risk factors for Diabetes II:
- older age
- family history
- prior history of gestational diabetes
- impaired glucose tolerance
- physical inactivity
Risk factors for Diabetes I:
- environmental factors
People who think they might have diabetes should visit a physician for a diagnosis. Common symptoms include:
- frequent urination
- excessive thirst
- unexplained weight loss
- extreme hunger
- sudden vision changes
- tingling or numbness in hands or feet
- feeling very tired much of the time
- very dry skin
- sores that are slow to heal
- more infections than usual
Nausea, vomiting, or stomach pains may accompany some of these symptoms in the abrupt onset of type 1 diabetes.
Glucose is sugar, so do I just have to avoid sweets?
Most foods, and all carbohydrates, are broken down into their simplest form, which is glucose. Digestion begins as soon as food enters the stomach: proteins are broken down into amino acids and carbohydrates into glucose. The blood then picks up the glucose and carries it to your cells for energy. In healthy people, the glucose absorbed from the GI tract signals the pancreas to make and release insulin. Remember, in type 2 diabetes your body either doesn’t make enough insulin or some of your cells ignore the insulin that is there. In both situations, your cells don’t get the glucose they need for energy; they are starving while all the extra glucose just floats around in your blood, unused. Worse, all that extra glucose circulating in your blood damages your blood vessels and organs, and that damage increases your risk of heart disease. That is why it is very important to keep your blood glucose levels as close to normal as possible. When glucose levels get very high, glucose starts to leak out into your urine.
Diabetes I: Healthy eating, physical activity, and insulin injections are the basic therapies. The amount of insulin taken must be balanced with food intake and daily activities. Blood glucose levels must be closely monitored through frequent blood glucose testing.
Diabetes II: Healthy eating, physical activity, and blood glucose testing are the basic therapies. Many people with type 2 diabetes require oral medication, insulin, or both to control their blood glucose levels. Some of the oral medications work by stimulating your pancreas to make more insulin. Other oral medicines work to make your cells open up again.
How to keep blood sugar level under control?
Frequent blood tests are used to monitor your blood sugar. Most patients with diabetes should have a home blood glucose monitoring kit. Some doctors ask their patients to check their blood sugar as frequently as six times a day, though this is an extreme. The more information you have about your blood sugar levels, the easier it is to control them. People with diabetes must take responsibility for their day-to-day care and keep blood glucose levels from going too low or too high.
When your blood sugar is too high, your doctor refers to it as hyperglycemia. You may not experience any symptoms, but the high levels of glucose in your blood are causing damage to your blood vessels and organs. That is why it is important to have your body use the sugar properly and get it out of your bloodstream.
When your blood sugar is too low, your doctor refers to it as hypoglycemia. Having low blood sugar can be very dangerous and patients taking medication for diabetes should watch for symptoms of low blood sugar.
It is important that you monitor your blood sugar regularly to avoid both low as well as high blood sugar.
Some patients may not follow the proper diet and exercise except for the days leading up to a blood test in the doctor’s office. They want to look like they are doing a good job controlling their blood sugar, so that their fasting blood glucose test results look good to the doctor. But there is a test that shows your doctor the real picture over the past three months or so. It is called the hemoglobin A1C (HbA1C) test. Hemoglobin is the part of your blood, or red cells, that carries oxygen to your cells. Glucose sticks to the hemoglobin in your red blood cells as they emerge from the bone marrow where they are made.
The amount of sugar on the red cell is proportionate to the blood sugar level at the moment the red cell goes into circulation, and remains at that level for the life of the red cell. So if there has been a lot of extra glucose in your blood, there will be a lot of glucose stuck all over your hemoglobin. Since the average lifespan of the hemoglobin in your blood is 90-100 days, a HbA1C test shows a doctor how well you have been controlling your blood sugar over the last 3 months. This test is a check on the overall sugar control, not just the fasting blood sugar. So it is important to control your blood sugar at all times, and not just before visiting the doctor. The most important reason to control your blood sugar is so that you can live a longer, healthier life without complications that can be caused by not controlling your diabetes.
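A commonly cited rule of thumb, not stated in this article and therefore best treated as an outside assumption, converts an HbA1C percentage into an estimated average glucose level. A minimal sketch:

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimate average blood glucose (mg/dl) from an HbA1C percentage.

    Uses the widely cited linear approximation eAG = 28.7 * A1C - 46.7;
    the coefficients are an outside assumption, not taken from this article.
    """
    return 28.7 * hba1c_percent - 46.7

# Example: an HbA1C of 7% corresponds to roughly 154 mg/dl average glucose.
print(round(estimated_average_glucose(7.0)))
```

If a conversion like this is used at all, it should be read the same way the article reads the HbA1C itself: as a rough summary of control over roughly the last three months, not a substitute for individual readings.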
The complications of diabetes can be devastating. Both forms of diabetes ultimately lead to high blood sugar levels, a condition called hyperglycemia. The damage that hyperglycemia causes to your body is extensive and includes:
- Damage to the retina from diabetes (diabetic retinopathy) is a leading cause of blindness.
- High blood pressure and high cholesterol and triglyceride levels. These independently and together with hyperglycemia increase the risk of heart disease, kidney disease, and other blood vessel complications.
- Damage to the nerves in the autonomic nervous system can lead to paralysis of the stomach (gastroparesis), chronic diarrhea, and an inability to control heart rate and blood pressure with posture changes.
- Damage to the kidneys from diabetes (diabetic nephropathy) is a leading cause of kidney failure.
- Damage to the nerves from diabetes (diabetic neuropathy) is a leading cause of lack of normal sensation in the foot, which can lead to wounds and ulcers, and all too frequently to foot and leg amputations.
- Diabetes accelerates atherosclerosis or “hardening of the arteries”, and the formation of fatty plaques inside the arteries, which can lead to blockages or a clot (thrombus), which can then lead to heart attack, stroke, and decreased circulation in the arms and legs (peripheral vascular disease).
Hypoglycemia, or low blood sugar, occurs from time to time in most people with diabetes. It results from taking too much diabetes medication or insulin, missing a meal, doing more exercise than usual, drinking too much alcohol, or taking certain medications for other conditions. It is very important to recognize hypoglycemia and be prepared to treat it at all times. Warning signs include:
- poor concentration
- tremors of hands
Diabetic ketoacidosis is a serious condition in which uncontrolled hyperglycemia (usually due to complete lack of insulin or a relative deficiency of insulin) over time creates a buildup in the blood of acidic waste products called ketones. High levels of ketones can be very harmful. This typically happens to people with type 1 diabetes who do not have good blood glucose control. Diabetic ketoacidosis can be precipitated by infection, stress, trauma, missing medications like insulin, or medical emergencies like stroke and heart attack.
Hyperosmolar hyperglycemic nonketotic syndrome is a serious condition in which the blood sugar level gets very high. The body tries to get rid of the excess blood sugar by eliminating it in the urine. This increases the amount of urine significantly and often leads to dehydration so severe that it can cause seizures, coma, even death. This syndrome typically occurs in people with type 2 diabetes who are not controlling their blood sugar levels or have become dehydrated or have stress, injury, stroke, or medications like steroids.
Pre-diabetes is a common condition related to diabetes. In people with pre-diabetes, the blood sugar level is higher than normal but not high enough to be considered diabetes. Pre-diabetes increases your risk of getting type 2 diabetes and of having heart disease or a stroke. Pre-diabetes can be reversed without insulin or medication by losing a modest amount of weight and increasing your physical activity. This can prevent, or at least delay, onset of type 2 diabetes. When associated with certain other abnormalities, it is also called the metabolic syndrome.
- Fasting blood glucose test. This test is performed after you have fasted (no food or liquids other than water) for eight hours. A normal fasting blood glucose level is less than 100 mg/dl. A diagnosis of diabetes is made if your blood glucose reading is 126 mg/dl or higher. (In 1997, the American Diabetes Association lowered the level at which diabetes is diagnosed to 126 mg/dl from 140 mg/dl.)
- “Random” blood glucose test. A normal blood glucose range is in the low to mid 100s. A diagnosis of diabetes is made if your blood glucose reading is 200 mg/dl or higher and you have symptoms of disease such as fatigue, excessive urination, excessive thirst or unplanned weight loss.
- Oral glucose tolerance test. For this test, you will be asked, after fasting overnight, to drink a sugar-water solution. Your blood glucose levels will then be tested over several hours. In a person without diabetes, glucose levels rise and then fall quickly after drinking the solution. In a person with diabetes, blood glucose levels rise higher than normal and do not fall as quickly.
A normal blood glucose reading two hours after drinking the solution is less than 140 mg/dl, and all readings between the start of the test and two hours after the start are less than 200 mg/dl. Diabetes is diagnosed if your blood glucose levels are 200 mg/dl or higher. (These cut-offs are summarized in the sketch below.) |
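The diagnostic cut-offs quoted above can be collected into a small helper. This is only an illustrative sketch of the numbers in this article, not a diagnostic tool; the function names and label wording are invented for the example.

```python
def classify_fasting_glucose(mg_dl: float) -> str:
    """Classify a fasting blood glucose reading using the cut-offs quoted above."""
    if mg_dl >= 126:
        return "diabetes range"
    if mg_dl >= 100:
        return "above normal (pre-diabetes range)"
    return "normal"


def classify_ogtt_two_hour(mg_dl: float) -> str:
    """Classify a two-hour oral glucose tolerance test reading."""
    if mg_dl >= 200:
        return "diabetes range"
    if mg_dl >= 140:
        return "above normal"
    return "normal"


print(classify_fasting_glucose(96))    # normal
print(classify_ogtt_two_hour(210))     # diabetes range
```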
Magnesium is an important mineral, and its presence in every cell of living beings is vital for many biological processes. The chemical symbol of magnesium is Mg and it is one of the most abundant elements in the human body. Generally, it is found in the form of ions in living cells, but its concentration can vary.
The amount of magnesium in an adult human body is about 24 grams. Almost 50% of it is found in the bones, most of the rest is found in the body’s cells, and only a small amount, approximately 1%, is found in the blood.
Causes of Magnesium Deficiency
- Magnesium is a crucial mineral nutrient for carrying out hundreds of biochemical activities inside the human body. In the human body, magnesium is mainly absorbed in the gastrointestinal tract and is carried to the cells by blood circulation. Therefore, a healthy intestine can ensure adequate absorption of magnesium. Hence, any kind of disorder in the intestine and digestive system can cause this deficiency.
- Another reason for this condition may be its poor intake. Many people are not aware of the importance of a balanced diet, and hence, may not include magnesium-rich foods in their diet. Some studies conducted in the US reveal that a large number of Americans do not include adequate amounts of magnesium in their daily diet.
- Excretion of large amounts of magnesium through urine can also be a reason behind its deficiency. If the kidneys are healthy, they can prevent excessive excretion of magnesium. But, sometimes diabetes and overconsumption of alcohol can also result in excessive loss of magnesium through urine.
- Certain medications can also cause this problem. Medicines used in the treatment of cancer, as well as some antibiotics and diuretics, can cause this deficiency in some individuals. Examples of such medicines are Bumex, Edecrin, and Lasix (diuretics) and Gentamicin (an antibiotic). The deficiency can also occur due to parathyroid diseases, and it can be caused by low levels of hydrochloric acid, which reduce the levels of useful intestinal bacteria that facilitate the absorption of magnesium.
Diseases Caused by Magnesium Deficiency
- Lack of magnesium in the human body can cause many diseases. Since it helps in calcium absorption, its deficiency may result in low levels of calcium in the blood. This in turn affects the bones and can cause osteoporosis. Studies have shown that it may be an important factor in postmenopausal osteoporosis. It can also result in the reduction of potassium levels in the human body.
- Loss of appetite, vomiting tendency, weakness, and fatigue are some of its symptoms. Low levels of magnesium in the human body can cause migraine, muscle cramps, allergies, abnormal heartbeat, fibromyalgia, coronary spasms, attention deficit disorder, etc.
- Magnesium is an important element that ensures proper lung function, and therefore, it can be effective in treating asthma. When the level of this element is low in the body, adrenaline secretion increases, thereby leading to anxiety.
Role of Magnesium in the Human Body
- Magnesium plays an important role in regulating blood pressure. Blood pressure can be effectively controlled by consuming magnesium-rich foods. It is also good for those suffering from diabetes. Diabetes results in insufficient production of insulin required to convert starch into energy. Magnesium is an important element which facilitates carbohydrate metabolism, and hence, plays an important role in the treatment of diabetes. Allergies are caused when histamine level in the body increases. Magnesium plays an important role in the treatment of allergies as it reduces the level of histamine.
- Green vegetables, whole grains, peanuts and nuts, oysters, legumes, and soy milk are some foods that are rich in magnesium. Eating these foods in large amounts is beneficial for people suffering from this condition.
- According to scientists, almost 300 biochemical reactions require the presence of magnesium for their smooth operation. Hence, intake of sufficient amounts of magnesium is very important for a healthy and long life. People suffering from allergies, asthma, diabetes, osteoporosis, and cardiovascular diseases should include foods rich in magnesium in their daily diet. |
Often when a story or its characters or plot resonate with us, it is because some element of the text represents conditions or individuals in our society and world. More often than not such representativeness carries political implications as well, leading Foster to highlight the importance of understanding the political undertones of a literary piece. Foster distinguishes between overtly political writing, whose main intent is to influence the prevailing political thought or ideology, and “political” writing that is more subtle and perhaps more effective. Political writing offers a perspective into the realities of the world and in doing so touches upon themes and problems that are collectively shared and thus relatable. Edgar Allan Poe, for instance, provides a criticism of the European class system and its elitism in his stories “The Masque of the Red Death” and “The Fall of the House of Usher.” Both stories look at the conditions and practices of the nobility, and emerge as commentary upon the systems of monarchy and aristocracy.
Political undertones can also be found in seemingly apolitical texts such as Rip Van Winkle. Because political considerations are closely entwined with social, economic, historic and cultural issues, it is unsurprising that many texts can be said to be political in nature. Consequently, Foster argues that knowing something about the political and social context in which the writer was writing is significant for it can add a dimension to the text which readers, in their own unique political settings, might not have realized.
In chapter 14 Foster analyzes the Christian trope found in works of European and American literature. The dominance of cultural influences brought by early European settlers has meant that Christian values are deeply woven into our social fabric, the consequence of which is that we live in a Christian culture. This influence can be seen in works of literature as well; in fact, texts draw so heavily upon this religious tradition that knowledge of the Old and New Testaments is quite essential. It is important to note that the values that appear in a text, while technically “Christian,” need not take on a religious role but are more significant in revealing something about the character, plot, or theme of the story.
One of the more frequent Biblical archetypes used in literature is the figure of Christ, and Foster recommends familiarizing oneself with certain features of his character that appear in various guises in literary texts. These include qualities such as self-sacrifice, closeness with children, loaves, fish, water and wine, thirty-three years of age, crucifixion, and so forth. While some literary figures closely resemble Christ (Ernest Hemingway’s The Old Man and the Sea is replete with Christian imagery), others are more ambiguous, and indeed do not even have to embody the characteristic features of being male, Christian, or even good (in the latter case, the parallel to the Christ figure becomes an irony). Allusions to Christ can have various effects, from emphasizing a character’s sacrifice by relating it to Divine sacrifice, to ushering in notions of hope, redemption, or miracle, or even portraying the character as much smaller by highlighting the discrepancy between him or her and the figure of Christ.
Although flight is not a skill humans can lay claim to, our fascination with flying has remained with us to this day. It is unsurprising, then, that flight should feature so prominently in literature, and what is perhaps more pertinent are the literary implications it carries. Writers from the time of Greek mythology - and possibly well before - have ascribed various meanings and symbolic significance to descriptions of flight. Flying can represent freedom, escape, exuberance, largeness of spirit, even love, but if gone wrong it can also symbolize downfall (in the metaphoric and literal sense), danger, and helplessness. For the most part, however, flight according to Foster represents freedom, and the motif is found and manipulated in various ways in different literary works.
Flight doesn't only have to appear in the literal sense, however: figurative flights are just as laden with meaning. In A Portrait of the Artist as a Young Man, James Joyce presents his protagonist as someone who feels trapped by the social, religious, political and personal constraints and the struggle to throw off these fetters, so to speak, conveys a distinct sense of metaphorical flight that is further compounded by the images of birds, feathers and flying in the second half of the novel. Often, flight is also a stand-in for a freeing of the spirit or soul into realms that reach our furthest imaginations. Flying, then, opens a host of possibilities, for the character and text in question, as well as for the analytical reader.
The featuring of Christ in literature is a complex literary convention, one that has been in use for centuries. Readers can perhaps get a sense of how heavy and lasting this feature is when we consider Rosemary Woolf's research. Woolf draws parallels between the knight figure and Christ, arguing how the latter was represented in medieval culture (and perhaps even before) as the knight who embarks on the quest. This is certainly a fresh way to consider the importance of the journey and knight motif that Foster presents in the first chapter. More importantly, it underscores how pervasive Christ has been in literature, in forms that are perhaps not easily recognizable. This affirms Foster's argument, then, that literature that contains Christ symbolism may not be completely true to the persona that emerges in religious texts or scripture, which is to say, not all characteristics of Christ may be outlined. Woolf's studies can also help us see how perhaps many literary conventions - including but not limited to the quest/knight motif - trace their origins to Christianity.
Of course the relationship is never so simple - Christianity or its perceptions may in turn be informed by culture including literature. One should also note that there is a difference between overtly religious literature and literature that contains religious references - the latter is what Foster is referring to when discussing the Christ trope. The subtle difference between the two can be assessed by noticing for what purposes the character which resembles Christ is used - religious literature is likely to explicitly identify the cause as one of salvation or Divine intervention whereas this is not the central theme or indeed purpose of other literary texts. Robert Detweiler who studies the Christ trope in American Fiction makes an interesting argument, saying “Perhaps the creation of the Christ figure has to remain the task of the secular writer, for the religious novelist who attempts to work with it finds himself caught in an uneasy liaison: the doctrinal Jesus he propagandizes and the symbolic Christ he tries to fashion invariably get in the way of each other, so that eventually both the art and the all-important message of his story suffer.” While ultimate conclusions on the 'effectiveness' of the Christ persona - and even whether the question of effectiveness matters at all - rests with the reader, it is nonetheless interesting to consider how Christ fares in secular literature such as that which Foster analyzes.
Foster's analysis of flight echoes conclusions drawn by Swiss psychologist Carl Jung who identifies flying as symbolic of freedom, escape, attempts for liberation and so forth, pointing to the almost universal meaning that we attach with such actions. Foster discusses how flying can appear in various forms, through imagery, plot, themes and so forth but it is also helpful to consider other symbols that represent flight - and which carry more or less the same meaning as flying itself. These objects or symbols include the sky, birds, feather, wind, clouds and shooting stars to mention a few. Flying and its symbolism, then, can be communicated through indirect means and doesn't necessarily involve characters who dream of flying, or who fly themselves.
An important theme related to flying is also what it reveals about the human attraction to the unknown. Flying into the skies or heavens, in others words to realms that we are unfamiliar with and which are "foreign" suggests a basic human curiosity of what lies beyond the world we know as well as the tendency to seek out new territory. It is an example of man's consistent efforts to pursue knowledge, to assert himself or his presence in various spheres of life. In many ways flying is like traveling - but perhaps with more spiritual or metaphysical implications because of the otherworldly quality of the place of destination. An interesting assignment for literary students would be to compare the literary symbolism of ocean/sea journeys, land travels, and aerial flights.
Flight has also been a prominent trope in African-American literature, and it is worth considering perspectives of scholars in this field because their analysis differs from Foster's and indeed from the common interpretations of flying. Scholars such as Guy Wilentz focus on folk legends of flight and resistance to argue that there is historically and culturally specific symbolism associated with flying. In other words, different communities have varied understandings and interpretations of flight. For the African-American community, myths and traditions of flight have been specific to escape and freedom from the shackles of slavery. This takes on even more complexity when we note that authors use 'slavery' in various ways in their works: it can refer to the actual physical ordeal that a community experiences, or to mental slavery, whether to one's own self-perceptions and thoughts or to society’s perspectives and judgment. |
object-oriented programming, a modular approach to computer program (software) design. Each module, or object, combines data and procedures (sequences of instructions) that act on the data; in traditional, or procedural, programming the data are separated from the instructions. A group of objects that have properties, operations, and behaviors in common is called a class. By reusing classes developed for previous applications, new applications can be developed faster with improved reliability and consistency of design. The first object-oriented programs, written in the language Simula 67, were used extensively for modeling and simulation, primarily in Europe during the late 1960s and early 1970s. The technique was popularized in the United States during the following decade using the language Smalltalk and achieved its greatest prominence with the development of the object-oriented language C++ during the late 1980s and 1990s.
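As a minimal illustration of the idea that an object bundles data with the procedures that act on it, and that an existing class can be reused to build new ones, here is a short sketch. The class and method names are invented for the example; the entry itself refers to Simula 67, Smalltalk, and C++, but the same idea is shown here in Python.

```python
class BankAccount:
    """An object combines data (the balance) with procedures that act on it."""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


class SavingsAccount(BankAccount):
    """Reuse of an existing class: inherit its data and procedures, add behavior."""

    def add_interest(self, rate: float) -> None:
        self.balance += self.balance * rate


acct = SavingsAccount("Ada", 100.0)
acct.deposit(50.0)
acct.add_interest(0.02)
print(round(acct.balance, 2))  # 153.0
```

The way SavingsAccount reuses BankAccount is the point the entry makes about developing new applications faster from previously written classes.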
See P. W. Oman and T. G. Lewis, Milestones in Software Evolution (1990); T. Budd, An Introduction to Object-Oriented Programming (1991); P. Varhol, Object-Oriented Programming: The Software Development Revolution (1993); P. Coad and J. Nicola, OOP, Object-Oriented Programming (1993).
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. |
Use These Activities
Putting It All Together
Students might enjoy comparing the way the events are recounted in the readings with the way they are presented in a U.S. history textbook and discussing the differences in perspective. In studying nationally important events, students do not always learn how these events evolve from local issues or how national debate and decision affect individual communities. The following activities will encourage students to make those connections.
Activity 1: Locating a Railway
Have students refer back to Map 4 and identify the railroad nearest their community or region. Discuss whether and to what degree railroads were important to townspeople in the 19th century. Local histories found at public libraries usually have a chapter devoted to the coming of the railroad. Some students might wish to research this topic and present a report to the class. (Students in Hawaii, Alaska, or the Territories might choose to look at a community they have visited or would like to visit.)
Activity 2: Examining Trials
After students have discussed the Dred Scott case, have them look up the meaning and discuss the following court-related words: plaintiff, defendant, prosecutor, judge, defense attorney, jury, verdict, appeal, Supreme Court, civil case, criminal case, precedent, litigation, bailiff.
If a class visit to an actual trial is possible, prepare the students by asking them to choose a particular person involved in the case with whom to identify. Back in the classroom, have those representing the plaintiff, defendant, judge, etc., meet in their respective groups and discuss the following issues:
1. How well each of the attorneys presented his/her case.
2. The approaches taken by the plaintiff, defendant, judge, etc., as they performed their roles.
3. How the students would have acted if they had assumed those roles.
4. How they felt about the verdict.
Activity 3: Local and National Connections
Have the students search for examples of how their own community is currently connected with the broader events of the nation. After they have found recent newspaper articles that explore issues of public concern (e.g., interstate environmental issues, civil rights or abortion rights controversies), have the students determine where in their community such issues are debated and discussed. Then have them write a short essay in which they discuss:
1. Whether or not the same degree of public interest is aroused as in the railroad controversy of the 1840s-60s.
2. Whether or not there is a single site in their community that serves the same purpose as the Old Courthouse did.
Finally, ask the students to discuss the essays and the role of public buildings in modern communities.
Activity 4: Historic Preservation
Have the students identify an older public building in their own community and research its original purpose and its uses over time. Ask them to answer the following questions:
1. What purpose did this building serve? Is that function still important to the community? Did any important events take place here? If so, why were they important?
2. Is the building in use or vacant?
3. If in use, is the building still used for its original purpose or has it been adapted for another?
4. If the building is vacant, has another building assumed its original purpose?
5. Should the building be restored? What kinds of adaptive use would be feasible?
If possible, have a local preservation expert visit the class to discuss these questions with the students and to explain how decisions are made as to whether or not to preserve such buildings. |
An HTML file is nothing more than plain ASCII text, but all HTML files must have a special file extension for web browsers to recognize them. This extension is either .htm OR .html. The reason why there are two possible extensions goes back to the early days of the web. People were still using Windows 3.1 and DOS then, and Windows 3.1 and DOS file extensions could only be 3 letters long, hence .htm. Now .html is the more commonly accepted extension. For this course, we will be using the .html extension.
As you create web pages, it is important to keep them all together. This makes them easier to find on your computer, and makes it easier when you are ready to make your website go "live" on the Internet. So before we start saving files, let's create a folder where they will go.
Now you are ready to save the file:
You should always check your web page in a browser to make sure it looks okay.
To Preview Your File in Internet Explorer:
(Most browsers use similar commands to open a file - start with the File menu and look for "Open Page" or "Open File".)
Look at my index.html. When the page opens, click the View menu and select Source to see the HTML code.
Note: As you work on your web page, you should keep your web browser and Notepad open. If you keep your index.html file open in your web browser, all you need to do after saving any changes you make to the file in Notepad is click the Reload or Refresh button in your browser window to see the changed index.html file. This means that you should have two windows open: Notepad (with index.html) and your web browser (displaying index.html).
Next: Adding Lists |
Wolf–Rayet stars (WR stars) are evolved, massive stars (over 20 solar masses initially). They are losing mass rapidly by means of a very strong stellar wind, with speeds up to 2000 km/s. While our own Sun loses about 10⁻¹⁴ solar masses every year, Wolf–Rayet stars typically lose 10⁻⁵ solar masses a year.
Wolf–Rayet stars are extremely hot, with surface temperatures in the range of 30,000 K to around 200,000 K. They are also highly luminous, from tens of thousands to several million times the bolometric luminosity of the Sun, although not exceptionally bright visually since most of their output is in far ultraviolet and even "soft" X-rays.
Clarification of terms
In astronomy, luminosity is not quite the same thing as brightness. Luminosity measures the total amount of energy emitted by a star or other astronomical object in SI units of joules per second, which are watts. A watt is a unit of power, and just as a light bulb is measured in watts, so is the Sun, which has a total power output of 3.846×10²⁶ W. This number is the basic metric used in astronomy: it is known as 1 solar luminosity, the symbol for which is L☉.
Radiant power, however, is not the only way to conceptualize brightness, so other metrics are also used. The most common is apparent magnitude, the perceived brightness of an object to an observer on Earth at visible wavelengths. Another is absolute magnitude, an object's intrinsic brightness at visible wavelengths, irrespective of distance. The magnitude measure tied to luminosity is "bolometric magnitude", which covers the total power output across all wavelengths.
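The numbers quoted above allow a quick back-of-the-envelope comparison. The short sketch below only rearranges figures already given in this article (mass-loss rates of roughly 10⁻⁵ versus 10⁻¹⁴ solar masses per year, and a solar luminosity of 3.846×10²⁶ W); the variable names and the choice of one million solar luminosities as an example are assumptions made for illustration.

```python
SOLAR_LUMINOSITY_W = 3.846e26          # watts, as quoted above

wr_mass_loss_per_year = 1e-5           # solar masses per year (typical WR star)
sun_mass_loss_per_year = 1e-14         # solar masses per year (the Sun)

# A WR star sheds mass about a billion times faster than the Sun.
print(wr_mass_loss_per_year / sun_mass_loss_per_year)   # ~1e9

# A WR star of one million solar luminosities, expressed in watts.
wr_luminosity_solar = 1e6
print(wr_luminosity_solar * SOLAR_LUMINOSITY_W)          # ~3.8e32 W
```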
References
- Cannizzo J.K. 1998. "Ask an astrophysicist: Wolf-Rayet Stars". Nasa.gov. http://imagine.gsfc.nasa.gov/docs/ask_astro/answers/980603a.html.
- Sander A; Hamann W.R. & Todt H. 2012. The galactic WC stars. Astronomy & Astrophysics 540: A144.
- X-rays with lower energy |
The High Renaissance is rooted in the Italian art that developed at the end of the Middle Ages. It was fully present in all the arts of the day, and was in conflict with the nationalist schools of painting and architecture in other European countries. The High Renaissance reaches its culmination in the XVI century, marking a return to the artistic values of antiquity.
After a first wave consisting of artists like Brunelleschi, Alberti, Bramante, Donatello, Verrocchio, Pollaiuolo, Fra Angelico, and Giovanni Bellini, there came names like Michelangelo, Leonardo da Vinci, Raphael and Titian.
The High Renaissance was also paralleled by the literature of the XV and XVI centuries, through Ariosto, Machiavelli, and Bembo.
It was also the age of the great patrons of the arts. Lorenzo de’ Medici, Ludovico Sforza, and popes Julius II, Leo X, and Alexander VI funded and encouraged artists and writers alike.
The High Renaissance was not only a style or fashion in the arts; it prefigured the dawn of Rationalism in Europe and eroded the principles of authority and autarchy by glorifying the individual. It fought dogmas, and ultimately led to the religious crisis of the Reformation. |
Prices and Markets Review Questions
What is the basis for trade?
A country (or firm) has a comparative advantage over another in the production of a good if it can produce it at a lower opportunity cost.
What is absolute advantage?
Absolute advantage means that an economy can produce a good at a lower cost than another: fewer resources are needed to produce the same amount of goods.
What is comparative advantage?
The law of comparative advantage says that two countries (or firms) can both gain from trade if, in the absence of trade, they have different relative costs for producing the same goods.
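A small numerical sketch can make the distinction concrete. The countries, goods, and labour requirements below are invented for illustration; the point is that each side's opportunity cost of a good is how much of the other good it gives up, and the comparative advantage belongs to whoever has the lower opportunity cost.

```python
# Hours of labour needed per unit of output (hypothetical figures).
hours = {
    "Country A": {"wine": 2, "cloth": 4},
    "Country B": {"wine": 6, "cloth": 3},
}

def opportunity_cost(country: str, good: str, other: str) -> float:
    """Units of `other` given up to produce one unit of `good`."""
    return hours[country][good] / hours[country][other]

for country in hours:
    cost = opportunity_cost(country, "wine", "cloth")
    print(f"{country}: producing 1 wine costs {cost:.2f} cloth")

# Country A: 1 wine costs 0.50 cloth; Country B: 1 wine costs 2.00 cloth.
# So A has the comparative advantage in wine and B in cloth, and both can gain
# by specializing and trading, even though A also needs fewer hours for wine
# (an absolute advantage) and B fewer hours for cloth.
```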
Some geometric terms and results, explained in words and figures.
Some Geometric Terms and Results:
• The sum of all the angles at a point is 360°.
i.e., ∠1 + ∠2 + ∠3 + ∠4 + ∠5 + ∠6 = 360°
• The sum of all the angles about a point on a straight line, on one side of it, is 180°. i.e., ∠1 + ∠2 + ∠3 + ∠4 = 180°
Some Important Geometric Terms Used:
1. Equal angles:
Two angles are said to be equal if they have the same degree measure. ∠MNO and ∠XYZ are equal angles of measure 90°.
2. Bisector of an Angle:
The ray which divides the given angle into two equal angles is called an angle bisector.
In the adjoining figure, the ray BD divides ∠ABC into two equal angles ∠ABD and ∠DBC i.e., ∠ABD = ∠DBC.
3. Perpendicular Lines:
Two lines in a plane are said to be perpendicular if they intersect in such a way that the angles formed between them are right angles. In the adjoining figure, lines PQ and RS intersect at O such that ∠ROQ = ∠ROP = ∠POS = ∠QOS = 90°.
Therefore, we say that PQ is perpendicular to RS, i.e., (PQ ⊥ RS).
4. Perpendicular Bisector:
It is the line which passes through the midpoint of the given line segment and is also perpendicular to it. Here, MN is the line segment. PQ is the perpendicular bisector as ∠POM = ∠PON = 90° and MO = ON. (A small coordinate sketch of these last two definitions follows at the end of this section.)
Some geometric terms and results are explained along with the specific figure. |
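As a quick illustration of the perpendicularity and perpendicular-bisector definitions above, here is a small sketch with made-up coordinates (the point names echo the figures, but the numbers are invented): two segments are perpendicular when their direction vectors have a zero dot product, and a perpendicular bisector also passes through the midpoint of the segment.

```python
def dot(u, v):
    """Dot product of two 2-D vectors."""
    return u[0] * v[0] + u[1] * v[1]

def midpoint(p, q):
    """Midpoint of the segment joining points p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Segment MN and a candidate perpendicular bisector PQ (hypothetical coordinates).
M, N = (0, 0), (4, 0)
P, Q = (2, -3), (2, 3)

MN = (N[0] - M[0], N[1] - M[1])
PQ = (Q[0] - P[0], Q[1] - P[1])

print(dot(MN, PQ) == 0)   # True: MN and PQ are perpendicular
print(midpoint(M, N))     # (2.0, 0.0), which lies on PQ, so PQ bisects MN
```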
Priapulids are an obscure group of marine worms that live in shallow waters.
“Surprisingly, priapulids form the gut like humans, fish, frogs, starfish and sea urchins –and all of them even use the same genes. It does not mean that these penis worms are now closely related to humans. Instead the fact that different animals share a common way of forming the gut suggests that the embryological origins of the human intestine and how it develops are much older than previously thought – most likely over 500 million years, when the first bilaterally symmetric animals appeared on Earth” remarks Hejnol.
The study, featured online on the 25th of October in the journal “Current Biology”, represents the first description of the entire embryonic development of these enigmatic animals.
“Priapulids are important for understanding the evolution of animals, because they are thought to be among the first bilaterally symmetric animals and have changed very little since the earth’s Cambrian Period” says first author Dr José M. Martín-Durán.
Bilaterally symmetric animals (99% of all animals) are those with a left and right body side. Historically, they have been divided into two large groups based on major differences in how the gut develops in the embryo. The intestine is an essential organ, and that is why it is present in nearly all animal species. The gut develops very early, when some cells move towards the inside of the embryo, usually at a defined region that is called the ‘blastopore’.
“The important point is that in some animals this region becomes the mouth, while in others it becomes the anus. For more than a century, this difference has captivated scientists, but there is not a completely satisfactory explanation for it yet” explains Hejnol.
The work shows how important it is to study the vast diversity of animals found in the oceans.
“Priapulids still hide a lot of secrets to unravel, which will have a great influence on our understanding of the origin of other major organs, such as the brain, blood or legs” concludes Hejnol.
They reproduce in winter time, so the scientists have to travel regularly to the west coast of Sweden during the ice-cold season to get a hold of them.
“We sail the fjords dredging in areas where they are abundant, collecting animals and later getting embryos from them in the lab. Although thrilling, sometimes the collection trips turn into real adventures, with low temperatures, snow or even frozen waters” says Martín-Durán.
The research, carried out by the developmental biologists Dr José M. Martín-Durán and Dr Andreas Hejnol at the Sars International Centre for Marine Molecular Biology in collaboration with Dr Ralf Janssen, Dr Sofia Wennberg and Dr Graham E. Budd (Uppsala University, Sweden), features online on the 25th of October in the journal “Current Biology”. The collection trips were funded by the European Union Infrastructures program “ASSEMBLE”.
Reference: Deuterostomic Development in the Protostome Priapulus caudatus. 2012. J.M. Martín-Durán, R. Janssen, S. Wennberg, G. E. Budd, A. Hejnol. Current Biology: doi:10.1016/j.cub.2012.09.037 |