32. 3D User Interfaces
Ever since the advent of the computer mouse and the graphical user interface (GUI) based on the Windows, Icons, Menus, and Pointer (WIMP) paradigm, people have asked what the next paradigm shift in user interfaces will be (van Dam, 1997; Rekimoto, 1998). Mouse-based GUIs have proven remarkably flexible, robust, and general, but we are finally seeing a major sea change towards "natural" user interfaces (NUIs), not only in the research lab, but also in commercial products aimed at broad consumer audiences. Under the NUI umbrella, there are two broad categories of interfaces: those based on direct touch, such as multi-touch tablets (Wigdor & Wixon, 2011), and those based on three-dimensional spatial input (Bowman et al., 2005), such as motion-based games. It is this latter category, which we call three-dimensional user interfaces (3D UIs), that we focus on in this chapter.
32.1 What are 3D User Interfaces?
Like many high-level descriptive terms in our field (such as "virtual reality" and "multimedia"), it's surprisingly difficult to give a precise definition of the term "3D user interface." Although most practitioners and researchers would say, "I know one when I see one," stating exactly what constitutes a 3D UI and which interfaces should be included and excluded is tricky.
3D User Interfaces: Theory and Practice (Bowman et al., 2005) defines a 3D user interface as simply "a UI that involves 3D interaction." This simply delays the inevitable, as we now have to define 3D interaction. The book states that 3D interaction is "human-computer interaction in which the user's tasks are performed directly in a 3D spatial context."
One key word in this definition is "directly." There are some interactive computer systems that display a virtual 3D space, but the user only interacts indirectly with this space—e.g., by manipulating 2D widgets, entering coordinates, or choosing items from a menu. These are not 3D UIs.
The other key idea is that of a "3D spatial context." The book goes on to make it clear that this spatial context can be either physical or virtual, or both. The most prominent types of 3D UIs involve a physical 3D spatial context, used for input. The user provides input to the system by making movements in physical 3D space or manipulating tools, sensors, or devices in 3D space, without regard for what this input is used to do or control. Of course, all input/interaction is in some sense in a physical 3D spatial context (a mouse and keyboard exist in 3D physical space), but the intent here is that the user is giving spatial input that involves 3D position (x, y, z) and/or orientation (yaw, pitch, roll) and that this spatial input is meaningful to the system.
Thus, the key technological enabler of 3D UIs of this sort is spatial tracking (Meyer et al., 1992; Welch & Foxlin, 2002). The system must be able to track the user's position, orientation, and/or motion to enable this input to be used for 3D interaction. For example, the Microsoft Kinect tracks the 3D positions of multiple body parts to enable 3D UIs, while the Apple iPhone tracks its own 3D orientation, allowing 3D interaction. There are many different technologies used for spatial tracking; we describe some of these in a later section.
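To make the notion of spatial input concrete, the sketch below (Python, with hypothetical names; not from the chapter) shows the kind of sample a tracking system delivers each frame. A full 6-DOF tracker fills in both fields, while a 3-DOF orientation sensor such as a phone's gyroscope/compass fusion supplies only the orientation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TrackerSample:
    """One sample from a spatial tracker (hypothetical structure for illustration)."""
    timestamp: float                                # seconds
    position: Optional[Tuple[float, float, float]]  # (x, y, z) in meters; None for orientation-only sensors
    orientation: Tuple[float, float, float]         # (yaw, pitch, roll) in radians

# A 6-DOF tracker reports both fields; a smartphone might report orientation only:
phone = TrackerSample(timestamp=0.016, position=None, orientation=(1.2, 0.1, 0.0))
```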
This tracked spatial input can be used for iconic gestures, direct pointing at menu items, controlling characters in a game, specifying 3D shapes, and many other uses. 3D UIs based on spatial input can be found in a variety of settings: gaming systems, modeling applications, virtual and augmented reality systems, large screen visualization setups, and art installations, just to name a few.
The other type of 3D UI involves direct interaction in a virtual 3D spatial context. In this type, the user may be using traditional (non-3D) input devices or movements as inputs, but if those inputs are transformed directly into actions in the virtual 3D space, we still consider it to be 3D interaction. For example, the user might drag the mouse across a 3D model in order to paint it a certain color, or the user might draw a path through a 3D world using touch input.
In this chapter, we are going to focus on the first type of 3D UI, which is based on 3D spatial input. While both types are important and have many applications, they involve different research issues and, to a large degree, different technologies. 3D spatial tracking has come of age recently, and based on this technological driver, 3D UI applications with spatial input have exploded. We discuss a few of these applications in more detail in the next section.
32.2 Applications of 3D UIs
Why is it important to understand and study 3D UIs? For many years, the primary application of 3D UIs was in high-end virtual reality (VR) and augmented reality (AR) systems. Since users in these systems were generally standing up, walking around, and limited in their view of the real world, traditional mouse- and keyboard-based interaction was impractical. Because such systems were already using spatial tracking of the user's head to display the correct view of the virtual world, it was natural to also design UIs that took advantage of spatial tracking. As we indicated above, however, recent years have seen an explosion of spatial input in consumer-level systems such as game consoles and smartphones. Thus, the principles of good 3D UI design are now more important to understand than ever.
To further motivate the importance of 3D UI research, let's look in a bit more detail at some important technology areas where 3D UIs are making an impact on real-world applications.
32.2.1 Video Gaming
As we've already mentioned, most people today are aware of 3D UIs because of the great success of "motion gaming" systems like the Nintendo Wii, the Microsoft Kinect, and the Sony Move. All of these systems use spatial tracking to allow users to interact with games through pointing, gestures, and most importantly, natural movements, rather than with buttons and joysticks. For example, in an archery game a user can hold two tracked devices—one for the handle of the bow and the other for the arrow and string—and can pull back the arrow, aim, and release using motions very similar to archery in the real world.
The Wii and Move both use tracked handheld devices that also provide buttons and joysticks, while the Kinect tracks the user's body directly. There's a clear tradeoff here. Buttons and joysticks are still useful for discrete actions like confirming a selection, firing a weapon, or changing the view. On the other hand, removing encumbrances from the user can make the experience seem even more natural.
3D UIs are a great fit for video gaming (LaViola, 2008; Wingrave et al., 2010), because the emphasis is on a compelling experience, which can be enhanced with natural actions that make the player feel as if he is part of the action, rather than just indirectly controlling the actions of a remote character.
32.2.2 Very Large Displays
Recent years have seen an explosion in the size, resolution, and ubiquity of displays. So-called "display walls" are found in shopping malls, conference rooms, and even people's homes. Many of these displays are passive, simply presenting canned information to viewers, but more and more of them are interactive.
So how should one interact with these large displays? The traditional mouse and keyboard still work, but they are difficult to use in this context because users want to move about in front of the display, and because such large displays invite multiple users (Ball and North, 2005). Touch screens are another option, but that means that to interact with the display one has to stand within arm's reach, limiting the amount of the display that can be seen.
3D interaction is a natural choice for large display contexts. A tracked handheld device, the hand itself, or the whole body can be used as portable input that works from any location and makes sense for multiple users. The simplest example is distal pointing, where the user points directly at a location on the display (as with a laser pointer) to interact with it (Vogel & Balakrishnan, 2005; Kopper et al., 2010), but other techniques such as full-body gestures or viewpoint-dependent display can also be used.
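To illustrate the core computation behind distal pointing, here is a minimal ray-casting sketch (Python with NumPy; the function name and conventions are illustrative assumptions, not from any cited system). The tracked hand pose gives a ray, which is intersected with the display plane to place a cursor.

```python
import numpy as np

def distal_pointing_cursor(hand_pos, hand_dir, plane_point, plane_normal):
    """Intersect the pointing ray with the display plane.

    hand_pos, hand_dir: 3D position and forward direction from the 6-DOF tracker.
    plane_point, plane_normal: any point on the display plane and its unit normal.
    Returns the 3D hit point on the display, or None if there is no hit.
    """
    hand_dir = hand_dir / np.linalg.norm(hand_dir)
    denom = np.dot(hand_dir, plane_normal)
    if abs(denom) < 1e-6:
        return None                        # pointing parallel to the display
    t = np.dot(plane_point - hand_pos, plane_normal) / denom
    if t < 0:
        return None                        # display is behind the pointing direction
    return hand_pos + t * hand_dir         # cursor position on the display plane
```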
32.2.3 Mobile Applications
Today's mobile devices, such as smartphones and tablets, are an interaction designer's playground, not only because of the rich design space for multi-touch input, but also because these devices incorporate some fairly powerful sensors for 3D spatial input. The combination of accelerometers, gyroscopes, and a compass gives these devices the ability to track their own orientation quite accurately. Position information based on GPS and accelerometers is less accurate, but still present. These devices offer a key opportunity for 3D interaction design, however, because they are ubiquitous, they have their own display, and they can do spatial input without the need for any external tracking infrastructure (cameras, base stations, etc.).
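A common way such sensors are fused (shown here for a single axis; a sketch of the general idea, not any specific phone's API) is a complementary filter: the gyroscope integrates smoothly but drifts, while the accelerometer's gravity reading is noisy but drift-free, so blending the two yields a stable orientation estimate.

```python
import math

def accel_pitch(ay, az):
    """Pitch inferred from the gravity direction sensed by the accelerometer
    (noisy, but it does not drift over time)."""
    return math.atan2(ay, az)

def fused_pitch(prev_pitch, gyro_rate, ay, az, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate (smooth, but drifting)
    and continuously nudge the result toward the accelerometer's estimate."""
    return alpha * (prev_pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch(ay, az)
```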
Many mobile games are using these capabilities. Driving games, for example, use the "tilt to steer" metaphor. Music games can sense when the user is playing a virtual drum. And golf games can incorporate a player's real swing.
But "serious" applications can take advantage of 3D input for mobile devices as well. Everyone is familiar with the idea of tilting the device to change the interface from portrait to landscape mode, but this is only the tip of the iceberg. A tool for amateur astronomers can use GPS and orientation information to help the user identify stars and planets they point the device towards. Camera applications can not only record the location at which a photo was taken, but also track the movement of the camera to aid in the reconstruction of a 3D scene.
Perhaps the most prominent example of mobile device 3D interaction is in mobile AR. In mobile AR, the smartphone becomes a window through which the user can see not only the real world, but virtual objects and information as well (Höllerer et al., 1999; Ashley, 2008). Thus, the user can browse information simply by moving the device to view a different part of the real world scene. Mobile AR is being used for applications in entertainment, navigation, social networking, tourism, and many more domains. Students can learn about the history of an area; friends can find restaurants surrounding them and link to reviews; and tourists can follow a virtual path to the nearest subway station. Prominent projects like MIT's SixthSense (Mistry & Maes, 2009) and Google's Project Glass (Google, 2012) have made mobile AR highly visible. Good 3D UI design is critical to realizing these visions.
32.3 3D UI Technologies
As we discussed above, spatial tracking technologies are intimately connected to 3D UIs. In order to design usable 3D UIs, then, a basic understanding of spatial tracking is necessary. In addition, other input technologies and display devices play a major role in 3D UI design.
32.3.1 Tracking Systems and Sensors
Spatial tracking systems sense the position, orientation, linear or angular velocity, and/or linear or angular acceleration of one or more objects. Traditionally, 3D UIs have been based on six-degree-of-freedom (6-DOF) position trackers, which detect the absolute 3D position (location in a fixed XYZ coordinate system) and orientation (roll, pitch, and yaw in the fixed coordinate system) of the object, which is typically mounted on the head or held in the hand.
These 6-DOF position trackers can be based on many different technologies, such as those using electromagnetic fields (e.g., Polhemus Liberty), optical tracking (e.g., NaturalPoint OptiTrack), or hybrid ultrasonic/inertial tracking (e.g., Intersense IS900). All of these, however, share the limitation that some external fixed reference, such as a base station, a camera array, a set of visible markers, or an emitter grid, must be used. Because of this, absolute 6-DOF position tracking can typically only be done in prepared spaces.
Inertial tracking systems, on the other hand, can be self-contained and require no external reference. They use technologies such as accelerometers, gyros, magnetometers (compasses), or video cameras to sense their own motion—their change in position or orientation. Because they measure relative position and orientation, inertial systems can't tell you their absolute location, and errors in the measurements tend to accumulate over time, producing drift.
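A back-of-the-envelope illustration of why drift matters: deriving position from an accelerometer requires integrating twice, so even a tiny constant sensor bias produces error that grows quadratically with time. The bias value below is an assumed example.

```python
def position_error_from_bias(bias_m_s2, seconds):
    """Position error after double-integrating a constant accelerometer bias:
    x(t) = 0.5 * bias * t^2, i.e., quadratic growth."""
    return 0.5 * bias_m_s2 * seconds ** 2

# An optimistic 0.01 m/s^2 bias already yields half a meter of error in 10 s:
print(position_error_from_bias(0.01, 10))   # 0.5 (meters)
```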
The "holy grail" of spatial tracking is a self-contained 6-DOF system that can track its own absolute position and orientation with high levels of accuracy and precision. We are getting closer to this vision. For instance, a smartphone can use its accelerometers, gyros, and magnetometer to track its absolute orientation (relative to gravity and the earth's magnetic field), and its GPS receiver to track its 2D position on the surface of the earth. However, GPS position is only accurate to within a few feet at best, and the height (altitude) of the phone cannot currently be tracked with any accuracy. For now, then, smartphones on their own cannot be used as a general-purpose 6-DOF input device.
A 6-DOF tracker with minimal setup requirements is the Sony Move system. Designed as a "motion controller" (although it really senses position) for the PlayStation game console, the Move uses the typical accelerometers and gyros to sense 3D orientation, and a single camera to track the 3D position of a glowing ball atop the device. This works surprisingly well, coming near to the accuracy of much more expensive and complex tracking systems, but does have the limitation that the user must be facing the camera and not blocking the camera's view of the ball. In addition, accuracy in the depth dimension is worse than in the horizontal and vertical dimensions.
Probably the best candidate for self-contained 6-DOF tracking is inside-out vision-based tracking, in which the tracked object uses a camera to view the world, and analyzes the changes in this view over time to understand its own motion (translations and rotations). Although this approach is inherently relative, such systems can keep track of "feature points" in the scene to give a sort of absolute tracking in a fixed coordinate system connected with the scene. Algorithms such as parallel tracking and mapping (PTAM) (Klein & Murray, 2007) are getting closer to making this a reality.
Three recent tracking developments deserve special mention, as they are bringing many new designers and researchers into the realm of 3D UIs. The first is the Nintendo Wii Remote. This gaming peripheral does not offer 6-DOF tracking, but does include several inertial sensors in addition to a simple optical tracker that can be used to move a cursor on the screen. Wingrave and colleagues (Wingrave et al., 2010) presented a nice discussion of how the Wii Remote differs from traditional trackers, and how it can be used in 3D UIs.
Second, the Microsoft Kinect (Figure 1) delivers tracking in a very different way. Rather than tracking a handheld device or a single point on the user's head, it uses a depth camera to track the user's entire body (a skeleton of about 20 points). The 3-DOF position of each point is measured, but orientation is not detected. And since it tracks the body directly, no "controller" is needed. Researchers have designed some interesting 3D interactions with Kinect (e.g., Wilson & Benko, 2010), but they are necessarily quite different than those based on single-point 6-DOF tracking.
Figure 1: The Microsoft Kinect.
Third, the Leap Motion device, which has been announced but is not available at the time of this writing, promises to deliver very precise 3D tracking of hands, fingers, and tools in a small workspace. It has the potential to make 3D interaction a standard part of the desktop computing experience, but we will have to wait and see how best to design interaction techniques for this device. It will share many of the benefits and drawbacks of the Kinect, and although it is designed to support "natural" interaction, naturalism is not always possible, and not always the best solution (as we will discuss below).
For 3D interaction, spatial trackers are most often used inside handheld devices. These devices typically include other inputs such as buttons, joysticks, or trackballs, making them something like a "3D mouse." Like desktop mice, these can then be used for pointing, manipulating objects, selecting menu items, and the like. Trackers are also used to measure the user's head position and orientation. Head tracking is useful for modifying the view of a 3D environment in a natural way.
The type of spatial tracker used in a 3D UI can have a major impact on its usability, and different trackers may require different UI designs. For example, a tracker with higher latency might not be appropriate for precise object manipulation tasks, and an interface using a 3-DOF orientation tracker requires additional methods for translating the viewpoint in the 3D environment, since it does not track the user's position.
This short section can't do justice to the complex topic of spatial tracking. An older, but very good, overview of tracking technologies and issues can be found in Welch's paper (Welch & Foxlin, 2002).
32.3.2 Other Input Devices
While spatial tracking is the fundamental input technology for 3D UIs, it is usually not sufficient on its own. As noted above, most handheld trackers include other sorts of input, because it's difficult to map all interface actions to position, orientation, or motion of the tracker. For example, to confirm a selection action, a discrete event or command is needed, and a button is much more appropriate for this than a hand motion. The Intersense IS900 wand is typical of such handheld trackers; it includes four standard buttons, a "trigger" button, and a 2-DOF analog joystick (which is also a button) in a handheld form factor. The Kinect, because of its "controller-less" design, suffers from the lack of discrete inputs such as buttons.
Generalizing this idea, we can see that almost any sort of input device can be made into a spatial input device by tracking it. Usually this requires adding some hardware to the device, such as optical tracking markers. This extends the capability and expressiveness of the tracker, and allows the input from the device to be interpreted differently depending on its position and orientation. For example, in my lab we have experimented with tracking multi-touch smartphones and combining the multi-touch input with the spatial input for complex object manipulation interfaces (Wilkes et al., 2012). Other interesting devices, such as bend-sensitive tape, can be tracked to provide additional degrees of freedom (Balakrishnan et al., 1999).
Gloves (or finger trackers) are another type of input device that is frequently combined with spatial trackers. Pinch gloves detect contacts between the fingers, while data gloves and finger trackers measure joint angles of the fingers. Combining these with trackers allows for interesting, natural, and expressive use of hand gestures, such as in-air typing (Bowman et al., 2002), writing (Ni et al., 2011), or sign language input (Fels & Hinton, 1997).
32.3.3 Display Devices
Much of the early work on 3D UIs was done in the context of interaction with VR systems, which use some form of "immersive" display, such as head-mounted displays (HMDs), surround-screen displays (e.g., CAVEs), or wall-sized stereoscopic displays. Increasingly, however, 3D interaction is taking place with TVs or even desktop monitors, due to the use of consumer-level tracking devices meant for gaming. Differences in display configuration and characteristics can have a major impact on the design and usability of 3D UIs.
HMDs (Figure 2) provide a full 360-degree surround (when combined with head tracking) and can block out the user's view of the real world, or enhance the view of the real world when used in AR systems. When used for VR, HMDs keep users from seeing their own hands or other parts of their bodies, meaning that devices must be usable eyes-free, and that users may be hesitant to move around in the physical environment. HMDs also vary widely in field of view (FOV). When a low FOV is present, 3D UI designers must use the limited screen real estate sparingly.
Figure 2: A head-mounted display (HMD).
CAVE-like displays (Cruz-Neira et al., 1993) may provide a full surround, but more often use two to four screens to partially surround the user. Among other considerations, for 3D UIs this means that the designer must provide a way for the user to rotate the world. The mixture of physical and virtual viewpoint rotation can be confusing and can reduce performance on tasks like visual search (McMahan, 2011).
3D UIs on smaller displays like TVs also pose some interesting challenges. With HMDs and CAVEs, the software field of view (the FOV of the virtual camera) is usually matched to the physical FOV of the display so that the view is realistic, as if looking through a window to the virtual world. With desktop monitors and TVs, however, we may not know the size of the display or the user's position relative to it, so determining the appropriate software FOV is difficult. This in turn may influence the user's ability to understand the scale of objects being displayed.
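When the display size and the user's viewing distance are known, matching the software FOV to the physical FOV is simple trigonometry. The sketch below assumes a flat screen viewed head-on from its centerline; the specific numbers are illustrative.

```python
import math

def physical_fov_degrees(screen_width_m, viewing_distance_m):
    """Horizontal FOV subtended by a flat screen viewed head-on from its centerline."""
    return math.degrees(2 * math.atan((screen_width_m / 2) / viewing_distance_m))

# A 1.2 m wide TV viewed from 2 m away subtends only about 33 degrees,
# so a 90-degree software FOV would make the scene look unnaturally small:
print(physical_fov_degrees(1.2, 2.0))   # ~33.4
```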
Finally, we know that display characteristics can affect 3D interaction performance. Prior research in my lab has shown, for example, that stereoscopic display can improve performance on difficult manipulation tasks (Narayan et al., 2005) but not on simpler manipulation tasks (McMahan et al., 2006).
32.4 Designing Usable 3D UIs
As a serious topic in HCI, 3D interaction has not been around very long. The seminal papers in the field were only written in the mid- to late-1990s, the most-cited book in the field was published in 2005, and the IEEE Symposium on 3D User Interfaces didn't begin until 2006.
Because of this, the level of maturity of 3D UI design principles lags behind those for standard GUIs. There is no standard 3D UI (and it's not clear that there could be, given the diversity of input devices, displays, and interaction techniques), and few well-established guidelines for 3D UI design. While general HCI principles such as Nielsen's heuristics (Nielsen & Molich, 1990) still apply, they are not sufficient for understanding how to design a usable 3D UI.
Thus, it's important to have specific design principles for 3D interaction. While the 3D UI book (Bowman et al., 2005) and several other works (Kulik, 2009; Gabbard, 1997; Kaur, 1999) have extensive lists of guidelines, here I've tried to distill what I feel are the most important lessons about good 3D UI design.
32.4.1 Understand the design space
Despite the youth of the field, there is a very large number of existing 3D interaction techniques for the so-called "universal tasks" of travel, selection, manipulation, and system control. In many cases, these techniques can be reused directly or with slight modifications in new applications. The lists of techniques in the 3D UI book (Bowman et al., 2005) are a good place to start; more recent techniques can be found in the proceedings of IEEE 3DUI and VR, ACM CHI and UIST, and other major conferences.
When existing techniques are not sufficient, new techniques can sometimes be generated by combining existing technique components. Taxonomies of technique components (Bowman et al., 2001) can be used as design spaces for this purpose.
32.4.2 There is still room to innovate
A wide variety of techniques already exists, but that does not mean it is impossible to innovate in 3D UI design. On one hand, most of the primary metaphors for the universal tasks have probably been invented already. On the other hand, there are several reasons to believe that new, radically different metaphors remain to be discovered.
First, we know the design space of 3D interaction is very large due to the number of devices and mappings available. Second, 3D interaction design can be magical—limited only by the designer's imagination. Third, new technologies (such as the Leap Motion device) with the potential for new forms of interaction are constantly appearing. For example, in a recent project in our lab, students used a combination of recent technologies (multi-touch tablet, 3D reconstruction, marker-based AR tracking, and stretch sensors) to enable "AR Angry Birds"—a novel form of physical interaction with both real and virtual objects in AR (Figure 3). Finally, techniques can be designed specifically for specialized tasks in various application domains. For example, we designed domain-specific interaction techniques for object cloning in the architecture and construction domain (Chen and Bowman, 2009).
Figure 3: AR Angry Birds, a novel form of physical interaction with both real and virtual objects in AR.
32.4.3 Be careful with mappings and DOFs
One of the most common problems in 3D UI design is the use of inappropriate mappings between input devices and actions in the interface. Zhai & Milgram (1993) showed, for instance, that elastic sensors (e.g., a joystick) and isometric sensors (e.g., a SpaceBall) map well to rate-controlled movements, where the displacement or force measured by the sensor is mapped to velocity of an object (including the viewpoint) in the virtual world, while isotonic sensors (e.g., a position tracker) map well to position-controlled movements, where the position measured by the sensor is mapped to the position of an object. When this principle is violated, performance suffers.
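The principle can be stated directly as two transfer functions, sketched below (illustrative Python, with assumed gains): an isotonic tracker's displacement maps to object position, while an isometric or elastic device's deflection maps to object velocity, so the object keeps moving for as long as the device is deflected.

```python
def position_controlled_move(sensor_displacement_m, gain=1.0):
    """Isotonic device (e.g., a position tracker): displacement -> displacement."""
    return gain * sensor_displacement_m

def rate_controlled_move(sensor_deflection, gain_m_s, dt):
    """Isometric/elastic device (e.g., a SpaceBall or joystick): deflection -> velocity.
    The object continues to move as long as the deflection is held."""
    velocity = gain_m_s * sensor_deflection
    return velocity * dt    # displacement accumulated this frame
```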
Similarly, there are often problems with the mappings of input DOFs to actions. When a high-DOF input is used for a task that requires a lower number of DOFs, task performance can be unnecessarily difficult. For example, selecting a menu item is inherently a one-dimensional task. If users need to position their virtual hands within a menu item to select it (a 3-DOF input), the interface requires too much effort.
Another DOF problem is the misuse of integral and separable DOFs. Jacob & Sibert (1992) showed that input devices with integral DOFs (those that are controlled all together, as in a 6-DOF tracker) should be mapped to tasks that users perceive as integral (such as 6-DOF object manipulation), while input devices with separable DOFs (those that can be controlled independently, such as a set of sliders) should be mapped to tasks that have sub-tasks users perceive as separable (such as setting the hue, saturation, and value of a color). A violation of this concept, for example, would be to use a six-DOF tracker to simultaneously control the 3D position of an object and the volume of an audio clip, since those tasks cannot be integrated by the user.
In general, 3D UI designers should seek to reduce the number of DOFs the user is required to control. This can be done by using lower-DOF input devices, by ignoring some of the input DOFs, or by using physical or virtual constraints. For example, placing a virtual 2D interface on a physical tablet prop (Schmalstieg et al., 1999) provides a constraint allowing users to easily use 6-DOF tracker input for 2D interaction.
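As a small illustration of constraining DOFs (a sketch under assumed conventions, not the cited system's code): with a tracked tablet prop, the 6-DOF stylus position can be projected into the tablet's plane, discarding the out-of-plane DOF so the input drives an ordinary 2D interface.

```python
import numpy as np

def stylus_to_tablet_2d(stylus_tip, tablet_origin, tablet_x_axis, tablet_y_axis):
    """Reduce a tracked stylus tip (3D) to 2D coordinates on a tablet prop.

    tablet_x_axis and tablet_y_axis are assumed to be unit-length, orthogonal
    vectors spanning the tablet's surface; the physical surface itself
    constrains the stylus, and the remaining out-of-plane DOF is ignored.
    """
    offset = stylus_tip - tablet_origin
    return np.dot(offset, tablet_x_axis), np.dot(offset, tablet_y_axis)
```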
32.4.4 Keep it simple
Although 3D UIs can be very expressive and can support complex tasks, not all tasks in a 3D UI need to use fully general interaction techniques. When the user's goal is simple, designers should provide simple and effortless techniques. For example, there are many general-purpose travel techniques that allow users to control the position and orientation of the viewpoint continuously, but if the user simply wants to move to a known landmark, a simple target-based technique (e.g., point at the landmark object) will be much more usable.
Reducing the number of DOFs, as described above, is another way to simplify 3D UIs. For instance, travel techniques can require only two DOFs if terrain following is enabled.
Finally, when using physical buttons or gestures to map to commands/functions, avoid the tendency to add another button or gesture for each new command. Users typically can't remember a large number of gestures, and remembering the mapping between buttons and functions becomes difficult after only 2-3 buttons are used.
32.4.5 Design for the hardware
In traditional UIs, we usually try to design without regard for the display or the input device (i.e., display- and device-independence). UIs should be just as usable no matter whether you are using a large monitor or a small laptop, with a mouse or a trackpad. This is not always strictly true—when you have a very large multi-monitor setup, for example. But in 3D UIs, what works on one display or with one device very rarely works exactly the same way on different systems.
We call this the migration issue. When migrating to a different display or device, the UI and interaction techniques often need to be modified. In other words, we need display- and device-specific 3D UIs.
For example, the World-in-Miniature (WIM) technique (Stoakley et al., 1995), which allows users to move virtual objects in a full-scale virtual environment by manipulating small "dollhouse" representations of those objects, was originally designed for an HMD with two handheld trackers for input. When we tried to migrate WIM to a CAVE (Bowman et al., 2007), we found performance to be significantly worse, probably because users found it difficult to fuse the stereo imagery when the virtual WIM was held close to their eyes. In addition, we had to add controls for rotating the world due to the missing back wall of the CAVE. More recently, we tried to migrate WIM to use the Kinect, and were not able to find any reasonable mapping that allowed users to easily manipulate both the WIM and the virtual hand with six DOFs.
32.4.6 You may still have to train users, but a little training can go a long way
3D interaction is often thought of as "natural," but for many novice users, effective operation of 3D UIs is anything but natural. Users in HMDs don't want to turn their heads, much less move their bodies. Moving a hand in two dimensions (parallel to a screen) is fine, but moving a hand towards or away from the screen doesn't come naturally. When using 3D travel techniques, users don't take advantage of the ability to fly, or to move sideways, or to walk through virtual walls (Bowman et al., 1999).
Because of this, we find that we often have to train our users before they become proficient at using even well designed 3D UIs. In most of the HCI community, the need for training or instruction is seen as a sign of bad design, but in the examples mentioned above, effective use requires users to go against their instincts and intuitions. If a minimal (one-minute) training session allows users to improve their performance significantly, we see that as both practical and positive.
32.4.7 Always evaluate
Finally, we suggest that all 3D UI designs should undergo formative, empirical usability evaluation with members of the target user population. While this guideline probably applies to all UIs, 3D UIs in particular are difficult to design well based on theory, principles, and intuition alone. Many usability problems don't become clear until users try the 3D UI, so evaluate early and often.
32.5 Current 3D UI Research
In this final section, I want to highlight two of the interesting problems 3D UI researchers are addressing today.
32.5.1 Realism vs. Magic - The Question of Interaction Fidelity
One of the fundamental issues in 3D UI design is the tension between realistic and magical interaction. Many feel that 3D interaction should be as "natural" as possible, reusing and reproducing interactions from the real world so that users can take advantage of their existing skills, knowing what to do and how to do it. On the other hand, 3D UIs primarily allow users to interact with virtual objects and environments, whose only constraints are due to the skill of the programmer and the limits of the technology. Thus, "magic" interaction is possible, enabling the user to transcend the limitations of human perception and action, to reduce or eliminate the need for physical effort and lengthy operations, and even to perform tasks that are impossible in the real world.
This question is related to the concept of interaction fidelity, which we define as the objective degree to which the actions (characterized by movements, forces, body parts in use, etc.) used for a task in the UI correspond to the actions used for that task in the real world (Bowman et al., 2012). By talking about the degree of fidelity, we emphasize that we are not just talking about "realistic" and "non-realistic" interactions, but a continuum of realism, which itself has several different dimensions.
Consider an example. For the task of moving a virtual book from one location on a desk to another, we could, among many other options: a) map the movements of the user's real hand and fingers exactly, requiring exact placement, grasping, and releasing, b) position a 3D cursor over the book, press a button, move the cursor to the target position, and release the button, or c) choose "move" from a menu, and then use a laser pointer to indicate the book and the target location. Clearly, option a) is the most natural, option b) uses a natural metaphor but leaves out some of the less necessary details of the real-world interaction, and option c) has very low interaction fidelity. Option a) is probably the easiest for a novice user to learn and use, providing that the designer can replicate the actions and perceptual cues from the real world well enough, although option b) is the simplest and may be just as effective.
Some tasks are very difficult (or impossible) to do in the real world. What if I want to remove a building from a city? A highly natural 3D UI would require the user to obtain some virtual explosives or a virtual crane with a wrecking ball, and operate these over a long period of time. Here a "magic" technique, such as allowing the user to "erase" the building, or selecting the building and invoking a "delete" command by voice, is clearly more practical and effective.
Between these extremes are techniques like Go-Go (Poupyrev et al., 1996), which non-linearly extends the reach of the user's virtual arm so that distant objects can be grasped directly. Because techniques like Go-Go use natural metaphors to extend users' abilities beyond what's possible in the real world, we refer to them as hyper-natural. There is not a single answer to the question of whether to choose natural, hyper-natural, or non-natural magic techniques, but overall, research has shown significant benefits for the natural and hyper-natural design approaches (Bowman et al., 2012).
32.5.2 Increasing Precision
A major disadvantage of 3D UIs based on spatial tracking systems is the difficulty of providing precise 3D spatial input. The modern mouse is a highly precise, accurate, and responsive 2D spatial input device—users can point at on-screen elements, even individual pixels, quickly and accurately. 3D spatial tracking systems are far behind the mouse in terms of precision (jitter), accuracy of reported values, and responsiveness (latency), making it problematic to use them for tasks requiring precision (Teather et al., 2009).
But even if 3D spatial tracking systems improve their specifications to be comparable with today's mouse, 3D UIs will still have a precision problem, for the following reasons:
- 3D interaction is performed in the air, not on a surface. There is no friction or physical support to make movements more controlled and precise.
- Humans have a natural hand tremor that causes in-air movements to be jittery.
- Interfaces based on 3D pointing using ray-casting (i.e., laser pointer metaphor) amplify this hand tremor so that it becomes worse the farther out along the ray you go (see the worked example after this list).
- 3D spatial trackers are not "parkable" like the mouse—the user cannot let go of them and be assured that they will stay in the same position.
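A quick worked example of that amplification (illustrative numbers): for small angles, the cursor displacement on the display is approximately the pointing distance times the angular tremor in radians.

```python
import math

def cursor_jitter_m(distance_m, tremor_deg):
    """Approximate on-screen jitter from angular hand tremor (small-angle approximation)."""
    return distance_m * math.radians(tremor_deg)

# A modest 0.5-degree tremor, pointing from 3 m away, already moves the
# cursor by about 2.6 cm, which dwarfs a pixel-sized target:
print(cursor_jitter_m(3.0, 0.5))   # ~0.026 (meters)
```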
So is there any hope of 3D UIs that can be used for precise work? A partial solution is to filter the output of 3D spatial trackers to reduce noise, but filtering can cause other problems, such as increased latency. Current research is addressing the precision problem using several different strategies.
One approach is to modify the control/display (C/D) ratio. The simple idea here is to use an N:1 mapping between movements of the input device (control) and movements in the system (display), where N is greater than one. In other words, if the C/D ratio is five, then a five-centimeter movement (or five-degree rotation) of the tracker would result in a one-centimeter movement (or one-degree rotation) in the virtual world. This gives users greater levels of control and the ability to achieve more precision, but at the cost of increased physical effort and time. Some techniques (e.g., Frees et al., 2007) dynamically modify the C/D ratio so that precision is only added when necessary (e.g., when the user is moving slowly).
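A minimal sketch in the spirit of such dynamic techniques (thresholds and gains are assumed for illustration, not taken from Frees et al.): slow hand motion is treated as an attempt at precision and scaled down, while fast motion passes through one-to-one.

```python
def scaled_hand_motion(hand_delta_m, hand_speed_m_s,
                       slow_speed=0.05, max_cd_ratio=5.0):
    """Apply a velocity-dependent C/D ratio to one frame of hand movement."""
    if hand_speed_m_s >= slow_speed:
        cd_ratio = 1.0    # fast, coarse motion: direct 1:1 mapping
    else:
        # As the hand slows toward zero, raise the C/D ratio toward its maximum.
        cd_ratio = 1.0 + (max_cd_ratio - 1.0) * (1.0 - hand_speed_m_s / slow_speed)
    return hand_delta_m / cd_ratio    # virtual movement is smaller than physical
```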
A second strategy is to ensure that the user is not required to be more precise than absolutely necessary. For example, if the user is selecting a very small object in a sparse environment, there is no need to make the user touch or point to the object precisely. Rather, the cursor can have area or volume (e.g., a circle or sphere) instead of being a point (e.g., Liang & Green, 1994), or the cursor can snap to the nearest object (e.g., de Haan et al., 2005).
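The snapping idea reduces, in essence, to picking the nearest selectable object within a volume around the cursor rather than demanding an exact hit. A minimal sketch (names and the radius are illustrative):

```python
import numpy as np

def snap_select(cursor_pos, objects, radius=0.05):
    """Return the name of the nearest object within `radius` of the cursor, or None.

    objects: iterable of (name, position) pairs, positions as 3D numpy arrays.
    """
    best_name, best_dist = None, radius
    for name, pos in objects:
        d = np.linalg.norm(pos - cursor_pos)
        if d <= best_dist:
            best_name, best_dist = name, d
    return best_name
```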
Finally, a promising approach called progressive refinement spreads out the interaction over time rather than requiring a single precise action. A series of rough, imprecise actions can be used to achieve a precise result, without a great deal of effort on the part of the user. For instance, the SQUAD technique (Kopper et al., 2011) allows users to select small objects in cluttered environments by first doing a volume selection, then refining the set of selected objects with a series of rapid menu selections. In very difficult cases, this technique was even faster than ray-casting, which uses a single precise selection, and in all cases, SQUAD resulted in fewer selection errors. This progressive refinement approach should be broadly applicable to many sorts of difficult 3D interaction tasks.
32.6 For Further Reading
- For an overview of the field of 3D UIs, and a comprehensive survey of devices and interaction techniques, see 3D User Interfaces: Theory and Practice (Bowman et al., 2005).
- The best current research in the field can be found in the proceedings of the IEEE Symposium on 3D User Interfaces.
- For more on how to use realism and magic in 3D UI design, see a recent tutorial in IEEE Computer Graphics & Applications (Kulik, 2009).
- Wolfgang Stuerzlinger provides a set of practical guidelines from his years of experience in 3D UI design in a recent survey paper (Bowman et al., 2008).
- To learn more about experimental results on the effects of interaction fidelity in 3D UIs, see my recent Communications of the ACM paper (Bowman et al., 2012).
Balakrishnan, Ravin, Fitzmaurice, George W., Kurtenbach, Gordon and Singh, Karan (1999): Exploring interactive curve and surface manipulation using a bend and twist sensitive input strip. In: SI3D 1999. pp. 111-118
Bowman, Doug A., Johnson, Donald B. and Hodges, Larry F. (2001): Testbed Evaluation of Virtual Environment Interaction Techniques. In Presence: Teleoperators and Virtual Environments, 10 (1) pp. 75-95
Bowman, Doug A., Wingrave, Chadwick A., Campbell, J. M., Ly, V. Q. and Rhoton, C. J. (2002): Novel Uses of Pinch Gloves™ for Virtual Environment Interaction Techniques. In Virtual Reality, 6 (3) pp. 122-129
Bowman, Doug A., Davis, Elizabeth Thorpe, Hodges, Larry F. and Badre, Albert N. (1999): Maintaining Spatial Orientation during Travel in an Immersive Virtual Environment. In Presence: Teleoperators and Virtual Environments, 8 (6) pp. 618-631
Bowman, Doug A., Coquillart, Sabine, Froehlich, Bernd, Hirose, Michitaka, Kitamura, Yoshifumi, Kiyokawa, Kiyoshi and Stürzlinger, Wolfgang (2008): 3D User Interfaces: New Directions and Perspectives. In IEEE Computer Graphics and Applications, 28 (6) pp. 20-36
Bowman, Doug A., Badillo, Brian and Manek, Dhruv (2007): Evaluating the Need for Display-Specific and Device-Specific 3D Interaction Techniques. In: Shumaker, Randall (ed.) ICVR 2007 - Virtual Reality - Second International Conference - Part 1 July 22-27, 2007, Beijing, China. pp. 195-204
Chen, Jian and Bowman, Doug A. (2009): Domain-Specific Design of 3D Interaction Techniques: An Approach for Designing Useful Virtual Environment Applications. In Presence: Teleoperators and Virtual Environments, 18 (5) pp. 370-386
Cruz-Neira, Carolina, Sandin, Daniel J. and DeFanti, Thomas A. (1993): Surround-screen projection-based virtual reality: the design and implementation of the CAVE. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1993. pp. 135-142
Feiner, Steven K., MacIntyre, Blair and Seligmann, Doree Duncan (1992): Annotating the real world with knowledge-based graphics on a see-through head-mounted display. In: Graphics Interface 92, May 11-15, 1992, Vancouver, British Columbia, Canada. pp. 78-85
Fels, S. S. and Hinton, G. E. (1997): Glove-talk II - a neural-network interface which maps gestures to parallel formant speech synthesizer controls. In IEEE Transactions on Neural Networks, 8 (5) pp. 977-984
Gabbard, Joseph L. (1997). Taxonomy of Usability Characteristics in Virtual Environments. (M.S. Thesis). Virginia Tech.
Google (2012). Project Glass. Retrieved 7 November 2012 from Google: https://plus.google.com/+projectglass/posts.
Haan, Gerwin de, Koutek, Michal and Post, Frits H. (2005): IntenSelect: Using Dynamic Object Rating for Assisting 3D Object Selection. In: Kjems, Eric and Blach, Roland (eds.) Proceedings of the 9th Int. Workshop on Immersive Projection Technology - 11th Eurographics Workshop on Virtual Environments - IPT-EGVE 2005, Aalborg, Denmark. pp. 201-209
Höllerer, Tobias, Feiner, Steven, Terauchi, Tachio, Rashid, Gus and Hallaway, Drexel (1999): Exploring MARS: developing indoor and outdoor user interfaces to a mobile augmented reality system. In Computers & Graphics, 23 (6) pp. 779-785
Jacob, Robert J. K. and Sibert, Linda E. (1992): The Perceptual Structure of Multidimensional Input Device Selection. In: Bauersfeld, Penny, Bennett, John and Lynch, Gene (eds.) Proceedings of the ACM CHI 92 Human Factors in Computing Systems Conference June 3-7, 1992, Monterey, California. pp. 211-218
Kaur, K. (1999). Designing Virtual Environments for Usability. Doctoral Dissertation. University College, London
Klein, Georg and Murray, David W. (2007): Parallel Tracking and Mapping for Small AR Workspaces. In: Sixth IEEE/ACM International Symposium on Mixed and Augmented Reality, ISMAR 2007, 13-16 November 2007, Nara, Japan. pp. 225-234
Kopper, Regis, Bowman, Doug A., Silva, Mara G. and McMahan, Ryan P. (2010): A human motor behavior model for distal pointing tasks. In International Journal of Human-Computer Studies, 68 (10) pp. 603-615
Kopper, Regis, Bacim, Felipe and Bowman, Doug A. (2011): Rapid and accurate 3D selection by progressive refinement. In: Proceedings of the 2011 IEEE Symposium on 3D User Interfaces. pp. 67-74
McMahan, Ryan Patrick (2011). Exploring the Effects of Higher-Fidelity Display and Interaction for Virtual Reality Games. (Ph.D. dissertation). Virginia Tech.
McMahan, Ryan P., Gorton, Doug, Gresock, Joe, McConnell, Will and Bowman, Doug A. (2006): Separating the effects of level of immersion and 3D interaction techniques. In: Slater, Mel, Kitamura, Yoshifumi, Tal, Ayellet, Amditis, Angelos and Chrysanthou, Yiorgos (eds.) VRST 2006 - Proceedings of the ACM Symposium on Virtual Reality Software and Technology, November 1-3, 2006, Limassol, Cyprus. pp. 108-111
Mistry, Pranav and Maes, Pattie (2009): SixthSense: a wearable gestural interface. In: Oda, Yuko and Tanaka, Mariko (eds.) International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ASIA 2009, Yokohama, Japan, December 16-19, 2009, Art Gallery and Emerging Technologies: Adaptation. p. 85
Narayan, Michael, Waugh, Leo, Zhang, Xiaoyu, Bafna, Pradyut and Bowman, Doug A. (2005): Quantifying the benefits of immersion for collaboration in virtual environments. In: Singh, Gurminder, Lau, Rynson W. H., Chrysanthou, Yiorgos and Darken, Rudolph P. (eds.) VRST 2005 - Proceedings of the ACM Symposium on Virtual Reality Software and Technology, November 7-9, 2005, Monterey, CA, USA. pp. 78-81
Ni, Tao, Bowman, Doug A. and North, Chris (2011): AirStroke: bringing unistroke text entry to freehand gesture interfaces. In: Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems. pp. 2473-2476
Nielsen, Jakob and Molich, Rolf (1990): Heuristic evaluation of user interfaces. In: Carrasco, Jane and Whiteside, John (eds.) Proceedings of the ACM CHI 90 Human Factors in Computing Systems Conference, 1990, Seattle, Washington, USA. pp. 249-256
Poupyrev, Ivan, Billinghurst, Mark, Weghorst, Suzanne and Ichikawa, Tadao (1996): The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR. In: Kurlander, David, Brown, Marc and Rao, Ramana (eds.) Proceedings of the 9th annual ACM symposium on User interface software and technology, November 6-8, 1996, Seattle, Washington, United States. pp. 79-80
Rekimoto, Jun (1998): Multiple-Computer User Interfaces: A Cooperative Environment Consisting of Multiple Digital Devices. In: Streitz, Norbert A., Konomi, Shin'ichi and Burkhardt, Heinz Jürgen (eds.) Cooperative Buildings, Integrating Information, Organization, and Architecture, First International Workshop, CoBuild98, Darmstadt, Germany, February 1998, Proceedings. pp. 33-40
Stoakley, Richard, Conway, Matthew and Pausch, Randy (1995): Virtual Reality on a WIM: Interactive Worlds in Miniature. In: Katz, Irvin R., Mack, Robert L., Marks, Linn, Rosson, Mary Beth and Nielsen, Jakob (eds.) Proceedings of the ACM CHI 95 Human Factors in Computing Systems Conference, May 7-11, 1995, Denver, Colorado. pp. 265-272
Teather, Robert J., Pavlovych, Andriy, Stuerzlinger, Wolfgang and MacKenzie, I. Scott (2009): Effects of tracking technology, latency, and spatial jitter on object movement. In: Proceedings of the 2009 IEEE Symposium on 3D User Interfaces. pp. 43-50
Vogel, Daniel and Balakrishnan, Ravin (2005): Distant freehand pointing and clicking on very large, high resolution displays. In: Proceedings of the 2005 ACM Symposium on User Interface Software and Technology. pp. 33-42
Wilkes, Curtis B., Tilden, Dan and Bowman, Doug A. (2012): 3D User Interfaces Using Tracked Multi-touch Mobile Devices. In: Joint Virtual Reality Conference of ICAT - EGVE - EuroVR, 2012.
Wilson, Andrew D. and Benko, Hrvoje (2010): Combining multiple depth cameras and projectors for interactions on, above and between surfaces. In: Proceedings of the 2010 ACM Symposium on User Interface Software and Technology. pp. 273-282
Wingrave, Chadwick A., Williamson, Brian, Varcholik, Paul, Rose, Jeremy, Miller, Andrew, Charbonneau, Emiko, Bott, Jared N. and LaViola, Joseph J. (2010): The Wiimote and Beyond: Spatially Convenient Devices for 3D User Interfaces. In IEEE Computer Graphics and Applications, 30 (2) pp. 71-85
Images courtesy of the Centers for Disease Control and Prevention (CDC)
Ticks are blood-sucking arthropods that can transmit a wide variety of diseases such as Lyme disease, Rocky Mountain spotted fever, tick-borne relapsing fever (TBRF), tularemia, babesiosis, anaplasmosis, and ehrlichiosis. Lyme disease is the most common tick-borne disease in Placer County.
- Ticks can be found most commonly in grassy, brushy, or wooded areas, especially along sides of trails.
- Ticks do not fly, jump, or fall out of trees! Ticks wait on tips of grasses and leaves for people or other animals to pass by. When a tick grabs on to a passing animal, it will then crawl in search of a good place to attach to the skin.
- Once attached, the tick will secrete a cement-like substance that helps it stay in place to feed.
- The longer the tick stays attached, the higher the risk of disease transmission to the animal it is attached to.
- A feeding tick can remain attached for many hours or days, after which it will drop off the host.
Tick Species of Concern in Placer County
Western Black-Legged Tick
This tick is usually found in areas with high humidity from October to July. Larvae and nymphs feed on small animals like rodents and lizards. Adults feed on larger mammals including humans and deer. This tick is the primary vector for Lyme disease in Placer County.
Pacific Coast Tick
This tick is usually found in areas with high humidity from November to June. Larvae and nymphs feed on small rodents while adults feed on large mammals, especially deer. This tick is a vector for Rocky Mountain spotted fever.
American Dog Tick
This tick is usually found from May to August. Larvae and nymphs feed on smaller mammals, while adults feed on larger mammals, especially dogs. This tick is a vector for Rocky Mountain spotted fever.
Relapsing Fever Tick
This tick looks different than the others because it is a member of the soft-tick family. This tick is usually found in mountain cabins and other dwellings. Their primary hosts are rodents, but these ticks will also bite humans, and are a primary vector for tick-borne relapsing fever in Placer County. The image on the left shows this tick in its normal state (A) and its engorged state (B).
Protect Yourself and Your Family from Ticks
- Protect yourself from ticks. Wear light-colored clothing to make it easier to see them if they are on your clothes and tuck your pants into your socks when you are walking, hiking, or working in tick areas. Repellents containing at least 20% DEET will repel ticks as well as mosquitoes. Discourage ticks around your house by keeping grass mowed, cutting back dense vegetation, and removing debris piles.
- Perform regular tick checks. Check your entire body for ticks for several days after you have been in tick habitat. Pay close attention to the hairline, waistline, armpits, and other places where clothing is constricted. Carefully examine children and pets.
- Remove attached ticks immediately. Removing ticks promptly can reduce the risk of transmission of Lyme disease and other tick-borne diseases.
- Seek medical attention if you become ill after a tick bite.
The Ds of Tick-borne Disease Prevention
- DEET – use a formulation of 20% DEET or higher if you will be in tick habitat
- Dress protectively by covering as much exposed skin with clothing, wear long pants and sleeves, and tuck pant legs into socks
- Discourage ticks from around your home by clearing debris and dense vegetation
- Do regular tick checks for several days after being in tick habitat
- Detach ticks immediately using the proper technique:
Proper Tick Removal
- Do not squish, burn, smother, or twist ticks.
- Use tweezers to grasp the head of the tick as close to the skin as possible, and pull straight out.
- Use gloves, tissue or other barrier if you must use your fingers to remove the tick.
- Wash your hands and the bite site with soap and water after tick removal.
A localized reaction or infection can occur where the tick was attached. If redness or pain develops at the bite site, consult your doctor.
The Handwriting Program for Print workbook provides large models for tracing that allow students to use arm muscles to learn letters. Small models are provided on 1 1/2-inch lined practice paper to facilitate the transition to regular writing paper.
- Both large and small letter models to be traced and copied.
- Large models which exaggerate the differences between letter forms (like r and n).
- Strategies for eliminating letter reversals.
- Names and symbols for each of the lines to help teach proper letter placement.
- A one-stroke method that prepares children for cursive writing.
- Techniques for children to write and spell words as they learn to write new letters.
Known as: Giant Panda, Panda.
Estimated numbers left in the wild: 1,000 (possibly as many as 3,000).
Description: Weighing in at 136 kilograms and measuring 1.2 to 1.5 meters long, the giant panda is a black and white bear of China whose unique image is perhaps the most recognizable in the animal kingdom today. These solitary, territorial bears live in cool, wet bamboo forests at an altitude of several thousand meters, and are currently confined to central China.
The giant panda and the bamboo plant are inseparable – in fact, one of the panda’s wrist bones has elongated to provide a sixth toe on each fore paw, which serves as a thumb while grasping stalks of bamboo. Due to the low nutritive value of bamboo, a panda spends around twelve hours daily eating, consuming up to 14 kilograms of the vegetation to sate its appetite. Pandas remain omnivorous and eat a few rodents and birds, but these are only occasional supplements to their daily bamboo salad.
Though pandas are able to climb and swim, they tend to avoid steep slopes because of the high energy demands these impose. Each panda needs at least two species of bamboo in its home range so that it does not starve when one species reaches the end of its yearly cycle and dies back. The panda’s rounded, friendly-looking face is due to huge jaw muscles and enlarged molars needed to grind bamboo into a digestible pulp. A wild panda usually lives around two decades, though those in captivity can live for up to 30 years.
Though they are actual bears and display some bear-like behaviours, such as clawing the bark of trees to mark territorial boundaries, giant pandas do not hibernate or even establish a permanent den. In cooler months, they move lower down the slopes to areas with warmer temperatures. Hollow trees and rock crevices are both favoured as temporary resting spots.
Location: The giant panda is found only in central China in the wild. This extremely rare bear lives only in mountainous bamboo forests where its favourite food is abundant.
Threats: The giant panda is very vulnerable to modern day threats because of its limited range, exacting dietary needs, and extremely slow rate of breeding. Habitat destruction and pollution have caused major damage to the panda population, but poaching was and still is the strongest threat to the species. Early, ham-handed efforts at conservation also contributed to the species’ decline. Panda pelts are naturally valuable on the black market, giving an incentive for poaching these unique animals. Demand for these panda skins was particularly strong in Japan and Hong Kong.
Conservation efforts: Heavy conservation efforts, including wholesale removal of humans from the giant panda’s remaining habitat, appears to be paying off with a slowly increasing population. There may be as many as 3,000 giant pandas in the wild today, and vigorous international conservation efforts continue. Gun control in the regions where pandas are found has also proven effective at reducing the numbers that fall to poaching and other forms of illegal hunting.
Bear Trust International
Bear Trust International is an American organisation which works to protect different bear species around the world and their habitats through education, research, management and habitat conservation.
Hauser Bears is a UK-based charity with a mission to change people's attitudes towards bears. Their main work revolves around research and education to ensure a future for all bear species.
PDXWildlife collaborates with local organisations on three different continents to conserve endangered species like the Giant Panda and its habitat through research and outreach programmes.
The problem of and solutions to climate change. The imperatives of transition on the eve of the Bali meet
There has been a rapid increase of CO2 concentrations in the atmosphere over the past 250 years: from 280 parts per million (ppm) to 379 ppm. All greenhouse gases add up to the equivalent of 430 ppm of CO2. That this increase has had a warming effect because of heat energy trapped in the atmosphere is clear when one compares the well known hockey-stick graph of emissions with corresponding global temperature change.
There is a direct correlation between CO2 build-up and temperature increase. The Earth has warmed by 0.7°C since around 1900; 11 of the last 12 years (1995-2006) have been the warmest since temperatures were first measured (1850). The world saw nearly stable temperatures for around 1,000 years and then a sharp increase since 1800.
The fact that climate change is real, that it is happening and that its impacts are devastating millions is no longer news. In its fourth synthesis report, the Intergovernmental Panel on Climate Change (IPCC) has told us that the "warming of the climate system is unequivocal as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level". Clearly, the science of climate change has to be accepted even if its politics is still contested.
The only question that remains open is whether current science is underestimating the urgency and impact of climate change. The IPCC is seen as conservative and cautious. Because of the time lag between its reports, it is feared that what we know today may already be out of date. Its current assessment does not take into account dramatic recent evidence, including the shrinking of the Arctic ice cap, news that Greenland is losing its mass faster than anticipated, a surge in the atmospheric concentration of CO2 and an apparent slowing of the Earth's ability to absorb greenhouse gases.
Taken together, it could well be that the climate is reaching its 'tipping' point, which will further accelerate changes in the years to come.
New study explains mysterious source of greenhouse gas methane in the ocean
For decades, marine chemists have faced an elusive paradox. The surface waters of the world's oceans are supersaturated with the greenhouse gas methane, yet most species of microbes that can generate the gas can't survive in oxygen-rich surface waters. So where exactly does all the methane come from? This longstanding riddle, known as the "marine methane paradox," may have finally been cracked thanks to a new study from the Woods Hole Oceanographic Institution (WHOI).
According to WHOI geochemist Dan Repeta, the answer may lie in the complex ways that bacteria break down dissolved organic matter, a cocktail of substances excreted into seawater by living organisms.
In a paper released in the November 14, 2016 issue of the journal Nature Geoscience, Repeta and colleagues at the University of Hawaii found that much of the ocean's dissolved organic matter is made up of novel polysaccharides—long chains of sugar molecules created by photosynthetic bacteria in the upper ocean. Bacteria slowly break these polysaccharides down, cleaving bonds between carbon and phosphorus atoms (C-P bonds) in their molecular structure. In the process, the microbes create methane, ethylene, and propylene gases as byproducts. Most of the methane escapes back into the atmosphere.
"All the pieces of this puzzle were there, but they were in different parts, with different people, in different labs, at different times," says Repeta. "This paper unifies a lot of those observations."
Methane is a potent greenhouse gas, and it is important to understand the various sources of methane in the atmosphere. The research team's findings describe a totally new pathway for the microbial production of methane in the environment, one that is very unlike all other known pathways.
Leading up to this study, researchers like Repeta had long suspected that microbes were involved in creating methane in the ocean, but were unable to identify the exact ones responsible.
"Initially, most researchers looked for microbes living in isolated low-oxygen environments, like the guts of fish or shrimp, but they pretty quickly realized that couldn't be a major factor. Too much oxygenated water flows through there," says Repeta. Many researchers also examined flocculent material—snowy-looking bits of animal excrement and other organic material floating in ocean waters. "Some of those also have low-oxygen conditions inside them," he says, "but ultimately they didn't turn out to be a major methane source either."
In 2009, one of Repeta's co-authors, David Karl, found an important clue to the puzzle. In the lab, he added a manmade chemical called methylphosphonate, which is rich in C-P bonds, to samples of seawater. As he did, bacteria within the samples immediately started making methane, proving that they were able to take advantage of the C-P bonds provided by the chemical. Since methylphosphonate had never been detected in the ocean, Repeta and his team reasoned that bacteria in the wild must be finding another natural source of C-P bonds. Exactly what that source was, however, remained elusive.
After analyzing samples of dissolved organic matter from surface waters in the northern Pacific, Repeta hit upon a possible solution. The polysaccharides within it turned out to have C-P bonds identical to the ones found in methylphosphonate—and if bacteria could break down those molecules, they might be able to access the phosphorus contained within them.
To confirm this idea, Repeta and his team incubated seawater bacteria under different conditions, adding nutrients such as glucose and nitrate to each batch. Nothing seemed to help the bacteria produce methane—until, that is, they added pure polysaccharides isolated from seawater. Once those were in the mix, the bacteria's activity spiked, and the vials began spitting out large amounts of methane.
"That made us think it's a two-part system. You have one species that makes C-P bonds but can't use them, and another species that can use them but not make them," he says.
Repeta and another co-author, Edward DeLong, a microbial oceanographer at the University of Hawaii, then began to explore how bacteria metabolize dissolved organic matter. Using a process called metagenomics, DeLong catalogued all the genes he could find in a sample of seawater from the north Pacific. In the process, he found genes responsible for breaking apart C-P bonds, which would allow bacteria to wrench phosphorus away from carbon atoms. Although DeLong was not certain which bacteria could actually do this, one thing was clear: If the gene was active, it would give an organism access to an important but rare nutrient in seawater.
"The middle of the ocean is a nutrient-limited system," says Repeta. "To make DNA, RNA, and proteins, you need nitrogen and phosphorus, but in the open ocean, those nutrients are at such low concentrations that they're almost immeasurable." Instead of using free-floating nutrients in the water, Repeta says, DeLong's study showed that the microbes must somehow be able to crack into nitrogen and phosphorus hidden deep inside organic molecules.
Although Repeta's latest paper confirms that it is indeed possible for bacteria to break apart C-P bonds, he notes that it's still not a particularly easy means of getting nutrients. With phosphorus tied up in organic molecules, it can be exceedingly difficult for bacteria to reach. If microbes can find other sources of the nutrient, he says, they will inevitably use those first.
"Think of it like a buffet," Repeta says. "If you're a microbe, inorganic nutrients are like fruits and meats and all the tasty stuff that you reach for immediately. Organic nutrients are more like leftover liver. You don't really want to eat it, but if you're hungry enough, you will. It takes years for bacteria to get around to eating the organic phosphorus in the upper ocean. We don't exactly know why, but there's another really interesting story there if we can figure it out." |
Here is what a new study has found.
Washington: A gene that may play a protective role in preventing heart disease has been discovered.
The UCLA-led study revealed that the gene, called MeXis, acts within key cells inside clogged arteries to help remove excess cholesterol from blood vessels.
It found that MeXis controls the expression of a protein that pumps cholesterol out of cells in the artery wall.
MeXis is an example of a "selfish" gene, one that is presumed to have no function because it does not make a protein product. However, recent studies have suggested that these so-called "unhelpful" genes can actually perform important biological functions without making proteins, instead producing a special class of molecules called long non-coding RNAs, or lncRNAs.
"What this study tells us is that lncRNAs are important for the inner workings of cells involved in the development of heart disease," said senior author Peter Tontonoz. "Considering many genes like MeXis have completely unknown functions, our study suggests that further exploring how other long non-coding RNAs act will lead to exciting insights into both normal physiology and disease."
In the study, researchers found that mice lacking MeXis had almost twice as many blockages in their blood vessels compared to mice with normal MeXis levels. In addition, boosting MeXis levels made cells more effective at removing excess cholesterol.
In the next phase of the study, researchers will further explore how MeXis affects the function of cells in the artery wall and will test various approaches to altering MeXis activity. The researchers are interested in finding out if MeXis could be targeted for therapy of cardiovascular disease.
"The idea that lncRNAs are directly involved in very common ailments such as plaque buildup within arteries offers new ways of thinking about how to treat and diagnose heart disease," said lead author Tamer Sallam. "There is likely a good reason why genes that make RNAs rather than proteins exist. A key question for us moving forward is how they may be involved in health and disease."
The study is published in the journal Nature Medicine.
The Zika virus is a disease that is spread to people primarily through the bite of an infected Aedes species mosquito. The illness is usually mild with symptoms — fever, rash, joint pain, and conjunctivitis (red eyes) — lasting for several days to a week. People usually don’t get sick enough to go to the hospital, and they very rarely die of Zika.
However, the Zika virus is now known to cause birth defects when pregnant women become infected through mosquito or sexual transmission. This has led to an ongoing collaboration between federal, state and local agencies to raise awareness about and prevent the spread of the Zika virus.
In response, the federal Centers for Disease Control (CDC), along with state and local health departments, are monitoring the spread of Zika virus and have issued advisories for people traveling to certain countries.
There is no vaccine against the Zika virus. Because of this, pregnant women should take extra precautions as outlined in this downloadable guide. Travelers returning from regions with ongoing Zika transmission, such as the Caribbean and Central and South America, may have been exposed to the virus. Marylanders who have questions about how their travel histories might affect their risk are advised to consult their physicians.
The best way to reduce the risk and spread of mosquito-borne diseases like Zika is to control mosquitoes in your own backyard.
You can also learn how to build your own Zika Prevention Kit to reduce your risk of getting Zika.
Your kit should include:
- A bed net
- Insect repellent
- Permethrin spray
- Standing water treatment tabs
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Tobacco Plants Contribute Key Ingredient For COVID-19 Vaccine
Historically, tobacco plants are responsible for their share of illness and death. Now they may help control the COVID-19 pandemic.
Two biotech companies are using the tobacco plant Nicotiana benthamiana as a bio-factory to produce a key protein from the coronavirus that can be used in a vaccine.
"There's obvious irony there," says James Figlar, executive vice president for research and development for R.J. Reynolds Tobacco. Reynolds owns Kentucky BioProcessing, one of the companies working on a COVID-19 vaccine from plants.
"If you wanted to be cynical about it, you could," he says. "But we tend to think of it as like at the end of the day, the tobacco plant in and of itself is still just a plant."
Vaccines work by tricking people's immune system into believing it's been exposed to a virus so it can fight the virus off, if the real thing should ever turn up.
There are various ways to do that. One is to introduce something that looks like a virus to the immune system, but isn't infectious. That's the approach Kentucky Bioprocessing is using.
To make its vaccine, the company starts with tobacco seeds that they plant in a greenhouse. When the plants are approximately 25 days old, they're dipped into a solution containing agrobacteria. These are microorganisms that infect plants. In this case, they've been modified to contain instructions for making a protein from the coronavirus. The plants take up those instructions.
Seven days after being exposed to the agrobacteria, "we harvest the plant, go through an extraction and purification process, and at the end of the cycle, we have 99.9 percent pure protein," says company president Hugh Haydon.
A separate set of plants produces a tiny particle for packaging the viral protein.
"Once each of those components has been manufactured and purified separately, we chemically attach them to each other," Haydon says.
The result is something that can be injected into a human as a vaccine — and will prompt an immune response that should, in theory, protect someone from dying from COVID-19.
"To all intents and purposes, it looks like a virus," says Bruce Clark, CEO of Medicago, a Canadian biotech company that's also using tobacco plants to make a vaccine.
"So when it presents to the body, it looks and generates a response like a virus, but it has no genetic material inside," so it can't actually infect someone, Clark says.
Medicago has already begun testing its vaccine candidate in humans. Results from the initial studies are expected soon.
Kentucky Bioprocessing's COVID-19 vaccine won't be ready for initial testing in humans for several weeks yet. Even if the vaccine isn't one of the first to be approved, it may have advantages over some of the other vaccines. For example, it can be stored at normal refrigeration temperatures, and may even be stable at room temperature, making it easier to distribute.
Besides, says Haydon, "There will be other public health challenges. And the more that we can learn as a company, the better prepared we are for what comes next."
Plant biologist Kathleen Hefferon agrees plants could play an important role in the future of medicine.
"There are lots of examples of a plant made versions of therapeutic proteins, and so this is just another place where I think plants can make their mark."
Out of the greenhouse, and into the clinic.
Copyright 2020 NPR. To see more, visit https://www.npr.org.
Regular composting, also known as cold composting, involves placing a variety of organic materials in a compost bin, enclosure, or even just in a large heap, and leaving it there until it breaks down several months later. It’s a very slow process and typically takes 6 to 12 months. It can be sped up by turning the compost, that is, moving around the material at the bottom of the heap to the top and vice versa to mix it up and get more oxygen in there, but it’s still a long wait. But there’s a better way to do composting…
The Difference Between Hot and Cold Composting
The other approach to composting is hot composting, which produces compost in a much shorter time. It will effectively destroy disease pathogens (such as powdery mildew on pumpkin leaves), weed seeds, weed roots (such as couch and kikuyu) and weeds which reproduce through root bulbs (such as oxalis). This process breaks down the material much better to produce a very fine compost.
By comparison, the slower cold composting methods will NOT kill disease pathogens or weed seeds and roots, so if this compost is put into the garden it may spread weeds and plant diseases, hence the common advice not to (cold) compost diseased plants.
The other issue with cold composting is that it produces a coarser compost, with lots of large pieces of the original materials left over in the compost when the process is completed, whereas hot compost looks like fine black humus (soil), and none of the original materials are distinguishable.
Hot composting is a fast aerobic process (it uses oxygen), so a given volume of compost materials produces almost the same volume of finished compost. In contrast, cold composting is a slow anaerobic process (without oxygen); it is a different chemical process, and as a result nitrogen and carbon are lost to the atmosphere, reducing the finished compost to around 20% of the original volume.
The Berkeley Hot Composting Method
The Berkeley hot composting method, developed by the University of California, Berkeley, is a fast, efficient, high-temperature composting technique which will produce high-quality compost in only 18 days.
The requirements for hot composting using the Berkeley method are as follows:
- Compost temperature is maintained between 55-65 °C (131-149 °F)
- The C:N (carbon:nitrogen) balance in the composting materials is approximately 25-30:1
- The compost heap needs to be 1m x 1m (3′ x 3′) wide and roughly 1.5m (5′) high
- If composting materials are high in carbon, such as tree branches, they need to be broken up, with a mulcher for example
- Compost is turned from outside to inside and vice versa to mix it thoroughly
With the 18-day Berkeley method, the procedure is quite straightforward and can be summarised into three basic steps:
- Build compost heap
- 4 days – no turning
- Then turn every 2nd day for 14 days (see the scheduling sketch below)
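To make the schedule concrete, here is a minimal sketch in Python (an illustration, not part of the original method) that maps each day of the 18-day cycle to the task described in this article:

```python
# A minimal sketch of the 18-day Berkeley turning schedule described above.
# Day numbering follows the article: build on day 1, rest for the first
# four days, first turn on day 5, then turn every second day, harvest on day 18.

def berkeley_action(day: int) -> str:
    """Return the task for a given day of the 18-day cycle."""
    if day == 1:
        return "Build the heap in thin alternating green/brown layers and water well"
    if 2 <= day <= 4:
        return "Rest - no turning"
    if day in (5, 7, 9, 11, 13, 15, 17):
        return "Turn the heap (outside to inside, inside to outside)"
    if day == 18:
        return "Harvest the finished compost"
    return "Rest day between turns"

for day in range(1, 19):
    print(f"Day {day:2d}: {berkeley_action(day)}")
```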
Detailed, step-by-step instructions for the Berkeley hot composting method are provided later in this article, but before we can begin composting, we need to get the right mix of materials into our compost!
Getting the Best Composting Material Carbon-Nitrogen Balance
In all composting, including the Berkeley hot composting method, the ratio of carbon to nitrogen in the compost materials needs to be between 25 and 30 parts carbon to one part nitrogen by weight. This is because the bacteria responsible for the composting process require these two elements in those proportions as nutrients to construct their bodies as they grow, reproduce and multiply.
Materials that are high in carbon are typically dry, “brown” materials, such as sawdust, cardboard, dried leaves, straw, branches and other woody or fibrous materials that rot down very slowly.
Materials that are high in nitrogen are typically moist, “green” materials, such as lawn/grass clippings, fruit and vegetable scraps, animal manure and green leafy materials that rot down very quickly.
Many composting ingredients don’t have the ideal carbon to nitrogen ratio of 25-30:1. To make composting work, we get around this problem by mixing high carbon materials which break down very slowly, with high nitrogen materials which decompose very quickly, in order to create the right balance.
The nitrogen content of composting materials is denoted by the carbon to nitrogen ratio (C:N ratio) assigned to them, as detailed in the tables in the next section. Before we examine those, let's have a look at some quick examples to understand how C:N ratios work.
- Materials high in nitrogen, which decompose very quickly, such as fish, with a C:N ratio of 7:1, have a very low C:N ratio.
- Materials low in nitrogen, which break down very slowly and need to be broken up to be used, such as tree branches, with a C:N ratio of 500:1, have a very high C:N ratio.
The rationale for mixing ingredients is as follows.
If the C:N ratio in our composting materials is too high, meaning we don’t have enough nitrogen and too much carbon, we can lower the C:N ratio by adding manure or grass clippings, which are high in nitrogen.
If the C:N ratio in our composting materials is too low, meaning we have too much nitrogen, we can raise the C:N ratio by adding cardboard, dry leaves, sawdust or wood chips, which are high in carbon.
When trying to understand C:N ratios, it may be helpful to point out that all plants have more carbon than nitrogen in them (remember, they get their carbon from the carbon dioxide in the air), which is why the C:N ratio of plant material is always greater than 20:1.
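As a rough illustration of how a mix averages out, here is a minimal Python sketch (a simplification, not from this article): it treats each ingredient's C:N ratio r as r parts carbon to 1 part nitrogen by weight, ignoring moisture and other elements, and the manure and grass values below are illustrative placeholders rather than measured figures.

```python
# Estimate the combined C:N ratio of a compost mix by weight.
# Simplification: a C:N ratio r is read as r parts carbon to 1 part nitrogen,
# so per kilogram the carbon share is r/(r+1) and the nitrogen share is 1/(r+1).

def combined_cn_ratio(ingredients):
    """ingredients: list of (weight_kg, cn_ratio) tuples."""
    carbon = sum(w * r / (r + 1) for w, r in ingredients)
    nitrogen = sum(w / (r + 1) for w, r in ingredients)
    return carbon / nitrogen

mix = [
    (10.0, 500),  # tree branches - very high carbon (ratio from this article)
    (10.0, 20),   # manure - high nitrogen (illustrative value)
    (5.0, 15),    # grass clippings - high nitrogen (illustrative value)
]
print(f"Combined C:N ratio of the mix ~ {combined_cn_ratio(mix):.0f}:1")
# ~30:1, i.e. within the 25-30:1 target range described above
```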
Below are the average C:N ratios for some common organic materials used for composting.
Carbon-Nitrogen (C:N) Ratios of Common Composting Materials
Here is a handy list of composting materials with their respective carbon to nitrogen, or C:N ratios.
The materials at the top of the list contain higher amounts of carbon but are low in nitrogen, and are considered ‘browns’.
As we move down the list, the nitrogen content increases; the materials at the bottom of the list contain higher amounts of nitrogen and are considered ‘greens’.
[Table of C:N ratios not preserved in this copy: the first part listed ‘browns’ (high carbon) with their C:N values, the second ‘greens’ (high nitrogen) with theirs.]
What Materials Can Be Composted?
Anything that was once living can be hot composted – and I really do mean anything, including unusual items such as wool and cotton clothing, bones, and leather boots (with leather soles).
Some farmers who use the hot compost method even place fresh animal roadkill into their hot compost heaps (it has to go in the very centre of the heap to break down properly) because it is a high-nitrogen source, and they find nothing but clean bones when the compost is ready. Not a good idea for urban areas though!
It’s best to use a variety of different ingredients in the compost, as this provides an input of a wider range of nutrients, and produces a richer compost.
There are many organic materials that can be composted, and there are also certain ingredients that should never be put into a compost bin. This subject is a whole article in itself, so if you want more information, here is a link to a list of what materials should and shouldn't go into your compost bin.
The Easiest Way to Mix Compost Materials for the Right C:N Ratio
Some gardeners are perfectionists and try to use some very complex mathematics to calculate the exact proportions of each ingredient they're using to arrive at the ideal C:N ratio of 25-30:1 by weight. This is totally unnecessary, and there's a very simple alternative that works well: measuring by volume.
The One Bucket Greens, Two Buckets Browns Method
If ratios seem too complicated or confusing (which they are), you can work with volumes of ingredients instead to simplify things.
- Use 1/3 ‘greens’ (nitrogen-containing) materials with 2/3 ‘browns’ (dry carbon materials).
Or to put it another way, which may be easier to understand:
- Add one bucket of nitrogen-rich material to every two buckets of dry carbon-containing material.
For example, using this method we could use 1/3 manure and 2/3 dry carbon materials to start a hot compost pile and it will work. Alternating thin layers of greens and browns are laid down until the compost heap is 1 metre (3 feet) square and a bit taller than that.
There’s no real need to get caught up in the mathematics of precise C:N ratios for successful hot composting. It’s more a matter of trying out the process by following the instructions below, and it really is quite easy.
Hot Composting in 18 Days, Step By Step Instructions
The following instructions detail the steps required to build a Berkeley hot composting system which will produce finished compost in around 18 days.
DAY 1 – Construct Compost Pile, Let it Sit for 4 days
- Mix together ingredients by laying them in alternating thin layers of “greens” and “browns”.
- Wet the compost heap down very well so it is dripping water out of the bottom and is saturated.
- Let the compost pile sit for 4 days (this day and three more days), don’t turn it.
- Tip: A compost activator such as comfrey, nettle or yarrow plants, animal or fish material, urine, or old compost can be placed in the middle of the compost heap to start off the composting process.
DAY 5 – Turn Compost Pile, Let it Sit for a Day
- Turn the compost heap over, turning the outside to the inside and the inside to the outside. To do this, move the material from the outside of the pile to a spot next to it, and keep moving material from the outside onto the new pile. When the turning is completed, all the material that was inside the pile will be on the outside and vice versa.
- Ensure that moisture stays constant. To test, put gloves on and squeeze a handful of the compost material, which should release just one drop of water, or almost drip a drop.
- On the next day, let the compost pile sit, don’t turn it.
- TIP: If the compost pile gets too wet, spread it out to dry, or open a hole about 7-10cm (3-4”) wide with the handle of the pitchfork, or put sticks underneath for drainage.
DAY 7 & DAY 9 – Measure Temperature, Turn Compost, Let it Sit for a Day
- Measure the temperature at the core of the compost heap. The heap should reach its maximum temperature on these days. As a simple guideline, if a person can put their arm into the compost up to the elbow, then it is not at 50 degrees Celsius and is not hot enough; it is best to use a compost thermometer or a cake thermometer for this purpose. The hot composting process needs to reach an optimum temperature of 55-65 °C (131-149 °F). At temperatures over 65 °C (149 °F), a white “mould” spreads through the compost; this is actually a kind of anaerobic thermophilic composting bacteria, often incorrectly referred to as ‘fire blight’. These bacteria appear when the compost gets too hot, over 65 °C, and short of oxygen, and they disappear when the temperature drops and aerobic composting bacteria take over once again. Temperature peaks at 6-8 days and gradually cools down by day 18. (A simple temperature check is sketched after this list.)
- Turn the compost heap over every second day (on day 7 and again on day 9).
- Allow the compost to rest on the next day after turning it.
- TIP: If the compost pile starts coming down in size quickly, there is too much nitrogen in the compost.
- TIP: To heat up the compost faster, add a handful of blood & bone fertiliser per pitchfork load when turning; this speeds it up.
- TIP: If it gets too hot and smelly and shrinks quickly, it has too much nitrogen and needs to be slowed down; throw in a handful of sawdust per pitchfork load when turning.
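The temperature guidance above can be condensed into a small helper. This is a minimal sketch using the thresholds and remedies stated in this article; it is an illustration, not part of the Berkeley method itself.

```python
# Turn a core temperature reading into the corrective action described above.
# Thresholds: below 55 C is too cool, 55-65 C is optimal, above 65 C is too hot.

def compost_temperature_advice(temp_c: float) -> str:
    if temp_c < 55:
        return "Too cool: add a handful of blood & bone per pitchfork load"
    if temp_c <= 65:
        return "Optimal (55-65 C): keep turning every second day"
    return "Too hot: add a handful of sawdust per pitchfork load to slow it down"

for reading in (48.0, 60.0, 70.0):
    print(f"{reading:.0f} C -> {compost_temperature_advice(reading)}")
```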
DAY 11, 13, 15 and 17 – Turn Compost, Let it Sit for a Day
- Continue to turn the compost every 2nd day (on days 11, 13, 15 and again on day 17).
- Allow the compost to rest for a day after turning it.
DAY 18 – Compost Completed, Ready to Harvest
- Harvest completed compost, which will be warm, dark brown, and smell good.
- Congratulate yourself for a job well done!
- TIP: When the earthworms move into the compost, it’s a sign that it’s finished and ready, because it’s cooled down enough for them and they’re in there because it’s full of nutrients!
Some important points to note:
- Locate the compost heap in an area which is protected from too much sun to prevent the compost from drying out, or from heavy rain to avoid water-logging, as both extreme conditions will slow down the composting process.
- The space required for your heap is about 1.5 x 1.5 metres (5′ x 5′), plus enough space in front of it to stand when turning the compost.
- Water each layer until it is moist as you build the heap. After three or four days, give the compost air by mixing and turning it over, then turn every two days until the compost is ready, usually in 14-21 days. Remember, frequent turning and aeration is the secret of successful composting.
- Turn the compost using a garden fork, or even better, a long-handled pitchfork.
- In cold or wet weather, cover the compost heap with a tarp or plastic sheet to prevent rain from cooling it down, since water will penetrate into the core of the compost pile. Cold outside air only cools the surface, not the core, but covering the heap reduces heat loss from the surface to the cooler outside air and retains the heat within the heap better.
Is My Garden Too Small for Hot Composting?
A full-sized hot compost pile can be made successfully in a small courtyard – I know from experience!
The first time I tried hot composting was assisting a friend with only a small courtyard in a rental property, who had never tried this process before. For composting materials, he gathered a wheelie bin full of fallen leaves from his local street, a second wheelie bin full of weeds from his garden, and he also purchased a small straw bale for the sake of it. I also helped him collect a few garbage bags of cow manure from an urban farm. It took us under an hour to pile up all the materials in reasonably thin layers of less than 5cm (2″) to build the compost heap.
Even though it was his first attempt at hot composting, in around 18 days he had over 1 cubic metre of rich, dark compost to use in his garden. None of the original ingredients could be identified in the final product either; it had a very fine consistency. Best of all, it cost him next to nothing – the straw bale was the only item purchased, and that was more of a gratuitous addition, as the hot compost would have worked just as well without it.
Considering that a hot compost pile doesn’t really reduce in volume, the biggest issue in small yards and gardens is figuring out what to do with such a large volume of high-quality compost!
Ways to Use Compost in the Garden
Wondering what to do with over a cubic metre of freshly made compost?
- It can be used to improve your soil by digging it through your garden beds.
- Don’t like digging? Use the compost to start a no-dig garden with the no-dig gardening method, which is my personal preference!
- Compost should always be mixed into the soil to improve drainage in heavy clay soils, and to improve water retention in sandy soils, when planting new trees.
These are just a few ideas to get things started. Happy composting!
Dance embodies one of our most primal relationships to the universe. It is pre-verbal, beginning before words can be formed. It is innate in children before they possess command over language and is evoked when thoughts or emotions are too powerful for words to contain.
Children move naturally. They move to achieve mobility, they move to express a thought or feeling, and they move because it is joyful and feels wonderful. When their movement becomes consciously structured and is performed with awareness for its own sake, it becomes dance.
Dance is a natural method for learning and a basic form of cultural expression. Children learn movement patterns as readily as they learn language. Just as all societies create forms of visual representation or organize sounds into music, all cultures organize movement and rhythm into one or more forms of dance. Yet, while our educational systems for early childhood include drawing and singing, they often neglect to include dance. It is essential that education provide our children with the developmental benefits and unique learning opportunities that come from organizing movement into the aesthetic experience of dance.
A child with a language disorder may have difficulty in any of the following areas:
- Receptive Language: Child may exhibit difficulty understanding directions or demonstrating an understanding of a concept
- Expressive Language: Child may exhibit difficulty expressing his wants and needs, or may exhibit difficulty answering questions
- Pragmatic Language: Child may exhibit difficulty demonstrating an understanding of higher level language concepts, such as the use of idioms, jokes, etc.
Children with language disorders often have difficulty communicating effectively with others, which at times can cause significant frustration.
Reinforced Concrete Frame
Reinforced concrete frames consist of horizontal members (beams) and vertical members (columns) connected by rigid joints. These structures are cast monolithically – that is, beams and columns are cast in a single operation so that they act as one unit.
RC frames resist both gravity and lateral loads through bending in the beams and columns.
Concrete Frame Construction
Concrete frame construction is a construction method in which a network of columns and beams transfers the loads acting on the structure to the foundation efficiently. As a whole, it forms a structural skeleton for the building which is used to support other members such as floors, roofs, walls, and claddings.
Concrete Frame Construction Details
1. Columns in Framed Structure
Columns are an important structural member of a framed building. They are the vertical members which carry the loads from the beams and upper columns and transfer them to the footings.
The loads carried may be axial or eccentric. The design of columns is more critical than the design of beams and slabs. This is because if one beam fails, it is a local failure of one floor, but if one column fails, it can lead to the collapse of the whole structure.
2. Beams in Framed Structure
Beams are the horizontal load-bearing members of the framed structure. They carry the loads from the slabs as well as the direct loads of masonry walls and their own self-weight.
Beams may be supported on other beams or by columns, forming an integral part of the frame. They are essentially flexural members and are classified into two types:
- Main Beams – Transmitting floor and secondary beam loads to the columns.
- Secondary Beams – Transmitting floor loads to the main beams.
3. Slab in Framed Structure
A slab is a horizontal flat surface used to protect a building from the elements and provide shelter for its occupants. Slabs are plate elements which carry loads in flexure; they are usually used to carry vertical loads.
Because of their large moment of inertia in their own plane, slabs can also carry large wind and earthquake forces and transfer them to the beams.
4. Foundation in Framed Structure
The foundation’s main purpose is to transfer the load from the above-mentioned columns and beams to the solid ground.
5. Shear Walls in Framed Structure
In high-rise buildings, shear walls are crucial structural features. Shear walls are really very massive columns that look like walls rather than columns because of their size. They resist horizontal loads such as wind and earthquakes.
Vertical loads are also carried by shear walls. It’s important to realise that they resist horizontal loads in only one orientation: along the axis of the wall’s long dimension.
6. Elevator Shaft in Framed Structure
The elevator shaft is a vertical concrete box within which the elevator travels. Such shafts assist in resisting horizontal loads as well as carrying vertical loads.
Concrete Building Construction
1. Estimation of Materials
Estimating materials is the process of measuring the quantity or proportion of materials such as cement, aggregates, water, and other ingredients in a concrete mix by weight or volume.
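As a rough illustration of the kind of calculation involved, here is a minimal Python sketch. It assumes a nominal 1:2:4 mix by volume and rule-of-thumb values (a dry-volume factor of 1.54, cement bulk density of 1440 kg/m3, 50 kg bags); these are common textbook figures, not values taken from this article, so verify them against local practice before use.

```python
# Estimate material quantities for a nominal-mix concrete pour (illustrative only).

def nominal_mix_quantities(wet_volume_m3, ratio=(1, 2, 4), dry_factor=1.54):
    """Return (cement bags, sand m3, aggregate m3) for a cement:sand:aggregate mix."""
    dry_volume = wet_volume_m3 * dry_factor  # allow for voids and shrinkage
    total_parts = sum(ratio)
    cement_m3, sand_m3, aggregate_m3 = (dry_volume * p / total_parts for p in ratio)
    cement_bags = cement_m3 * 1440 / 50  # 1440 kg/m3 bulk density, 50 kg per bag
    return cement_bags, sand_m3, aggregate_m3

bags, sand, agg = nominal_mix_quantities(10.0)  # a 10 m3 pour
print(f"Cement: {bags:.0f} bags, Sand: {sand:.2f} m3, Aggregate: {agg:.2f} m3")
```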
2. Preparing the Site
The most critical aspect of laying concrete is soil preparation. The earth must be well-drained and compacted before the concrete can be poured. Preparing the subsoil properly reduces the likelihood of cracks in the finished concrete construction.
3. Building Formwork
In building, formwork is used to support structures and to serve as the moulds into which fresh concrete is placed. Formwork can be made of steel, wood, aluminium, or prefabricated forms.
The building of formwork takes some time and can amount to 20% to 25% of the structure’s total cost, if not more. The process of removing the formwork is called stripping. Panel forms are reusable forms, while stationary forms are non-reusable.
4. Placing of Reinforcement
Reinforcement must be precisely positioned and adequately supported before concrete is placed, and secured against displacement. Reinforcement should be placed as shown on the placing drawings, which give the number of bars, bar lengths, bends, and positions. Cover is also important to ensure that the steel bonds to the concrete well enough to develop its strength.
5. Mixing of Concrete
Proper mixing of the concrete ingredients is essential, as it affects the quality of concrete in its fresh state as well as in the hardened state. Concrete is said to be well mixed if it satisfies the following requirements.
The concrete mix should be uniform in colour. The concrete should achieve the proper consistency for which it is designed. Every single ingredient should be completely mixed, and the cement paste should coat the entire surface of the aggregate.
6. Pouring of Concrete
The mixed concrete is transported to the point of placement within the initial setting time. Concrete can be poured by different methods, for example pumping or manual handling. The poured concrete is compacted using a vibrator to ensure proper compaction. Care should be taken to prevent air pockets and to guarantee an even surface.
7. Finishing of Concrete
The most basic kind of concrete finish is a smooth surface made using screeds and trowels. After the concrete has been placed in the forms, the screed is used to level the concrete surface. Screeds often consist of long pieces of metal or wood that are pulled and pushed across the concrete surface to remove excess concrete and fill gaps in the surface.
8. Curing of Concrete
Curing of concrete is the last and quite possibly the most important activity required in the process of concrete construction. Curing is the process of keeping the concrete moist to enable it to gain full strength. This final step plays a very significant part in concrete performance and needs full and immediate attention.
Frame construction is a building procedure which involves erecting a stable framework of studs, joists, and rafters, and attaching everything else to this framework. This building style can be completed rapidly by a skilled crew, and it is extremely common all over the world.
The process of frame construction begins with laying a sill on the ground, with the sill attached to a foundation. Long studs are joined to the sill at set intervals to create a network which can be connected to the joists and rafters that make up the roof or additional storeys. The frame may be further supported with cross bracing and other techniques. Essentially, frame-style construction creates a skeleton, and a fast crew can frame a house in a couple of days.
Once the frame is finished, walls and other features can be added. The structure grows progressively more stable as rigid flooring and walls are added, providing extra support and protection from the elements. Within the structure, builders distinguish between basic structural walls, which provide support and protect the building, and partitions, which can be used to divide and change the shape of the various spaces inside for utility.
Platform frame construction, in which a structure is assembled floor by floor, is the most common sort of this construction style. Some older buildings use balloon frame construction, in which long studs run right from the sill to the top plate, which meets the roof, regardless of how tall the structure is. For practical reasons, balloon frame construction is generally restricted to a few storeys, and it is rare to see in new buildings due to timber availability issues.
Traditionally, frame construction is done with wood, which must be carefully cut and processed to ensure that the integrity of the frame is maintained. Wood which has not been cured properly, for instance, will develop warping and twisting which can pull the structure out of true. Metal beams can also be used in framing, and they can reduce costs significantly in regions where timber is expensive.
Types of Frame
Here, the different types of frames are as follows.
1. Rigid Frame System
Rigid frame systems, also known as moment frame systems, are made up of linear elements such as beams and columns. The term rigid refers to the frame’s capacity to resist deformation. Rigid frames can be found in steel and reinforced concrete structures. The absence of pinned joints within the frame distinguishes rigid frames, which are often statically indeterminate.
The bending of beams and columns in a rigid frame can resist both vertical and lateral loads. The rigid frame’s stiffness is primarily provided by the bending rigidity of rigidly connected beams and columns. The joints must be constructed to have enough strength and stiffness while exhibiting minimal deformation.
Internal forces and moments, as well as support responses, can be solved using structural analysis methods such as the portal method (approximate), the method of virtual work, Castigliano’s theorem, the force method, the slope-displacement method, the stiffness method, and matrix analysis.
2. Braced Frame System
Braced frames are made up of beams and columns that are “pin” connected, with bracing added to resist lateral loads. This style of frame is easy to analyse and erect. Both horizontal and vertical bracing are used to provide resistance to lateral forces.
Knee bracing, diagonal bracing, X bracing, K or chevron bracing, and shear walls that resist lateral stresses in the plane of the wall are only a few examples. This frame system is more effective in terms of earthquake and wind resistance, and it performs better than a rigid frame system.
Influenza, commonly known as “the flu”, is an infectious disease caused by the influenza virus. Symptoms can be mild to severe. The most common symptoms include: a high fever, runny nose, sore throat, muscle pains, headache, coughing, and feeling tired. These symptoms typically begin two days after exposure to the virus and most last less than a week. The cough, however, may last for more than two weeks. In children there may be nausea and vomiting but these are not common in adults. Nausea and vomiting occur more commonly in the unrelated infection gastroenteritis, which is sometimes inaccurately referred to as “stomach flu” or “24-hour flu”. Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.
Usually, the virus is spread through the air from coughs or sneezes. This is believed to occur mostly over relatively short distances. It can also be spread by touching surfaces contaminated by the virus and then touching the mouth or eyes. A person may be infectious to others both before and during the time they are sick. The infection may be confirmed by testing the throat, sputum, or nose for the virus.
Influenza spreads around the world in a yearly outbreak, resulting in about three to five million cases of severe illness and about 250,000 to 500,000 deaths. In the northern and southern parts of the world, outbreaks occur mainly in winter, while in areas around the equator outbreaks may occur at any time of the year. Death occurs mostly in the young, the old and those with other health problems. Larger outbreaks known as pandemics are less frequent. In the 20th century three influenza pandemics occurred: Spanish influenza in 1918, Asian influenza in 1957, and Hong Kong influenza in 1968, each resulting in more than a million deaths. The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June 2009. Influenza may also affect other animals, including pigs, horses and birds.
Frequent hand washing reduces the risk of infection because the virus is inactivated by soap. Wearing a surgical mask is also useful. Yearly vaccination against influenza is recommended by the World Health Organization for those at high risk. The vaccine is usually effective against three or four types of influenza. It is usually well tolerated. A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly. Antiviral drugs such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza. Their benefits in those who are otherwise healthy do not appear to be greater than their risks. No benefit has been found in those with other health problems.
The hypothalamus is a small but important area of the brain. It’s located at the base of the brain, above the pituitary gland. It plays an important role in hormone production and controls many important processes in the body.
In humans, the hypothalamus is approximately the size of an almond and less than 1% of the brain’s weight.
Functions of the Hypothalamus
One of the major functions of the hypothalamus is to maintain your body’s internal balance, which is known as homeostasis.
Homeostasis means a healthful, stable, balanced body condition.
Therefore, to maintain homeostasis, the hypothalamus controls many of your bodily functions, including:
- releasing hormones
- regulating body temperature
- heart rate and blood pressure
- sleep cycles
- managing sexual behavior
- regulating emotional responses
- producing substances that influence the pituitary gland to release hormones.
As different systems and parts of the body send signals to the brain, they alert the hypothalamus about any unbalanced factors that need attention. The hypothalamus then responds by releasing the right hormones into the bloodstream to balance the body.
- One example of this is the ability of a human being to maintain an internal temperature of 98.6 degrees Fahrenheit (°F).
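As a loose analogy (not a physiological model), this kind of set-point regulation behaves like a negative-feedback loop, which can be sketched in a few lines of Python; the set point and gain below are illustrative only:

```python
# A toy negative-feedback loop: compare a sensed value to a set point and
# apply a proportional correction, the way the text describes the hypothalamus
# releasing hormones to rebalance the body.

SET_POINT_F = 98.6

def correction(sensed_f: float, gain: float = 0.5) -> float:
    """Return an adjustment proportional to the deviation from the set point."""
    return gain * (SET_POINT_F - sensed_f)

temp = 101.0  # start out of balance, e.g. overheated
for step in range(5):
    temp += correction(temp)
    print(f"Step {step + 1}: temperature {temp:.2f} F")  # converges toward 98.6
```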
Anatomy of the Hypothalamus
The hypothalamus can be divided into three main regions.
- Anterior region
- Middle region
- Posterior region
Each region contains several nuclei (neuron clusters). These clusters of neurons perform vital functions, such as releasing hormones.
1. Anterior region: This region of the hypothalamus is also known as the supraoptic region. The main nuclei of the anterior region include the supraoptic and paraventricular nuclei. There are several other smaller nuclei in the anterior region as well.
In addition, the supraoptic nucleus functions as the main source of the hormone vasopressin, also known as antidiuretic hormone (ADH), which plays a key role in the absorption of salts and glucose and in maintaining water balance in your body.
Kindly note, the nuclei in the anterior region are largely involved in the secretion of various hormones. Many of these hormones interact with the pituitary gland to produce additional hormones.
2. Middle region: This region of the hypothalamus is known as the tuberal region. The main nuclei of the middle region are the ventromedial and arcuate nuclei.
The ventromedial nucleus controls the appetite, while the arcuate nucleus is involved in releasing growth hormone-releasing hormone (GHRH). GHRH stimulates the pituitary gland to produce growth hormone. This is responsible for the growth and development of the body.
3. Posterior region: This area is also called the mammillary region. The main nuclei of the posterior region include the posterior hypothalamic nucleus and the mammillary nuclei.
The posterior hypothalamic nucleus helps regulate body temperature by causing shivering and blocking sweat production.
Hormones of the Hypothalamus
The hypothalamus is responsible for creating and controlling many hormones in the body. It works with the pituitary gland, which makes and sends other important hormones around the body.
Moreover, the hypothalamus uses the bloodstream to communicate with the pituitary gland. These connections of the hypothalamus are called the endocrine connections.
Kindly note, when the hypothalamus receives a signal from the nervous system, it secretes hormones known as neurohormones. These neurohormones in turn signal the pituitary gland to start or stop the release of hormones in the body.
Together, the hypothalamus and pituitary gland control many other glands that produce the body’s hormones, for example the adrenal cortex, gonads, and thyroid gland.
Important hormones secreted by the hypothalamus include:
- Anti-diuretic hormone (ADH): This hormone increases water absorption into the blood by the kidneys.
- Corticotropin-releasing hormone (CRH): This hormone sends a signal to the pituitary gland, which in turn stimulates the adrenal glands to produce corticosteroids. Corticosteroids help to regulate metabolism and immune response.
- Gonadotropin-releasing hormone (GnRH): GnRH instructs the pituitary gland to release the reproductive hormones, such as follicle stimulating hormone (FSH) and luteinizing hormone (LH), which work together to ensure normal functioning of the ovaries and testes.
- Growth hormone-releasing hormone (GHRH): GHRH instructs the pituitary gland to release the growth hormone (GH). In children, GH is essential to maintaining a healthy body composition.
- Oxytocin: This hormone controls many important sexual and social behaviors, such as orgasm, trust, body temperature, sleep cycles and the release of a mother’s breast milk.
- Prolactin-releasing hormone (PRH): PRH tells the pituitary gland to either start or stop breast milk production in lactating mothers.
- Thyrotropin releasing hormone (TRH): TRH activates the pituitary gland to produce thyroid stimulating hormone (TSH). TSH regulates metabolism, energy, heart rate, growth and development.
- Somatostatin: Somatostatin works to stop the pituitary gland from releasing certain hormones, including growth hormones and thyroid-stimulating hormones.
If the hypothalamus is not functioning properly, the condition is known as a hypothalamus disorder.
Kindly note, these disorders are very hard to diagnose because the hypothalamus and pituitary gland are so tightly connected that it’s often difficult for doctors to understand whether the disease is associated with the hypothalamus or the pituitary gland.
As it is difficult for doctors to diagnose a specific, incorrectly functioning gland, these disorders are often called hypothalamic-pituitary disorders.
However, there are some hormone tests that make clear the root cause of the disease.
Several conditions can cause hypothalamus disorders, including:
- Head injuries
- Surgery involving the brain
- Brain tumors
- Tumors in or around the hypothalamus
- Eating disorders, such as anorexia or bulimia
- Excessive bleeding
- Certain genetic disorders, such as growth hormone deficiency
- Birth defects involving the brain
- Autoimmune conditions
Hypothalamus disorders play a role in many conditions, including:
- Hypopituitarism: This is a disorder in which your pituitary gland doesn’t produce enough hormones. It is usually caused by damage to the pituitary gland; however, a hypothalamus disorder can also cause it, because many hormones produced by the hypothalamus directly affect those produced by the pituitary gland.
Furthermore, the hormone deficiencies can affect a number of your body’s routine functions, such as growth, blood pressure or reproduction.
- Diabetes insipidus: This is an uncommon disorder that causes an imbalance of fluids in the body. When your hypothalamus doesn’t produce and release enough vasopressin hormone, the kidneys can remove too much water. This causes increased urination and thirst.
There’s no cure for diabetes insipidus. But treatments can relieve your thirst and decrease your urine output.
- Prader-Willi syndrome: This is a rare genetic disorder. It causes the hypothalamus to not register when someone is full after eating.
People with PWS have a constant urge to eat, which leads to obesity. Additional symptoms include a slower metabolism and decreased muscle mass.
Symptoms of Hypothalamus disorders
Symptoms that indicate hypothalamus disorders include:
- Sensitivity to heat
- Weight gain or loss
- Difficulty sleeping
- Frequent urination
- Lack of sex drive
- Fluctuations in body temperature
- High or low blood pressure
- Constant thirst
- Delayed puberty
Tests for Hypothalamus disorders
If your doctor suspects a problem, he or she will perform a physical examination and ask about your symptoms. In addition, your doctor may also order Blood or urine tests to check hormone levels in your body such as:
- Pituitary hormones
- Growth hormone
Other possible tests include:
- Hormone injections followed by timed blood samples
- MRI or CT scans of the brain
How to Keep the Hypothalamus Healthy
Some hypothalamus conditions are unavoidable, however, there are a few things you can do to keep it healthy.
1.) Eat a healthy diet: Eating a healthy diet is important for the hypothalamus. Healthy dietary choices to support the hypothalamus include:
- Fruits and vegetables: Both fruits and vegetables contain lots of vitamins, minerals and antioxidants that are beneficial for the hypothalamus.
- Vitamin B1: A good source of vitamin B1 is sunflower seeds. Add approximately a handful of sunflower seeds to your daily meal to boost your hypothalamus health.
Moreover, pork and whole grains are also good sources of vitamin B1.
- Vitamin C: It plays an important role in brain functions associated with the hypothalamus. Vitamin C also helps protect your hypothalamus from toxins.
Foods rich in vitamin C include lemons, oranges, grapefruits, strawberries and red bell peppers.
2.) Sleep enough: When you get enough sleep, it keeps your hypothalamus working properly.
3.) Exercise regularly: Like eating a healthy diet and getting enough sleep, regular exercise also boosts your overall health. Therefore, regular exercise can also improve your hypothalamus function.
Kindly note, even a mild amount of regular exercise can improve your hypothalamus function.
Hydrocephalus occurs when excess cerebrospinal fluid (CSF) builds up in the brain. Too much CSF causes the ventricles of the brain to expand, increasing pressure and causing damage.
Hydrocephalus can be congenital or acquired. Congenital hydrocephalus happens in the womb from conditions such as spina bifida (when the spine doesn’t properly form) or a brain malformation such as Aqueductal Stenosis, Arachnoid cysts, Porencephaly and Dandy-Walker syndrome.
Acquired hydrocephalus can occur at any age. It can be caused by stroke, brain tumour, meningitis, intracranial bleeding, head injury and other unknown (idiopathic) causes. While there are treatments available to help manage hydrocephalus, there is no permanent cure. This page is specifically about acquired hydrocephalus.
Symptoms of acquired hydrocephalus include:
- Chronic headaches* that may not be relieved by pain medication
- Cognitive challenges or changes in cognitive performance
- Decline in academic or work performance
- Difficulty waking up from sleep
- Irritability/ personality changes
- Loss of consciousness, fainting
- Loss of coordination, motor performance or balance problems, including gait disturbances: clumsiness, difficulty walking on uneven surfaces and stairs
- Tiredness or difficulty staying awake
- Visual problems; blurred or double vision
- Vomiting/nausea (especially projectile in children)
In diagnosing Normal Pressure Hydrocephalus, doctors look for a telltale triad of symptoms occurring together along with increases in the size of the ventricles in the brain: mild cognitive impairment, gait disturbances and urinary incontinence.
*Headaches experienced by children and adults are often at the front of the head on both sides. They are generally severe upon waking or following a nap and may be relieved by sitting up.
Diagnosis and treatment of hydrocephalus
Hydrocephalus is most often diagnosed through computed tomography (CT) or magnetic resonance imaging (MRI) scans, neurological examinations, lumbar punctures, and other tests. Once a diagnosis of hydrocephalus has been made, there are two options for treatment. When hydrocephalus needs to be treated, the person will either have surgery to create a small hole in the third ventricle of the brain to restore CSF flow (endoscopic third ventriculostomy, or ETV) or surgery to implant a shunt in the ventricle that is experiencing the excess CSF. These treatments help divert the excess CSF away from the brain. There are several different types of shunts available, and the neuro team will make recommendations based on the person’s specific condition.
Since implanting the shunt is brain surgery, a neurosurgeon will perform the procedure and be part of creating the after-surgery care plan. Recovery will involve a period of close monitoring and plenty of rest mixed with appropriate activity. Maintaining and managing hydrocephalus and shunts is a long-term process: doctors will be on the lookout for infections and malfunctioning shunts.
Effects of hydrocephalus
Hydrocephalus in adults can be caused by a brain injury and it can cause some of the same effects as brain injury. The shunt placement can also lead to some effects such as headaches or nausea.
Not everyone will experience the same effects, but they can include:
- Attention and memory deficits
- Auditory changes
- Fine motor skill challenges
- Muscle weakness and spasticity, or mild imbalance
- Sensitivity to external pressures (for example weather)
- Vision changes
These changes can be challenging for the person with hydrocephalus but with your support and access to local rehabilitation and recovery services, they will be able to develop coping and management techniques.
- More information on behavioural effects of brain injury
- More information on cognitive effects of brain injury
- More information on brain injury and physical effects/mobility
Rehabilitation after hydrocephalus
Along with ongoing medical checkups to make sure treatment is progressing safely, the person with hydrocephalus will most likely need to undergo rehabilitation for any physical, mental or cognitive changes to their abilities.
- Work with a neuropsychologist
- A neuropsychologist will explain what effects hydrocephalus and brain injury can have on a person’s abilities and personality. They will predict progress over the short and long-term. This will be an ongoing process as conditions change.
- Work with additional rehabilitation specialists
- Occupational therapists, physical therapists, and cognitive behavioural therapists are all specialists that can help with independent living, adapting abilities, and learning.
Hydrocephalus in seniors
Hydrocephalus can develop in older adults; this is called adult-onset hydrocephalus. The causes at this age are similar to the causes of hydrocephalus at all ages (stroke, head injury, intracranial bleeding, meningitis, etc.). While many instances of hydrocephalus come with high intracranial pressure, adults over the age of 60 may develop a form called normal pressure hydrocephalus.
Normal pressure hydrocephalus (NPH)
Normal pressure hydrocephalus (NPH) occurs when the ventricles in the brain become enlarged with CSF, but there is no increase in intracranial pressure. Because it shares many of the same effects, it is often mistakenly diagnosed as early dementia, Parkinson’s or Alzheimer’s. The diagnosis is confirmed with computed tomography (CT) or magnetic resonance imaging (MRI) scans, and NPH is treated the same way as any other form of hydrocephalus – by waiting and watching, surgically implanting a shunt or performing an ETV.
Resources and research
- Hydrocephalus Canada
- Spina Bifida and Hydrocephalus Association of Canada
- Local brain injury associations
Disclaimer: There is no shortage of web-based online medical diagnostic tools, self-help or support groups, or sites that make unsubstantiated claims around diagnosis, treatment and recovery. Please note these sources may not be evidence-based, regulated or moderated properly, and individuals are encouraged to seek advice and recommendations regarding diagnosis, treatment and symptom management from a regulated healthcare professional such as a physician or nurse practitioner. Be cautious about sites that:
- Promise a quick fix from a product or service
- Sound too good to be true
- Make dramatic or sweeping claims that are not supported by reputable medical and scientific organizations
- Use terminology such as “research is currently underway” or “preliminary research results,” which indicates there is no established research
- Base the results or recommendations of a product or treatment on a single case study or a small number of case studies that have not been peer-reviewed by external experts
- Use testimonials from celebrities or previous clients/patients that are anecdotal and not evidence-based
Always proceed with caution and with the advice of your medical team.
Many industries and laboratories work with compressed gas cylinders. A typical compressed gas cylinder filled to a pressure of 2,400 pounds per square inch (PSI) contains a volume of gas that, at atmospheric pressure, would occupy nearly 300 cubic feet, compressed into a volume of almost 2 cubic feet. This represents a huge amount of potential energy which, if released suddenly, can have catastrophic consequences.
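As a rough sanity check on those numbers, Boyle's law for an ideal gas at constant temperature relates the stored and expanded volumes. Taking the internal volume as 1.8 cubic feet ("almost 2") and atmospheric pressure as 14.7 PSI, both assumptions since the paragraph gives only round figures:

```latex
% Isothermal ideal-gas estimate of the expanded volume (Boyle's law)
P_1 V_1 = P_2 V_2
\quad\Rightarrow\quad
V_2 = V_1\,\frac{P_1}{P_2}
\approx 1.8~\mathrm{ft}^3 \times \frac{(2400 + 14.7)~\mathrm{psi}}{14.7~\mathrm{psi}}
\approx 296~\mathrm{ft}^3,
```

which agrees with the "nearly 300 cubic feet" quoted above.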
To dispense the gas, these cylinders have a valve at one end to which a regulator is attached. This valve is the weakest area of the cylinder. Its rupture can essentially turn the cylinder into a missile capable of causing serious property damage or bodily harm, and the feats of errant gas cylinders have a storied lore in science and industry, including claims of such cylinders going through walls. One such claim was examined by the folks at MythBusters in the video below.
A real case of a compressed gas cylinder handled unsafely in an industrial setting can be seen in the video below. The cylinder toppled over, breaking its valve, and the resulting explosive release of gas made it go airborne!
An additional hazard arises if the release of gas takes place in an enclosed, poorly ventilated space. A gas such as nitrogen can displace the oxygen-containing air and cause asphyxia. To avoid these situations, compressed gas cylinder handlers must follow specific safety protocols.
There are more than 100 species of milkweed in North America, but only two are common along the Truckee River: Showy milkweed (Asclepias speciosa) and Narrow-leaved milkweed (Asclepias fascicularis). If you take a close look at a milkweed, you’ll find that each plant is home to a community of insects. Some, like honeybees (Apis mellifera), stop by for the nectar and the pollen. Others, like monarch caterpillars (Danaus plexippus), feed on the leaves and stems of the milkweed. Spiders and predatory insects might come to the milkweed to feed on other insects.
I went down to Mayberry Park earlier this week, and the Showy milkweed was almost finished blooming, but the Narrow-leaved milkweed was still going strong. Both of our local species can be identified by their star-shaped blossoms and the white milky sap that appears if you break a leaf. They are easy to tell apart: Showy milkweed has pink flowers and large, oval-shaped leaves; Narrow-leaved milkweed has lighter-colored flowers and leaves that are… narrow. All were covered in a great variety of insects.
Many insects have special adaptations to allow them to feed on the toxic sap of the milkweed plant. Here are five common insects you might find on a milkweed:
1. Small Milkweed Bug (Lygaeus kalmii)
The Small Milkweed Bug (shown below in the company of aphids), has an oval-shaped body and back-markings that form a red “X”. These bugs feed on milkweed seeds. When there are no milkweed seeds around, they also feed on monarch chrysalises, caterpillars, and each other.
2. Oleander aphid (Aphis nerii)
Oleander aphids are tiny insects with yellow bodies and black legs. They are a non-native species from the Mediterranean, and reproduce asexually; every Oleander aphid you see in the wild is a female. These are very common on milkweed, and generally found in large groups.
3. Red Milkweed Beetle (Tetraopes tetraophthalmus)
Red Milkweed Beetles have red bodies with black spots, and long black antennae. They incorporate toxins from milkweed into their bodies, making them bad-tasting to predators.
4. Blue Milkweed Beetle (Chrysochus cobaltinus)
Blue Milkweed Beetles have green/blue iridescent bodies, and feed on the leaves of the milkweed plant. They are often seen in large numbers, and often are mating on the milkweed plant.
5. Monarch butterfly (Danaus plexippus)
Monarch butterflies lay eggs on the leaves of the milkweed plant, and caterpillars feed exclusively on milkweed. Caterpillars are beautiful — striped black, white and yellow – and live on the milkweed plant for 14-18 days before forming a bright green chrysalis. Ten days later, an orange-and-black butterfly emerges.
Along different parts of the river, milkweed plants can be at different stages, and the insects may be different as well. What do you see near you?
Blue Milkweed Beetle and Oleander Aphids, Mayberry Park. June 30, 2015.
Find the header below that describes the type of data skills you are looking for.
Scaffold the process of data analysis
Strategies and guidance to help students make their own decisions as they analyze and interpret data.
- Think your way through data analysis (One-page reference handout for students)
- Think your way through data analysis – Teacher Guide (Guide with more details for teachers or older students)
- Prompts for guiding students through data analysis (One page, for teachers, language for prompting students to think as they analyze data, adaptable for students)
Organize data in tables and spreadsheets
Support for learning how to organize data in tables and spreadsheets for different purposes.
- Organizing Student Data for Computer Visualization
- Organizing data for analysis: two approaches
- Graphing in spreadsheets vs. graphing in Tuva
Frame questions that can be answered with data
Strategies for helping students frame clear questions that can be answered with data.
Core skill: Describe variability & distributions
Materials in this section can help students learn to think about qualities of groups instead of pointing to single data points, create and describe data in distributions (dot plots, box plots, and histograms), and reflect on what can be learned about populations or phenomena by looking at how they vary.
- Key ideas for statistical thinking (Instructional slides)
- Show and describe variability in a distribution (Instructional slides)
- Language for describing distributions (two-page downloadable handout in GDrive)
- Intro to Distributions: Old Faithful Lesson Plan (Lesson plan in GDrive, introduces dot plot)
- Scale an axis and make a dot plot (How-to slides, construct a dot plot by hand)
- From dot plot to box plot (How-to slides, construct a box plot starting with a dot plot; see the code sketch after this list)
- Anatomy of a box plot (Instructional slides explaining the different parts of a box plot)
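For classrooms that also work in code, the following is a minimal sketch of the same dot-plot-to-box-plot progression the slides above cover, written in Python with matplotlib; the data values are invented for illustration and are not taken from any of the linked materials.

```python
# A minimal sketch of the dot plot -> box plot progression (invented data).
import matplotlib.pyplot as plt
import numpy as np

values = np.array([3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 8, 9, 12])

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)

# Dot plot: stack one dot per observation above each value on the axis
for v in np.unique(values):
    count = int(np.sum(values == v))
    ax1.plot([v] * count, range(1, count + 1), "ko")
ax1.set_title("Dot plot")

# Box plot of the same data: median, quartiles, whiskers, outliers
ax2.boxplot(values, vert=False)
ax2.set_title("Box plot of the same data")

plt.tight_layout()
plt.show()
```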
Graph data to address a question or claim
- Graph Choice Chart (Maine Data Literacy Project)
- Graph Choice Chart Teaching Guide
- What kind of question? What kind of graph? (One-page practice worksheet)
Kinds of analysis
- Compare Groups Practice: Chromosome numbers in plants and animals
- Language for Comparing Groups
Communicate a data story
Assess data literacy skills
- Rubric for graphing and interpretation: A two-page PDF file with “does not meet,” “partially meets,” “meets,” and “exceeds” criteria for creating hand-drawn graphs and interpreting them.
Cassowaries are a genus of large animals from Australia, New Guinea, and Indonesia. While they look like birds, cassowaries are incredibly large and flightless. For many, that raises the question of what exactly a cassowary is. Is a cassowary a bird, or something else? Let’s dig in.
Is a Cassowary a Bird?
First off, yes, a cassowary is a bird. It belongs to the biological class Aves, which includes all modern birds.
There are some key similarities between birds and mammals:
- Both have hearts and are vertebrates
- In addition, birds and mammals share being warm-blooded
Yet, with birds having evolved from dinosaurs (and being their closest living relatives), there are some key traits that define birds.
First, birds have wings. While most birds fly (more on that below), flying is not a requirement to be a bird. Cassowaries are flightless birds, which makes them unusual, but they still have wings.
Birds also are feathered. While the cassowary has a “fluffy” appearance, that’s not hair on their body but rather glossy feathers that are fairly unique across birds.
Birds also have beaks and lightweight skeletons (which most species use for flying), and they lay hard-shelled eggs. Cassowary eggs are unique: they’re a lime-colored green and very large. Cassowaries have beaks that connect to a large “casque” on their head, which gives them a unique appearance. It’s proposed that this casque helps cassowaries cut through the trees and brush when running at high speeds.
Add it up and while flightless, cassowaries share all the common traits among birds.
The Classification of Cassowaries
Cassowaries are connected to other large, flightless birds through the clade Palaeognathae, which includes ostriches, tinamous, rheas, emus, and kiwis. Overall there are about 60 species in this clade, while all other birds belong to the much larger Neognathae, which contains more than 10,000 species.
The closest relative to the cassowary is the emu, which belongs to the same order. There are three species of cassowaries and just a single species of emu.
It’s believed that cassowaries first evolved shortly after the extinction of the dinosaurs, about 60 million years ago. The cassowary bears a strikingly similar appearance to a recently discovered dinosaur, which points to cassowaries being one of the closest remaining links to a time when dinosaurs roamed the Earth.
Are Cassowaries A Threatened Bird Species?
Of the roughly 10,000 bird species in the world today, about 12% are currently threatened with extinction. As of 2021, all three cassowary species are listed as Least Concern.
That hasn’t always been the case: the dwarf cassowary was listed as Near Threatened by the IUCN until 2013, when its status changed. Both the dwarf and northern cassowaries have relatively small geographic ranges that cover only a portion of the island of New Guinea and some smaller surrounding islands. The southern cassowary also lives in Australia, though its range is limited to the Cape York Peninsula in Queensland.
Type 2 diabetes, or T2D, is a medical condition caused by a metabolic disorder in which the body doesn’t use insulin properly. Insulin resistance occurs and type 2 diabetes progresses. We’ll go through type 2 diabetes treatments and management to keep you informed.
Type 2 Diabetes Overview
When glucose cannot be distributed correctly through the body, it builds up in the bloodstream and hyperglycemia (elevated blood sugar) occurs. Hyperglycemia is a result of the following factors:
1. The body is unable to produce enough insulin to regulate increase in blood glucose levels.
2. The body is ineffective at utilising insulin, leading to a buildup of blood glucose. This is also referred to as “insulin resistance” – a driving factor behind type 2 diabetes, gestational diabetes and prediabetes.
The factors above can lead to complications and health problems such as:
- Kidney damage (nephropathy): irreversible end-stage kidney disease that may require dialysis or a kidney transplant.
- Nerve damage (neuropathy): excess blood sugar can cause numbness and a burning sensation that starts in the fingertips and toes and spreads upwards, signalling nerve damage/dysfunction.
- Vision loss/eye damage: conditions such as glaucoma, cataracts and damage to retinal blood vessels, potentially leading to blindness.
- Cardiovascular diseases: such as atherosclerosis (the narrowing of blood vessels), high blood pressure, stroke etc.
Type 2 diabetes was formerly known as non-insulin-dependent or adult-onset diabetes because it typically occurred in individuals over the age of 40 to 45. Today, more children are being stricken with this disorder than ever, and the rise in type 2 diabetes diagnoses in children is tied to rising child obesity rates. Studies have shown that 75 to 80% of children diagnosed with type 2 diabetes are related to someone already diagnosed with the disease, or live with unhealthy eating and lifestyle habits that increase the risk of the condition. This can be prevented by:
- engaging in more fun physical activities
- consuming more fruits and vegetables
- drinking more water
- avoiding sugary drinks
- preparing more healthy meals.
Signs and Symptoms of Type 2 Diabetes
The signs and symptoms of type 2 diabetes often develop slowly, over several years, without one’s knowledge, because they can be difficult to spot. However, knowing the risk factors and the signs and symptoms to watch out for can help with management, regulation and control. Some of these symptoms include:
1. Excessive or increased thirst: also known as “polydipsia,” a term describing the increased thirst for water usually noticed as an initial symptom of diabetes, as it occurs due to high blood sugar levels. Polydipsia is accompanied by temporary or prolonged mouth dryness.
2. Frequent urination: also known as “polyuria,” a condition in which the body releases abnormally large amounts of urine. Not all blood sugar can be reabsorbed, and some of the excess glucose ends up in the urine, where it draws more water from the kidneys, resulting in unusually large urine volumes.
3. Unintended or sudden loss of weight and muscle mass:
Individuals with type 2 diabetes experience insufficient insulin action, which prevents the body from getting glucose from the bloodstream into the cells to be utilized as fuel. When this happens, the body looks for an alternative and starts to break down fat and muscle mass for energy, leading to a sudden reduction in total body weight or muscle mass.
More T2D Symptoms
4. Extreme tiredness and body fatigue: a resulting effect of hyperglycemia (high blood sugar levels), either due to insulin resistance or a lack of insulin, which affects the body’s ability to channel glucose into cells to be utilized as fuel for the body’s energy needs.
5. Increased hunger: also known as “polyphagia,” describing increased or excessive hunger or appetite. Polyphagia is one of type 2 diabetes’ three main symptoms: because of the lack of insulin, the body cannot convert ingested food into energy, and this lack of energy increases hunger.
6. Sores that heal slowly, with areas of darkened skin (visible on the neck and underneath the armpits): an indicator of insulin resistance in T2D.
7. Frequent infections.
A number of factors can increase the risk of developing type 2 diabetes. They include:
1. Physical inactivity » engaging in physical activity aids weight control by using up stored glucose as a source of energy, and this makes body cells more sensitive to insulin. Thus, the less active you are, the greater your risk of developing type 2 diabetes.
2. Age » as you get older, the risk of type 2 diabetes increases, especially from age 40 to 45, because as people age they tend to gain weight, exercise less often and lose muscle mass.
3. Being obese or overweight » a major predisposing risk factor for developing type 2 diabetes.
4. Family history » having a first-degree relative with a type 2 diabetes diagnosis correlates with both hereditary and environmental factors.
5. Waist size and fat distribution » fat stored mainly in the abdomen increases the risk of type 2 diabetes more than fat stored in other parts of the body, such as the hips and thighs. The risk of type 2 diabetes rises in men with a waist size of 37 to 40 inches (94 to 101.6 centimeters) and in women with a waist size of 31.5 to 35 inches (80 to 88.9 centimeters).
6. Polycystic ovarian syndrome » a common condition in women characterized by obesity, irregular menstruation and excessive hair growth, which increases the risk of developing type 2 diabetes.
Treatment and Management of Type 2 Diabetes
Type 2 diabetes is a largely preventable health condition, with good chances of greatly reducing symptoms or even reversing the disease, especially when it is diagnosed in time.
The first step in the treatment and management of type 2 diabetes involves a typical combination of diet modification, with frequent appropriate physical activities and regular exercise.
1. Healthy Diet Modification:
This treatment involves adopting a low-carbohydrate, low-calorie diet. Eating meals that are lower in carbohydrates and have a low glycemic index can aid weight loss and eventually reduce high blood glucose levels. A low-calorie diet also comes in handy, as a lower intake of calories results in less buildup of excess blood glucose.
Individuals diagnosed with type 2 diabetes have to be extra cautious and careful with their carbohydrate intake so that it does not lead to an uncontrollable rise in blood sugar levels. Some recommended diabetes “superfoods,” particularly rich in vitamins and other nutrients uniquely beneficial for individuals with type 2 diabetes, include lentils, cinnamon, white balsamic vinegar, chia seeds and wild salmon.
2. Medications:
Medications, such as pills and injectable drugs, are recommended when diet and exercise alone are not sufficient to keep blood sugar levels in an acceptable range. Oral medications are often the first kind to be prescribed; some of these pills and their functions include:
- Metformin: helps the body to respond better to insulin.
- Meglitinides and sulfonylureas: these medications instruct the pancreas to produce more insulin.
- Alpha-glucosidase inhibitors: slow down the digestion of foods with complex carbohydrates (such as bread, rice and potatoes), keeping blood sugar from surging after meals.
- SGLT2 inhibitors: increase the amount of sugar excreted by the kidneys via urine.
Other type 2 diabetes treatments include injectable drugs such as:
– Victoza
– Bydureon
3. Insulin therapy:
Taken with a device called an insulin pen or through an inhaler, insulin therapy is recommended when other medications are not sufficient to control blood sugar levels.
4. Regular Exercise:
This includes all forms of physical activity, from chores to working out to walking. Regular exercise has a cumulative effect of lowering blood sugar levels by:
– helping cells to utilise insulin.
– enabling muscles to use up glucose.
It is advisable to check your blood sugar level before and after exercising.
5. Regular Blood Glucose Testing:
This involves checking your blood sugar with a glucometer on a regular basis. Consult your doctor or healthcare professional about how often to test your blood sugar in order to know how well medications/treatments are working. It is also vital to know the target range within which your blood sugar should be maintained.
6. Weight Loss Surgery:
This treatment is not for everyone! It is recommended for patients such as men who are at least 100 pounds overweight and women who are at least 80 pounds overweight. It can help get rid of the extra pounds, control blood sugar, and raise levels of gut hormones called “incretins” that signal the pancreas to secrete more insulin, leading to less medication and more effective treatment.
The chloroplast captures the sun’s energy and uses it to produce sugars, which are used to power the cell, just as a solar power plant uses the sun’s energy to produce power for a city.
Which organelle would be compared to a power plant?
Explanation: The mitochondrion is known as the “powerhouse of the cell” and is responsible for producing ATP for energy usage in the cell.
Which of the following organelles is analogous to a power plant?
One organelle in plant cells is the chloroplast, and the chloroplast is analogous to a power plant. A power plant, using raw starting materials, converts energy or matter from one form into a more usable form.
Which organelle is similar to an energy substation in a city?
The mitochondrion is the powerhouse of the cell. It is comparable to a power plant or a power station since it serves a similar function.
How is a city like a plant cell?
The cell membrane controls what goes in and out of the cell, just like a city border controls who enters and exits the city. The cytoskeleton protects the cell and maintains its structure, just like the police department maintains order in the city and keeps the townspeople following the laws.
What energy do power plants use?
A power plant is an industrial facility that generates electricity from primary energy. Most power plants use one or more generators that convert mechanical energy into electrical energy in order to supply power to the electrical grid for society’s electrical needs.
Which of the following refers to the power plant of the cell?
One example is the mitochondrion — commonly known as the cell’s “power plant” — which is the organelle that holds and maintains the machinery involved in energy-producing chemical reactions (Figure 3).
What would lysosomes be in a city?
Lysosome. The lysosomes would be the recycling and waste disposal center in cell city. They have an important role in cells which is to digest things like worn out organelles, bacteria, and food.
What part of a city is like the lysosomes?
Lysosomes can be compared to the waste-disposal and recycling plant or hub in a city. Lysosomes are cell organelles functional in digesting cell debris, food and bacteria.
How is the mitochondria similar to a power plant?
Mitochondria are the power plants of the cells. They transform energy taken up through nourishment into a form the cells can use for a multitude of necessary metabolic reactions. In addition, mitochondria are responsible for triggering programmed cell death.
Which organelle in a cell could be compared to solar panels *?
An organelle found in plant and algae cells where photosynthesis occurs. Analogy: the chloroplast is like the solar panels on a house, because solar panels use the sun to generate power for the house.
What is the main function of chloroplasts in a plant cell?
Chloroplasts are plant cell organelles that convert light energy into relatively stable chemical energy via the photosynthetic process. By doing so, they sustain life on Earth.
What generates power for the city?
Like cities, cells are active, energetic beings. They too need a constant supply of energy and they produce it by converting fuel into useable cellular energy. The power stations of the cell are called mitochondria and the most common fuel that they consume is sugar (glucose).
What type of cell is cell city?
In many ways, the eukaryotic cell is like a city. I will tell you what each of the organelles in a cell does. Your job will be to try to match each of the cell parts to the parts of a city and explain why they are similar.
Is it Possible to Treat Dyscalculia?
- 21st January 2016
- Posted by: Phi Tuition
- Category: Educational Trends
What is Dyscalculia?
Neither the condition of dyscalculia nor even the word is very well known. Dyscalculia is described as a specific learning disability for arithmetic, and the most common explanation is “math dyslexia.” It is interesting how well known dyslexia is, but not dyscalculia. This condition is suggested to affect nearly 5% of the general population; in the UK, however, the reported figure seems to be at least 25%.
This specific learning disability causes difficulties in learning the most basic arithmetic facts, as well as performing calculations, regardless of the person’s age, educational level, or daily and intellectual abilities.
What are the Symptoms of Dyscalculia?
The symptoms of dyscalculia range from forgetting mathematical processes to heightened anxiety when applying maths.
However, dyscalculia is not seen as an incurable learning disability. It is becoming clearer by the day that most children with dyscalculia simply haven’t learnt the basic arithmetic concepts completely and properly, which makes it more difficult for them to move on to more complicated problems, and hence they never overcome the underlying problem.
Is There Any Treatment for Dyscalculia?
In light of this, efficient training modules for children with dyscalculia are becoming more visible and more powerful at helping them overcome the condition. With new, optimised and customised teaching methods, children with dyscalculia can be trained to understand arithmetic concepts at any level.
Teaching arithmetic concepts in short chunks of time, while reminding them of what they learnt in the previous lesson, seems to be the key starting point. This way, they can feel in control of their learning process and establish the building blocks. The teaching method should also be multi-sensory: they need to use all their senses at once. It is suggested that if they say, hear and write the numbers as they handle them, they learn better and quicker.
As you can see, overcoming dyscalculia isn’t a myth or a complicated medical procedure. Using correct and helpful teaching methodologies, without making the lessons too long and distracting to follow, can help children with dyscalculia.
Emissions from the production of materials as a share of global GHGs are equivalent to the share of GHG emissions from agriculture, forestry and land use change combined.
The report identifies significant opportunities for reducing GHG emissions associated with residential buildings and passenger cars.
Reducing the GHG emissions involved in the creation of homes and cars could cut cumulative life-cycle carbon dioxide equivalent emissions by up to 25 Gt in the G7 countries.
The UN Environment Programme (UNEP) has released a report that finds addressing emissions from the production of materials, such as plastics, minerals, metals and woods, offers a critical opportunity to contribute to achieving the Paris Agreement on climate change. The report identifies significant opportunities for reducing greenhouse gas (GHG) emissions associated with residential buildings and passenger cars.
UNEP’s International Resource Panel (IRP) produced the report, titled ‘Resource Efficiency and Climate Change: Material Efficiency Strategies for a Low-Carbon Future.’ It argues that policymakers should pay increased attention to material efficiency because emissions from the production of materials as a share of global GHGs are equivalent to the share of GHG emissions from agriculture, forestry and other land use (AFOLU) change combined. Despite these emission levels, materials production has “received much less attention,” even though technologies to increase material efficiency are already available.
The report states that approximately 80% of emissions from materials production come from material use in construction and manufactured goods. The report focuses on the potential to reduce GHG emissions from the creation of homes and cars, the two most carbon-intensive products in the construction and manufacturing industries. GHG emission reductions for homes and cars could cut cumulative life-cycle carbon dioxide equivalent emissions by up to 25 Gt in the Group of 7 (G7) countries.
In the G7 countries, for example, material efficiency strategies, including using recycled materials, could reduce GHG emissions in the material cycle of residential buildings by 80-100% in 2050. Meanwhile, material efficiency strategies could reduce GHG emissions from passenger cars by 57-70% in this same subset of countries.
The report highlights a number of strategies for reducing emissions, from designing buildings that use sustainably harvested timber and less material to improved recycling of construction material. Additional strategies include using less carbon-intensive steel, cement and glass in building homes, and opportunities for looking at the whole building life cycle. In total, the report states that using these strategies could deliver 5-7 Gt of carbon dioxide equivalent emission reductions over the period 2016-2050 in G7 countries. UNEP Executive Director Inger Andersen explained that strategies to address climate change have primarily focused on accelerating renewable energy use and improving energy efficiency, but stressed that “paying greater attention to circularity, sustainable consumption and production (SCP) and resource efficiency can radically improve our ability to meet the Paris Agreement goals.”
The report further recommends policy interventions to promote material efficiency benefits. Building codes and standards, for instance, could “encourage or constrain material efficiency.” The report therefore recommends cross-sectoral policies that revise building standards and codes, charge vehicle registration and congestion fees, and green public procurement and virgin material taxation, among other revisions. The report suggests evaluating policies on a life cycle basis to identify synergies and trade-offs across life cycle stages and industrial sectors.
The report further recommends policymakers integrate material efficiency into their nationally determined contributions (NDCs) to set higher emission targets. To date, the report finds that only China, India, Japan and Turkey identify circular economy, material efficiency, resource efficiency or consumption-side instruments as mitigation measures in their NDCs. [UNEP Press Release] [Report Landing Page] [Publication: Resource Efficiency and Climate Change: Material Efficiency Strategies for a Low-Carbon Future]
Introduction. Russian letters and sound system
Sounds, handwriting, keyboard
Introductory Lesson 1
Reading syllables. Translating This is..., Here is...
Introductory Lesson 2
Reading syllables. Conjunctions и and а
Introductory Lesson 3
Learn Russian hushing and velar sounds. Stress and vowel reduction (а, о)
Check what you have learned from Lessons 1-3 with this 10 minute quiz.
Introductory Lesson 4
Introductory Lesson 5
Learn Russian 7-letter spelling rule
Introductory Lesson 6
Learn Russian hard and soft consonants. Vowel reduction (я, е)
Check what you have learned from Lessons 4-6 with this 15 minute quiz.
Introductory Lesson 7
Letters ь and ъ. Pronunciation of я, ё, ю, е. Letters к, г, х
Introductory Lesson 8
Unpaired hard and soft consonants. The soft consonant й
Check what you have learned from Lessons 7-8 with this 15 minute quiz.
Introductory Lesson 9
Voiced and voiceless consonants. Devoicing of final consonants. Consonant clusters
Introductory Lesson 10
Pronunciation of г, ч, тся and ться
Phrasebook Topic 1
Learn how to greet people and say goodbye in Russian
Phrasebook Topic 2
Introducing Yourself in Russian
Check what you have learned from Lessons 9-10 with this 10 minute quiz.
What is a Rule?
A rule is a function or action that enables dynamic form content. An example would be a barcode rule, where a unique barcode is created for each form based on a number that changes, like an order number or sales order number.
If not contained within a parent element like a table cell, both constants and rules can be clicked and dragged to any location in the form, independent of the flow of the document.
Adding a Rule: 1D Barcode
1. Right Click –> Add Rule –> 1D Barcode.
2. Name the rule and select the variable to base the barcode on.
3. Select the barcode color, size and type.
4. The result is a barcode whose value is linked to a variable and will change as the variable changes.
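The form tool's internals aren't documented here, but as a rough illustration of the same idea, a barcode whose value is bound to a changing variable, here is a sketch using the open-source python-barcode package; the function name and sample order number are hypothetical.

```python
# Sketch of a "barcode rule": the image regenerates whenever the variable changes.
# Requires: pip install "python-barcode[images]"
import barcode
from barcode.writer import ImageWriter

def render_order_barcode(order_number: str) -> str:
    """Generate a Code 128 barcode image whose value tracks the order number."""
    code128 = barcode.get("code128", order_number, writer=ImageWriter())
    # save() writes an image file (extension added by the writer) and returns its path
    return code128.save(f"barcode_{order_number}")

# Each new order number yields a new, unique barcode image.
print(render_order_barcode("SO-10042"))
```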
Value with Line – Hatching & Cross-Hatching Lesson Plan
By Michelle C. East 2016 (All Rights reserved)
Lesson Plan & Worksheet: A practical, hands-on hatching and cross-hatching lesson for 4th grade through adult. (3-page PDF)
Includes lesson objectives, delivery, project suggestions, and worksheet.
- Lesson covers:
- Purposes for lines
- Types of Lines (Line Family)
- Value: Shadows and Highlights
- Creating Values through Lines: Hatching and Cross-hatching
- Practice hatching & cross-hatching the forms (following contour of forms)
Additional Activity: Students draw geometric forms from direct observation, then apply value with hatching and cross-hatching lines.
Project Suggestions: Chex Mix Cross-Hatching Drawing and Still Life
© Michelle C. East Create Art with ME 2016 – Reproducible for CLASSROOM use only.
Sinusitis is a common infection, so everyone should know at least the basics of treating a sinus infection at home before reaching a doctor. This knowledge could be useful in many situations, including when you are away from home, on an expedition, or somewhere else where you cannot get medical help. There are a few basic things you should keep in mind when dealing with such an ailment. First of all, let us get a clear view of the sinuses.
A sinus is a cavity filled with air. By “sinuses” we refer to those cavities in the skull that are connected to the nasal passages through a narrow opening in the bone (the ostium).
People have four pairs of sinuses:
- frontal sinus (situated on the forehead)
- maxillary sinus (found under the cheeks)
- ethmoid sinus (situated between the eyes)
- sphenoid sinus (situated deep behind the ethmoid)
General information on sinusitis
Sinusitis is an infection of the mucous membrane lining the nasal passages and sinuses. When the mucous membrane becomes inflamed, it swells, blocking the drainage of fluid from the nose and sinuses towards the throat. This leads to sinus aches and a feeling of pressure in the area. If the sinuses cannot drain completely, it is possible that bacteria and fungi will develop.
The sinuses can become blocked during viral infections like a cold, causing sinus inflammation and infection as a result. The difference between colds and sinusitis is that cold symptoms, including a runny nose, start to improve after 5-7 days, while symptoms attributed to sinusitis last longer and get worse after 7 days.
There are two forms of sinus infection: acute (it appears suddenly) and chronic (the result of an acute sinusitis that was not treated properly and keeps coming back). In chronic sinusitis the symptoms never disappear completely; mild symptoms always remain.
Possible causes of sinus infection
Sinus infections usually occur after the body faces a viral infection. This leads to inflammation of the mucous membrane inside the nasal passages.
- The mucous membrane swells when it becomes inflamed, blocking fluid drainage from the sinuses to the nose or throat
- Mucus and fluid build up in the sinuses, causing pressure and pain
- Environmental bacteria gain a foothold in sinuses that don’t drain properly; bacterial infection of the sinuses often causes more inflammation than a viral one
While colds often trigger this condition, any factor that causes inflammation of the mucous membrane can cause sinusitis. Many people with allergic rhinitis (nasal allergies) likely develop chronic sinusitis with repeated episodes of acute sinusitis. Nasal polyps, foreign bodies (frequently in children), structural disorders of the nose such as a deviated septum, as well as other diseases, can obstruct the nasal passage, increasing the risk of developing sinusitis.
Fungal infections can also cause sinusitis. They are more common in children with deficient immune systems. Fungal sinusitis tends to become chronic and is more difficult to treat than the bacterial type.
Sinus infection types
Sinusitis can be classified in various ways, depending on how long the problem lasts (acute, subacute, chronic) and on whether the inflammation is infectious or noninfectious.
- Acute sinusitis is defined as lasting less than 30 days.
- Subacute sinusitis lasts from one to three months.
- Chronic sinusitis is defined as lasting longer than 3 months.
The periods mentioned above are not strict medical definitions and should be considered general guidance. Usually, a sinus infection is triggered by common viruses, less commonly by bacteria, and most rarely by fungi. Subacute and chronic sinusitis are most likely the result of improper or incomplete treatment of the acute form. Noninfectious sinusitis is usually caused by allergies or allergy triggers (irritants), and it may last as long as the acute, subacute or chronic infections.
The most common symptoms of sinusitis are pain and pressure in the face, with the feeling of a stuffy nose filled with secretions. You may see yellow or greenish nasal discharge. Bending over or moving your head can often increase the facial pain and pressure.
The pain or sensitivity may vary depending on the sinus that is affected:
- Pain in the cheeks or incisors may be caused by inflammation of the maxillary sinus
- Pain in the forehead, above the eyebrows, can be caused by inflammation of the frontal sinus
- Retro-ocular pain (behind the eyes), in the head or in both temples, can be caused by inflammation of the sphenoid sinus
- Periorbital pain is caused by inflammation of the ethmoid sinus.
Other common symptoms of sinusitis include:
- Yellowish or greenish discharge from the nose or leaking down the back of the throat
- Bad breath
- Productive cough
- Diminished sensitivity of taste or smell
See our piece on how to treat cough with natural remedies for more information.
How to treat sinusitis
There are two ways you can treat a sinus infection: with home remedies or with allopathic treatment. It is better to start with the former, but you should carefully watch the symptoms. If they get worse or persist for days, you should consult a doctor.
Generally speaking, common sinusitis can be treated with home remedies. However, if the pain persists for more than a week, or in case of repeated sinusitis (more than three episodes per year), go see your doctor. In severe cases the sinus infection can spread to the eye or the meninges, leading to serious complications. Call 911 if eye paralysis occurs, if your eyes become very irritated, or if you experience nausea or start vomiting.
Remedies for treating sinusitis on an expedition
Sinus infections may catch you totally unprepared on an expedition, far away from home or from any doctor or people who can provide medical care. In that case, you should know that there are a few natural remedies you can carry with you. They are small and light, and you can stuff them into one of your backpack’s pockets.
If you face a sinus infection on your trip, you can use either essential oils or homeopathic remedies to relieve the pain. Nevertheless, if severe symptoms start to occur, you should hurry back home. Also, make sure you have a complete first aid kit with you to stay prepared for anything.
There are a lot of essential oils that can be helpful with sinusitis: lavender, menthol, eucalyptus, oregano, tea tree, peppermint, rosemary, thyme, geranium, clove, sweet basil, pine and chamomile. It is true that you can’t use an inhaler or a humidifier, or even take a relaxing bath while out there, but you can use the essential oils in other ways:
- Place a few drops on a clean cloth and inhale. This will help release the congestion from your nose.
- Dilute the oils in a carrier oil (olive, coconut or whatever is available – take a small quantity in your bag, about 2 oz; you don’t need very much oil) and use this mixture to massage the pressure points of your face: the temples, across the forehead, the sides of each nostril, the inside of the eyebrows. Take care to avoid the eye area.
- Ingestion of essential oils is a matter of debate among alternative medicine practitioners. Opinions are divided: some say essential oils should never be ingested, while others recommend great care with them, especially for children, the elderly and pregnant women. So, if you plan on using essential oils internally, you should seek advice from a specialist.
Homeopathic remedies can help you as well. They are also small and have a long shelf life, as they do not deteriorate, so you can carry them with you. Depending on the kind of symptoms you develop, you may use:
- Belladonna – your eyes feel heavy and the pain is located in the forehead
- Kali bichromicum – pain develops at the root of the nose and you have a thick nasal discharge
- Pulsatilla – the pain in your head decreases when standing up or getting fresh air
- Arsenicum – burning pain in the sinuses that gets worse with loud noise, light or movement
- Mercurius – severe head pain, aggravated by open air, sleeping or eating
- Hepar sulph – effective for sinusitis brought on by cold air
- Spigelia – sharp pain on the left side of the face
Dosage: Use the 6th or 30th potency. Take a dose every 2 hours in the beginning, when the symptoms occur and are intense, and every 4 hours as you get better.
Home remedies for sinus infection
The benefits of hot steam
- Hot vapors help relieve painful pressure in the sinuses. Take a shower as hot as you can bear, inhaling the steam and letting the water run onto your face until your sinuses clear.
- Lie on the bed and apply a hot-water compress to the sinus area. Use a thick towel: soak it in water, then wring it out.
- Take a facial steam bath with aromatic pine, peppermint or eucalyptus essential oil. Pour 1-2 liters of boiling water into a medium bowl, add 2-3 drops of one of the essential oils mentioned above, then hold your face over the steam, covering your head with a towel. The steam favors sinus drainage.
Saline solution – an effective method for thinning mucus and reducing inflammation of the sinus mucosa. It consists of irrigation with a saline solution, which can be homemade: mix a quarter teaspoon of salt with a pinch of baking soda in a cup of warm water. Fill a pipette with this liquid, tip your head back, and put the solution into one nostril while blocking the other. Inhale deeply, then blow your nose gently. Repeat the procedure with the other nostril.
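As a quick arithmetic check on why this recipe works, assume a level teaspoon of fine salt weighs about 6 g and a cup holds about 240 mL (both rough assumptions); the mixture then comes out near the 0.9% concentration of physiological saline:

```latex
% Approximate concentration of the homemade solution
c \approx \frac{0.25 \times 6~\mathrm{g}}{240~\mathrm{mL}}
\approx 0.0063~\mathrm{g/mL}
\approx 0.6\%
\qquad (\text{physiological saline} \approx 0.9\%).
```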
Neti pot – a small ayurvedic container found in some pharmacies or online, resembling a small teapot with a long spout. Fill it with saline solution, place the spout in one nostril and tilt your head to one side. The warm solution will flow into the nose through one nostril and out the other, flushing out mucus. Blow your nose, then repeat the procedure on the other side.
Horseradish – grate some horseradish and inhale the smell. This vegetable with a strong flavor contains a substance that can thin the mucus. Avoid contact with the eyes.
Horseradish and lemon juice – mix the two ingredients in equal amounts and take a teaspoon an hour before you eat. Be warned: it may bring tears to your eyes.
Pepper – if you like spicy foods, add a pinch of pepper or chili to your food. Chili peppers contain capsaicin, a substance that promotes mucus drainage.
Garlic – recent studies have shown that the allicin contained in garlic has antibacterial properties. Mix a crushed garlic clove with four teaspoons of water, then put 10 drops in each nostril twice a day. The infection should be gone in about three days.
Sleep and relaxation
- Place a pillow under your neck when you lie down. Keeping the head at the same level as the shoulders can clog the sinuses.
- Lie down in a comfortable position. Apply a cushion filled with heated salt over the eyes and cheekbones. Leave it until it cools, then repeat the procedure until you feel relief in the sinuses.
A little sinus massage will stimulate blood circulation, helping to relieve the pain. Start by pressing the base of the nostrils with your fingertips, then move the pressure up the nose to just under the brow. Press each point for 30 seconds, then release.
Rosemary oil enhances the effect of the massage. Fill a bowl with hot water, pour a few drops of essential oil and inhale the vapors while performing acupressure.
The medical treatment for sinus infection includes 4-5 types of medicine:
Antibiotics – used in case of a bacterial infection. Antibiotics are not effective against viral infections and may even cause more harm. Statistics show that very few cases of sinusitis are bacterial, which suggests that antibiotics may be overused.
Painkillers – these may include ibuprofen or acetaminophen. Don’t use them for more than 10 days.
Decongestants – these may take the form of pills (Contac or Sudafed) or sprays (Afrin or Dristan)
Allergy medicines – if the sinusitis occurs due to an allergy, your doctor may prescribe antihistamines
Steroids – these may reduce the swelling in the membranes. In severe cases, or for sinusitis that recurs over and over, you may even consider surgery to enlarge the nasal passages so they drain better.
You should only take allopathic treatment when prescribed by a doctor. Do not take medicine on your own without prior medical advice. Check out our tips on natural remedies for allergies as well.
Better to prevent than to cure
Install an air humidifier in the room where you sleep and leave it on during the night so the nasal passages don’t dry out. Clean it every week to prevent fungi from growing.
Limit your intake of alcoholic beverages; alcohol causes inflammation of the mucosa and of the sinuses. Also avoid bathing in pools with chlorinated water, and never jump in head-first: chlorine irritates the nasal mucous membranes, and the pressure exerted by the jump pushes water into the sinuses.
Avoid smoky rooms. Cigarette smoke dries the airways and favors the proliferation of bacteria in the sinuses. Limit consumption of milk and dairy products, which favor the formation of mucus. If you suffer from sinusitis systematically, see a dentist: this condition is often caused by dental problems.
Sinus and altitude
All circumstances involving atmospheric pressure fluctuations are dangerous for the sinuses. They may lead to an aggressive form of sinusitis, called barotraumatic sinusitis, which manifests as intense, sudden pain, sometimes accompanied by bleeding. It may occur during a sudden airplane takeoff or fast-paced mountain climbing beyond 2,000 meters altitude.
The risk is even greater when scuba diving with an oxygen tank. When returning to the surface, the air pressure accumulated in the sinuses has to escape to avoid serious damage.
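To put rough numbers on these pressure swings (using standard-atmosphere values and fresh-water density as approximations):

```latex
% Climbing: ambient pressure falls by roughly a fifth by 2000 m
P_{\,0~\mathrm{m}} \approx 101~\mathrm{kPa},
\qquad
P_{\,2000~\mathrm{m}} \approx 80~\mathrm{kPa};
% Diving: every 10 m of water adds about one extra atmosphere
\Delta P = \rho g h \approx 1000~\mathrm{kg/m^3} \times 9.8~\mathrm{m/s^2} \times 10~\mathrm{m} \approx 98~\mathrm{kPa}.
```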
Traveling by plane or reaching high altitudes is not recommended in the periods of acute sinusitis.
While sinus infections may lead to some complications, if you pay attention to the symptoms and start treatment with home remedies at an early stage, you may get rid of this ailment in no time. These remedies are quite effective and have been tested throughout the years.
But pay attention to the way your body feels: if your condition gets worse and nothing seems to relieve your pain, seek medical care. Your doctor will certainly know what to do.
The arts can be a powerful tool to teach students about cultures, environments, and other groups of people they might not encounter in their life. Studying artists’ work can help expose students to new ideas to consider the world around them. Like the media, art can tell stories and provide a new perspective.
Creating art with this in mind can also be transformative as students form and express their ideas about the world. Here are three projects to help students reflect on themselves and think about others.
This project aims to help students think about their environment and how they can make it better. Students will spend time identifying things that matter to them and how they can contribute to others in positive ways. This project can reinforce how students are a part of their environment along with others. You can pick which mediums and styles work best for your class and embed genres like landscapes or surrealism. Artwork, like Chris Jordan’s Intolerable Beauty series, and murals by JT Daniels are great resources to show students how work can reflect an environment and community.
Students should use mind-mapping or free-writing to reflect on their current world, country, state, or school, and the local community. Here are a few guiding questions you can modify based on your students:
Have students identify two to three aspects that are most important to them and articulate how they want to see those areas change to better others. Once students have identified what they’d like to see changed, have them create a plan to contribute to that mission. Students can take all of these ideas and sketch their visual responses according to your other project requirements. Use saved reflections to create a strong artist statement accompanying the finished piece.
Running for any office puts the candidate in the position of identifying their own values and strengths and how those could be used to lead and help their constituents. Having students design their own campaign imagery allows them to build self-awareness through reflection and connect to what the people around them need. Students can also think of themselves as a leader and change agent. Make curricular connections to symbolism, positive/negative space, and design concepts like typography and branding. You can use prior campaign graphics as resources for students to see common approaches and how designs have evolved.
Researching what people need is one way for the students to determine their platform. You can start by having students identify what position they’re running for, like President of the United States or Student Council President. Then, have them identify the following:
Separately from the qualities, have students brainstorm each of the various places to incorporate campaign designs. They can design one or a series of designs, including a yard sign, logo, billboard, social media ad, t-shirt, etc.
Once students have identified how they want to be represented, they can start creating their designs. Digital tools and apps, like the Adobe Suite, are great, but there are other options like drawing tools and cut paper. Students can compile their series of designs into one format, like a slideshow or a poster, to complete the experience. In place of an artist’s statement, have students present their project to the class—as a campaign speech.
Learning another person’s story can be transformative—so can telling that story through art. In this project, students identify a key person(s) to help tell their story through art. This project helps students build a connection, strengthen a relationship, and learn about someone else’s life. Hearing how someone else navigates the world or a situation will help students understand that other people have different experiences from their own. Being aware of other life experiences builds empathy for others. This project will work with various subject matters and styles, but portraiture is the obvious choice. Artwork by Rae Senarighi and Stacy Pearsall’s Veteran’s Portrait Project are great examples of telling stories that engage the viewer.
While artistic style grabs the viewers’ attention, the subject matter should be memorable and meaningful. Helping students identify the right person(s) to inspire their work will take time and needs to be well-thought-out. The project isn’t just about creating a portrait; it’s meant to create a meaningful experience. Try these guiding questions to help push student thinking:
Once students have identified their subject, help them craft questions for an interview to learn more about their person. Students can opt for a standard question and answer or ask their person to share their most meaningful story. The idea is for students to think about how a story can be told through their image. Depending on how you want to approach the project, students can also collect a reference image by photographing their subject or asking for an image. While all of this planning and preparation time might feel excessive, it can help teach students steps like brainstorming and idea development. Great subject matter can help make great art. Finally, interviews or recordings can accompany the final display to further the connection to the subject.
It’s not your responsibility to tell students what to think, but it is a teacher’s responsibility to help students think—and, at times, challenge their thinking. Creating artwork that forces students to reflect on themselves, their community, their environment, and those around them is a meaningful way for them to develop their own thoughts and opinions. Showing students how art can tell stories and be informative reinforces the value of art and its powerful impact on viewers. Students might need help thinking beyond the obvious, but that’s why you’re there: to push them with support.
What other projects can help students learn about other groups of people?
What artists are creating work that speaks directly to young people?
Measurements of the Earth’s magnetosphere promise better space weather forecasts
(10 August 2018 - Kanazawa University) Earth is constantly being hammered by charged particles emitted by the Sun, particles energetic enough to make life on Earth almost impossible.
We survive because Earth's magnetic field traps and deflects these particles, preventing the vast majority of them from ever reaching the planet's surface. The trapped particles bounce back and forth between the North and South poles in complex, ever-changing patterns that are also influenced by equally intricate and shifting electric fields. We get to enjoy the sight of those particles when the bands they move in (the Van Allen radiation belts) dip into our atmosphere near the poles, creating the Northern (and Southern) lights. However, bursts of these particles can damage satellites and sensitive equipment on the ground.
Bursts of electromagnetic waves named "chorus" captured by the Arase/PWE in the inner magnetosphere. (courtesy: Kanazawa University)
It is therefore vital to understand the intricacies of the radiation belts. So far, NASA has launched twin satellites to study the Van Allen belts—however, their orbits only allow them to explore the equatorial regions. This limits our ability to understand the flow of particles and prevents us from predicting their effects on satellites.
To also explore regions further from the equator, the Institute of Space and Astronautical Science, a division of the Japan Aerospace Exploration Agency, launched the Arase satellite in 2016. A Japan-based research team centered at Kanazawa University equipped the Arase satellite with multiple different sensors (termed the Plasma Wave Experiment) to probe the electric field and plasma waves in the Earth's inner magnetosphere. Now, they have collected their first set of data from their sensors, which they recently published in the Springer journal Earth, Planets and Space.
The Plasma Wave Experiment consists primarily of electric and magnetic field detectors covering a wide frequency range; the satellite can also measure plasma particles across a wide energy range. To improve efficiency, an on-board computer studies the correlations between the fields and the particles before sending only the most important information back to Earth.
"The Plasma Wave Experiment equipment has passed initial checks and has successfully acquired high quality data. Huge amount of burst waveform data has been taken, and we should soon know a lot more about mechanisms of wave-particle interaction occurring in the inner magnetosphere than before. Another strength of our project is that we can also compare the satellite data with data collected simultaneously on the ground. We expect those comparisons will greatly broaden our understanding of this area of science," first author Yoshiya Kasahara says.
Understanding how electrons and other particles are hurled out of the magnetosphere onto our planet could be key to predicting such bursts and protecting against them.
The Plasma Wave Experiment (PWE) on board the Arase (ERG) satellite
Journal: Earth, Planets and Space
Authors: Yoshiya Kasahara, Yasumasa Kasaba, Hirotsugu Kojima, Satoshi Yagitani, Keigo Ishisaka, Atsushi Kumamoto, Fuminori Tsuchiya, Mitsunori Ozaki, Shoya Matsuda, Tomohiko Imachi, Yoshizumi Miyoshi, Mitsuru Hikishima, Yuto Katoh, Mamoru Ota, Masafumi Shoji, Ayako Matsuoka and Iku Shinohara
The shofar is a wind instrument usually made from the horn of a ram or a goat, though it can also be made from kudu or oryx horns. In Semitic languages, the word shofar and the word for the mountain ram share a single root.
The sounds of the shofar are first mentioned in the description of the Sinai revelation (Exodus 19:16). The shofar's blasts were to announce the coming of the jubilee year (Lev. 25:9, 10). The shofar is an indispensable attribute of the celebration of Rosh Hashanah; in the Torah this holiday is called the day of trumpet blasts (`trumpet sounds day`; Num. 29:1). Apparently, in the biblical era it was customary to combine the sounds of the shofar with the playing of other musical instruments — trumpets, flutes, etc. (Ps. 95:6). During mass processions, horns were blown to convene the citizens. Sometimes the shofar announced the start of hostilities or an impending disaster.
During the Second Temple period, the shofar was blown only on Rosh Hashanah and on certain fast days. In this period the shofar acquired a predominantly ritual character and became part of the Temple ritual. According to the testimony of the Mishnah, during the Temple period Jews blew the shofar on Rosh Hashanah only in the Temple; on days of public fasting, trumpets were perhaps also blown outside the Temple. After the destruction of the Temple, the shofar began to be blown in the synagogues during the prayers of Rosh Hashanah and Yom Kippur.
According to traditional interpretations, the sounds of the shofar on Rosh Hashanah reinforce the solemnity of the day and move worshipers to repentance; according to popular belief, they are also meant to confuse Satan, who acts as the accuser on this day of judgment.
In the Middle Ages, it became customary to sound the shofar at the end of the morning service throughout the Hebrew month of Elul. In the Talmudic era, the shofar was sounded on the eve of holidays and Sabbaths, and at their close, to alert the people. This custom has been preserved only in the closing ceremony of Yom Kippur, during which those present wish each other that in the following year they will meet in Jerusalem. The combination of the trumpet blasts with this Messianic hope gave the shofar a new symbolic meaning.
In 1948, when Jews sounded the shofar at the Western Wall on Yom Kippur, the Arabs considered this a political act and protested. The shofar was also blown during the battles for the Temple Mount in 1967, during the Six-Day War. In the State of Israel, it is customary to sound the shofar at various ceremonies, including secular ones; for example, the shofar is sounded when a new president of Israel takes office. Sometimes the shofar is sounded during mass demonstrations, especially those of religious Jews.
Shofars differ from community to community. The Ashkenazi shofar is worked and polished outside and inside and is crescent-shaped, while Sephardic shofars are long and twisted. Shofars are made by artisans who pass the tradition down from generation to generation.
Already in the era of the Second Temple, the shofar was part of the national symbolism. Images of the shofar can be found in the mosaic decorations of ancient synagogues.
How does rock art help our understanding of California pre-contact aboriginal groups?
What are the types of rock art, and what are their defining features?
Why should Native American rock art sites be protected?
For more than 40,000 years, humans have been inspired to create what archeologists call “rock art”: paintings and engravings on natural stone surfaces. The photographs in this collection depict rock art sites throughout central and southern California.
The aboriginal peoples who created this artwork have a very long history in the region. In North America, specifically in California and the surrounding region, archaeological evidence suggests that settlement began in the Paleoindian period (approximately 12,000-8,000 YBP, or “years before present”), with the greatest use of the area occurring 2,600 to 1,000 YBP.
It is difficult to know when the sites in these particular images were created, because currently there are few techniques for dating rock art. Generally speaking, rock paintings in unprotected environments (like boulder faces) are thought to be less than 500 years old because they are exposed to wind and rain. But carved designs in hard rock like granite are durable and can last for thousands of years in a stable environment.
Native American rock art provides us with many clues in our search for the history of California’s aboriginal peoples. The painted or carved symbols are not writing, as we know it, but they were created to convey information. Their meanings are not always known, and are sometimes debated among scholars.
Rock art can range from simple scratch marks to elaborate motifs. Some common abstract design elements include circles, concentric circles, spirals, dots, and meandering lines. Representative designs include human forms and animals. Human forms with horns or radiating wavy lines and animals such as the lizard and rattlesnake are thought to be symbols of power or spirit helpers. One image in this collection is thought to represent a medicine bag, and another clearly resembles a human hand.
Ethnographic information has suggested that rock art served several functions. Some rock art styles could be boundary markers for clan territories or guides to water and food resources. Others may record historical events such as astronomical phenomena or the arrival of Spanish explorers. It has been determined that some rock paintings were made as part of girls’ puberty ceremonies; designs most commonly thought to be related include red diamond chains and chevrons. Some rock art motifs resemble the visionary imagery of trance states, which Native American shamans entered to communicate with the spirit world.
Rock art is characterized by how it was created and the materials that comprise it. Major types include the following.
Petroglyphs are rock engravings produced by pecking (striking the surface of the rock with a harder small stone), chiseling (using a hammer stone to pound a sharp stone chisel), abrading (rubbing or battering the rock surface with a rounded stone), or scratching (using a very sharp-edged stone to scratch finer lines).
“Desert varnish,” a reddish brown or black mineral patina that forms on rock surfaces in the desert, creates a perfect medium for petroglyphs. Designs were etched through the patina to reveal the lighter rock beneath. Desert varnish forms very slowly—1,000 years for a heavy varnish—and becomes thicker and darker with increasing age.
Cupules, a type of petroglyph, are small, cup-shaped indentations. They are pecked or ground into the stone and appear randomly or sometimes in lines or groups.
Pictographs are designs painted on rock surfaces with paints made from natural mineral pigments mixed with animal fats, egg whites, plant oils, or urine to bind them. Hematite (iron oxide) was used for red paints, limonite for yellow, kaolin clays for white, and charcoal for black. Paints were applied with brushes made from yucca fiber or applied with fingers and hands. The most common color in southern California is red.
Geoglyphs, also known as intaglios, are giant drawings on the ground that depict geometric designs, human forms, and animal figures. These large designs were created by clearing the dark rock of desert pavements and exposing the lighter colored rock and sand below. Many of these figures were made over the last 2,000 years. More than 200 geoglyphs have been discovered along the Colorado River, and many others have been reported around the world.
Natural elements such as rain, wind, heat, and cold have had their effect on Native American rock art. Pigments have worn away and mineralization has re-patinated surfaces, rendering the images ever fainter. Heat and cold have caused crumbling and exfoliation, while falling rocks have broken designs away from their original locations. Plants taking root in cracks, lichen growth, and mosses covering surfaces eventually break down rock and its images. One photograph in this collection shows a rock, and with it a design, that has broken apart through natural deterioration. Nevertheless, it is remarkable how well many of the symbols and designs made thousands of years ago have survived.
While the effects of nature on rock art images are sad to see and result in great cultural loss, nothing is as devastating as the damage done by humans. This destruction takes many forms; sometimes it is inadvertent, but often it is intentional. Many images have survived wind, rain, and storms for thousands of years only to fall victim to vandalism. For example, graffiti and bullet holes obscure and can ultimately destroy a design. Because they cover large swaths of ground, geoglyphs have also been damaged by illegal motorcycle, SUV, and off-highway vehicle activity.
California Content Standards
1.0 Writing Strategies: Research and Technology
2.0 Writing Applications
2.3 Write information reports.
2.0 Speaking Applications
2.2 Make informational presentations.
2.0 Writing Applications
2.3 Write research reports about important ideas, issues, or events.
2.0 Speaking Applications
2.2 Deliver informative presentations about an important idea, issue, or event.
3.2 Students describe the American Indian nations in their local region long ago and in the recent past.
4.2 Students describe the social, political, cultural, and economic life and interactions among people of California from the pre-Columbian societies to the Spanish mission and Mexican rancho periods.
5.1 Students describe the major pre-Columbian settlements, including the cliff dwellers and pueblo people of the desert Southwest, the American Indians of the Pacific Northwest, the nomadic nations of the Great Plains, and the woodland peoples east of the Mississippi River.
6.1 Students describe what is known through archaeological studies of the early physical and cultural development of humankind from the Paleolithic era to the agricultural revolution.
3.0 Historical and Cultural Context Understanding the Historical Contributions and Cultural Dimensions of the Visual Arts. Students analyze the role and development of the visual arts in past and present cultures throughout the world, noting human diversity as it relates to the visual arts and artists.
YBP means “Years Before Present.” Scientists use this time scale to refer to events that happened in the past. The scale is anchored to the advent of reliable and consistent radiocarbon dating in the 1950s. Because the present is always moving, YBP is counted from a fixed reference year, 1950; an event dated to 1,000 YBP, for example, occurred around 950 CE.
Ethnography is the study and systematic recording of human cultures. Much of what we know about rock art has come from listening to the people whose ancestors created it.
A shaman is a person believed to communicate with the spirit world in order to divine future events or heal the sick.
In today’s lesson, we will show a straightforward way of proving the Angle Bisector Theorem.
The angle bisector is a line that divides an angle into two halves of equal measure.
The angle bisector theorem states that in a triangle, the angle bisector partitions the opposite side of the triangle into two segments whose ratio is the same as the ratio between the two sides forming the angle it bisects:
If ∠BAD≅ ∠CAD, then |BD|/|DC|=|AB|/|AC|
This is another useful tool in problems that require you to compare lengths of different line segments. [The others being similar triangles, triangles with the same height or base, and the intercept theorem.]
AD is the angle bisector of angle ∠BAC in triangle △ABC (∠BAD≅ ∠CAD). Show that |BD|/|DC|=|AB|/|AC|
Many proofs of this theorem use trigonometry and the law of sines. But here, we will provide a proof that does not rely on such advanced knowledge.
Instead, we will use one of the other tools already in our pocket for comparing ratios of line segments: triangles with the same height.
Let’s look at the two triangles formed by the angle bisector, △ABD and △ADC. Both have the same height, h, from A:
Now, as we’ve seen, all points on the angle bisector are equidistant from the two sides of the angle. (This is easily proven with congruent triangles and the angle-side-angle postulate.) D is such a point on the angle bisector.
So, if we compute the areas of these two triangles using the height from D, which is the same for both triangles, we have:
AreaΔABD=(|AB|·h1)/2, and AreaΔACD=(|AC|·h1)/2, so
But as we saw above, AreaΔABD/AreaΔACD=|BD|/|DC|, so |BD|/|DC|=|AB|/|AC|
Proof of the Angle Bisector Theorem
(1) AreaΔABD/AreaΔACD=|BD|/|DC| //ratio of areas of triangles with same height is equal to the ratio of their bases
(2) AD=AD //Common side, reflexive property of equality
(3) ∠BAD≅ ∠CAD //given, AD is the angle bisector of ∠BAC
(4) m∠DEA=m∠DFA=90° //construction
(5) ∠EDA≅ ∠FDA //(3),(4), Sum of angles in a triangle
(6) △EDA≅△FDA //(2), (3), (5), Angle-Side-Angle Postulate
(7) DE=DF=h1 //Corresponding sides of congruent triangles (CPCTC)
(8) AreaΔABD/AreaΔACD=|AB|/|AC| //ratio of areas of triangles with same height is equal to the ratio of their bases
(9) |BD|/|DC|=|AB|/|AC| //(1),(8), transitive property of equality
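As a quick sanity check on the result (separate from the proof itself), here is a small numeric sketch in Python, assuming NumPy is available. It constructs the bisector from A directly for one concrete triangle, intersects it with BC, and compares the two ratios:

```python
# Numeric check of the angle bisector theorem on one concrete triangle.
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([1.0, 3.0])

# The bisector of angle A points along the sum of the unit vectors of AB and AC.
u = (B - A) / np.linalg.norm(B - A)
v = (C - A) / np.linalg.norm(C - A)
bisector = u + v

# Find D, the intersection of the bisector with BC:
# solve A + t*bisector = B + s*(C - B) for t and s.
t, s = np.linalg.solve(np.column_stack([bisector, B - C]), B - A)
D = B + s * (C - B)

ratio_segments = np.linalg.norm(B - D) / np.linalg.norm(D - C)
ratio_sides = np.linalg.norm(B - A) / np.linalg.norm(C - A)
print(ratio_segments, ratio_sides)  # both print ~1.5811, as the theorem predicts
```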
What is a migraine?
A migraine is a moderate to severe headache, typically felt as a throbbing pain on one side of the head. It can be accompanied by other symptoms, including sensitivity to sounds and light, nausea, and vomiting.
Although the exact causes of migraines are still unknown, multiple studies have confirmed that release of calcitonin gene-related peptide (CGRP) is increased during acute migraine attacks.
The inhibition of CGRP and its receptor has proved effective in relieving migraine pain, which suggests that CGRP does contribute to the onset of a migraine.
What characterises a migraine attack?
Migraine attacks are usually different for different people and can also vary from attack to attack in the same person. Three main types of migraine are usually distinguished:
- Migraine with aura
- Migraine without aura
- Migraine aura without headache, also known as silent migraine.
The ‘aura’ of a migraine refers to sensory disturbances that can include a wide range of neurological symptoms. It usually occurs 5 to 60 minutes before the headache arises. Aura symptoms can include changes in vision, numbness or tingling sensations, weakness and dizziness, or speech and hearing disturbances. Sufferers have reported memory changes, feelings of fear and confusion, and, in rare cases, partial paralysis or fainting.
Stages of a migraine attack
Migraine attacks usually build up in stages:
- Premonitory or warning phase: characterised by tiredness, craving sweet foods, mood changes, feeling thirsty and a stiff neck.
- Aura: which may or may not be present.
- The headache or main attack stage: in this stage patients suffer from head pain, usually on one side of the head. This can be severe or even unbearable.
- Resolution: migraine attacks can fade away or suddenly resolve.
- Recovery or postdrome stage: this stage can last hours or even days and is characterised by a ‘hangover’ type of feeling.
Recognising the different stages of a migraine can be useful to help doctors give the right diagnosis and treatment.
Attacks may differ in length and frequency. Migraine attacks usually last from 4 to 72 hours and most people are symptom-free between attacks.
Treatment of migraine
A migraine cure is not yet available, so treatment aims to relieve symptoms.
Among the medication typically used are:
- Triptans, to help reverse changes in the brain that may result in a migraine attack
- Antiemetics to reduce nausea and vomiting
New migraine medicines
There are currently five new treatments that target CGRP or its receptor. Four have been submitted to the Food and Drug Administration (FDA), one of which has already been approved, and the fifth is in phase III clinical trials. These are:
- Aimovig (erenumab), co-developed by Novartis and Amgen and approved by the FDA on May 17, 2018.
- Eptinezumab (ALD403), developed by Alder Biopharmaceuticals and submitted to the FDA for approval.
- Galcanezumab (LY2951742), developed by Eli Lilly and submitted to the FDA for approval.
- Fremanezumab (TEV-48125), developed by Teva and submitted to the FDA for approval.
- Ubrogepant, developed by Allergan, still in Phase 3 testing.
Falling through ice can result in injuries from the fall, hypothermia, or drowning. Common hazards include:
This document covers general considerations when working on or near ice, for example, when driving on an ice road. For more detailed information about working on ice, see the resources at the end of the document. For information about working in cold temperatures, please see the following OSH Answers:
Fresh water freezes at 0°C, and sea (salt) water freezes at -2°C.
The strength of the ice depends on many factors, including:
For example, solid, clear blue ice forms when water freezes and is generally considered to be the strongest type of ice. White opaque ice (also known as "snow ice"), which forms when air is trapped in the ice, has a high air content and is not strong.
The strength or integrity of the ice is also affected by:
Essentially, there is no absolutely "safe" ice.
Ice must have a minimum thickness to be considered safe to walk or travel on. The thickness and hardness required increase in proportion to the weight of the load and how it is distributed on the ice sheet.
Ice is constantly changing. In addition to the characteristics of the ice mentioned above, the ability of the ice to support a load depends on:
Calculations and thickness charts from Work Safe Alberta provide guidelines to help determine the thickness, strength and safety of the ice.
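For illustration only (the charts and a qualified assessment remain the real authority), the kind of rule of thumb behind such guidelines is Gold's formula, which estimates the allowable load on good-quality ice as P = A·h². The coefficient value and the discount for white ice in this sketch are assumptions drawn from commonly quoted ranges, not figures taken from this document:

```python
# Illustrative only -- never use this in place of official charts and a
# qualified on-site assessment. Gold's formula: allowable load P = A * h^2,
# with h the thickness of sound, clear blue ice in cm and A a risk
# coefficient commonly quoted in the range of roughly 3.5 to 7 kg/cm^2.
def allowable_load_kg(blue_ice_cm: float, a_coefficient: float = 3.5) -> float:
    """Rough allowable load using a conservative coefficient."""
    if blue_ice_cm <= 0:
        raise ValueError("ice thickness must be positive")
    return a_coefficient * blue_ice_cm ** 2

# Example: 25 cm of clear blue ice with the conservative coefficient.
print(allowable_load_kg(25))  # 2187.5 kg

# White "snow ice" is weaker; a common convention is to count it at only
# half its measured thickness before applying the formula.
print(allowable_load_kg(20 + 10 / 2))  # 20 cm blue ice plus 10 cm white ice
```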
In addition, ice bends when a load is placed upon it. While the ice may appear rigid, ice will flex depending on its temperature, the weight of the load, etc. Cracking may occur when the ice is overloaded. Under extreme loads, the cracks can grow and merge to cause the ice to collapse.
If walking or working on foot,
Before travelling on ice, make sure that operators know:
Controls will vary depending on the hazards present, and may include:
Have an emergency plan to be followed in the event of a breakthrough. Plan how the rescue of a person who has fallen through the ice into the water would be done. In addition, drivers in remote areas should be equipped with appropriate survival equipment and food rations in case they are stranded for an extended period of time during whiteout conditions.
General tips include:
Also see the OSH Answers on Protection from Drowning for more information.
More information is available from:
On December 1, 1988, 140 nations around the globe marked the first World AIDS Day.
At the time, misinformation about what a diagnosis meant and how the disease spread was rampant.
"There was a stigma of fear," James Bunn, who founded World AIDS Day, explained to NPR in 2011. "There was a lot that people felt they did not know about the epidemic and they were afraid. And they were right to be afraid because of the things that they were hearing."
Unlike many other viruses, the body cannot get rid of HIV over time, so those who contract the virus have it for life.
Before the advent of effective treatments, AIDS was generally considered a death sentence. HIV would almost inevitably progress into AIDS, which is when the immune system is so weakened by the virus that people start picking up infections and illnesses that a healthy person would easily fight off. After developing AIDS, people lived an average of just ten years without treatment.
Today, about 1.2 million people in the US and 35 million people around the world are living with HIV. Most of them still don't know that they have the virus, which is why increasing access to HIV tests — and encouraging people to get tested — is crucial. But deaths from AIDS have been steadily falling since 2004, when the epidemic peaked.
Now, thanks to those efforts, increased focus from the United Nations, a surge in government funding, and the work of many scientists and doctors, there are effective medications accessed by a growing share of those infected and much better awareness about the disease and how it spreads.
"The epidemic frightened us to the core, brought death to our door and opened our eyes to the injustice of stigma and discrimination faced by the most vulnerable people among us," Michel Sidibé, the executive director of UNAIDS, wrote earlier this year. "AIDS changed everything."
A whole host of treatments are now available that reduce the amount of HIV in the blood (a measure called viral load) to levels where it is virtually undetectable. With these treatments, HIV does not progress into AIDS and people around the world can expect to live on average about two decades longer than people diagnosed in 2001.
In wealthy countries especially, HIV has become more like a chronic illness than a death sentence. In the US and Canada, a 20-year-old diagnosed with HIV today is expected to live nearly as long as the average adult.
However, it's important to note that many treatments are still very expensive and only available to those who can afford them. Less than half of people with AIDS are receiving treatment, according to The Economist, though the UN is actively working to bring treatment to more and more people every year.
In the US, most HIV patients are getting some form of antiretroviral therapy — a cocktail of pills designed to prevent the virus from making copies of itself. It reduces the viral load, suppresses symptoms, and makes it much harder for a patient to pass the virus to someone else. The UN estimates that antiretroviral therapy saved 7.6 million lives around the world between 1995 and 2013.
But it can be a challenge to get regular access to the pills, and the grueling treatment regimen causes long-term liver damage in some patients.
There are some alternatives on the horizon. The FDA just approved a once-daily pill called Genvoya that has better long-term safety than many other HIV drugs. And drug developers are working on an injection treatment that patients would only need every four or eight weeks, removing the need to remember to take a pill (or pills) every single day.
Along with the rise of safe sex practices, there are also now pharmaceuticals that can prevent someone from getting the virus in the first place. When used correctly, a pill called Truvada can reduce the risk of HIV infection in people with a high risk of contracting it (those having unprotected sex or taking intravenous drugs) by up to 92%. The problem is that it's only effective if you remember to take it every single day without fail.
Companies are also still trying to create an HIV vaccine that could eliminate the virus once and for all, though some medical professionals are skeptical that it can be done.
While the number of new HIV infections is steadily declining, and the effectiveness of treatments is improving, AIDS remains a persistent global health crisis.
And even 27 years after the first World AIDS Day, a diagnosis can still come with a powerful stigma. Charlie Sheen, who recently revealed that he is HIV-positive, said on the "Today" show that he paid people millions of dollars to keep his HIV-positive status a secret.
It's not just celebrities who face a stigma, either. One patient diagnosed in 2007 told NPR: "I was more afraid of the stigma attached to the disease than the actual disease."
In context: There is no debate that pacemakers and implanted defibrillators save lives. However, they are not without their risks. One peril that implant recipients face is regular battery replacement.
Pacemaker batteries need to be replaced every five to 10 years, which requires surgery. While considered routine, these procedures are expensive and do come with some risks. Infections and complications are inherent in all invasive procedures. However, there may soon be another solution to surgical battery replacement.
Engineers at the Thayer School of Engineering at Dartmouth College have developed a device that can convert the kinetic energy of the heart into electricity. The invention is about the size of a dime and can generate enough voltage to power various types of implants.
“We’re trying to solve the ultimate problem for any implantable biomedical device,” said Dartmouth engineering professor and lead researcher on the project John X.J. Zhang. “How do you create an effective energy source so the device will do its job during the entire life span of the patient, without the need for surgery to replace the battery?”
"Lin Dong, one of the study’s authors is learning the business and technology transfer skills to be a cohort in moving forward with the entrepreneurial phase of this effort."
The breakthrough involves modifying the lead wire of the implant using a material called polyvinylidene fluoride (PVDF). When a thin polymer piezoelectric film of PVDF is combined with a porous structure, a lattice of “buckle beams” is created. This array can be used to convert even minuscule movements into electricity. The energy can then be fed back into the implant’s batteries, keeping them continually charged. The researchers also said that these modules could simultaneously be used as sensors for real-time heart monitoring.
The results of the three-year Dartmouth study were just published in Advanced Materials Technologies. The team still has two more years of funding from the National Institutes of Health to complete the pre-clinical trials and to gain regulatory approval. Zhang says this timeframe puts commercial applications of self-charging pacemakers and other implants about five years out.
“We’ve completed the first round of animal studies with great results which will be published soon,” he said. “There is already a lot of expressed interest from the major medical technology companies.”
Computer images are merely a collection of coloured pixels on the screen. But in the binary language of computers, labels such as "red" or "purple" have no meaning. So how do computers identify colours? The answer is that every piece of visual hardware or software uses some kind of "colour space" - a model for representing colours...
The most common colour space you'll encounter is RGB, because it's the way your colour monitor works. Your monitor projects various intensities of red, green and blue light onto a screen - thus the term RGB - to produce the full range of hues and tones. RGB identifies every instance of colour by three numbers, called "channels". These specify the intensity of red, green and blue as a number from 0 (dark) to 255 (full intensity).
You can combine these channels to make new colours in the same way you would mix paints. Red and green light together make yellow; green and blue make cyan; and blue and red make violet. Pairing unequal values creates the incremental colours in between (e.g. orange is red with a bit of green).
Colour combinations like the ones above produce pure, bright hues. Using equal values in all three channels produces neutrals ranging from black (all channels at 0) to white (all channels at 255). So colour neutralises as the RGB values approach equivalence: increasing all channels at once adds "white", creating a pale tint; reducing the strongest colours adds "black", creating a dark shade. As you become accustomed to working with RGB, you'll develop an intuitive sense of the values needed for a given colour.
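Here is a minimal sketch of these rules in plain Python (the helper names are ours, purely for illustration): mixing two light sources is per-channel addition with clamping, a tint pushes every channel toward 255, and a shade scales every channel toward 0.

```python
# Additive RGB arithmetic; channels are integers in 0..255.
def clamp(x: int) -> int:
    return max(0, min(255, x))

def mix(c1, c2):
    """Additive mix of two light sources (per-channel sum, clamped)."""
    return tuple(clamp(a + b) for a, b in zip(c1, c2))

def tint(c, amount: float):
    """Move every channel toward 255 (adds "white"); amount is 0..1."""
    return tuple(clamp(round(ch + (255 - ch) * amount)) for ch in c)

def shade(c, amount: float):
    """Scale every channel toward 0 (adds "black"); amount is 0..1."""
    return tuple(clamp(round(ch * (1 - amount))) for ch in c)

red, green = (255, 0, 0), (0, 255, 0)
print(mix(red, green))            # (255, 255, 0): yellow
print(tint(red, 0.5))             # (255, 128, 128): a pale tint of red
print(shade((255, 255, 0), 0.5))  # (128, 128, 0): a dark shade of yellow
```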
Most graphics tools also let you use HSB (hue-saturation-brightness), which follows the paint-mixing metaphor more directly. Hue is a position on a 360-degree colour wheel, with red at 0, green at 120 and blue at 240. Saturation and brightness are both percentages: 100 per cent equals a pure hue, and adding white or black reduces saturation or brightness, respectively, toward 0.
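HSB is the same model most software calls HSV, and Python's standard colorsys module works in that space with every value scaled to 0..1, so converting to the conventions above (hue in degrees, saturation and brightness as percentages) takes only a little arithmetic:

```python
import colorsys

def rgb_to_hsb(r: int, g: int, b: int):
    """RGB channels (0-255) to hue in degrees and saturation/brightness in %."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s * 100, v * 100

print(rgb_to_hsb(255, 0, 0))    # (0.0, 100.0, 100.0): pure red
print(rgb_to_hsb(255, 128, 0))  # (~30.1, 100.0, 100.0): orange, between red and yellow
```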
CMYK defines a colour by the amount of cyan, magenta, yellow and black pigment needed to produce it on paper. The four CMYK channels describe colours more precisely than most monitors can display, so it's used for high-quality print work.
RGB measures each channel from 0 to 255 because that's the range you get from eight bits of data, and eight bits make a byte. The amount of data used to represent a colour is called "colour depth".
Colour depth is important in two respects when working with graphics for the Web: the colour depth of your monitor and the colour depth of the files you use to store your images. Monitor colour depth depends on the capacity your display hardware supports and how the software drivers are configured. Your operating system usually provides some sort of control panel to set the display colour depth. File colour depth depends on the file format you use to store your graphics.
Since typical RGB images use three eight-bit channels, the total is a 24-bit colour depth. When available, full 24-bit colour is called "true colour". A true-colour monitor displays every pixel's colour exactly. The option often appears as Millions of Colours in monitor settings, since 24 bits add up to 16,777,216 RGB combinations. Likewise, a true-colour image file records the full range of colours precisely.
True colour allows more hues than the eye can distinguish; so most operating systems offer the option of 16-bit high colour (Thousands of Colours on Macintosh). In high colour, the monitor actually displays only 32 distinct levels of red, 32 of blue and 64 of green. The visual difference is almost unnoticeable, but reducing the colour depth to 16 bits per pixel boosts video performance. And running your computer system in high colour won't affect your image data: most applications, such as Photoshop or a Web browser, still use the full 24-bit values. The data gets rounded off only when displayed on the monitor. That's why there are very few high-colour image files.
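The usual 16-bit layout assigns five bits to red, six to green (where the eye is most sensitive) and five to blue. A short sketch of that packing (the 5-6-5 layout here is an assumption about a typical implementation) shows the small rounding it introduces:

```python
def pack_rgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit channels into a 16-bit word: 5 bits red, 6 green, 5 blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(word: int):
    """Expand a 16-bit word back to approximate 8-bit channels."""
    r = (word >> 11) & 0x1F
    g = (word >> 5) & 0x3F
    b = word & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

print(unpack_rgb565(pack_rgb565(200, 100, 50)))  # (205, 101, 49): close, not exact
```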
Older, less powerful computer hardware and certain file formats can handle only eight bits per pixel. An eight-bit colour range is rather small for three channels, so eight-bit environments use indexed colour. With indexed colour images, the system or image file maintains a colour table or "palette" of up to 256 colours. The eight-bit value for each pixel identifies which of those colours to use - the computer equivalent of painting by numbers. Indexed colour lets eight-bit displays and images simulate true colour, since the palette colours themselves are 24 bits deep.
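With a recent version of the Pillow imaging library (an assumption of this sketch, not something the text above prescribes), the reduction to an indexed-colour image is a single call:

```python
from PIL import Image  # requires a recent Pillow

img = Image.open("photo.png").convert("RGB")   # a 24-bit true-colour image
indexed = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=256)

print(img.mode, indexed.mode)          # RGB P
print(len(indexed.getpalette()) // 3)  # number of stored (R, G, B) palette entries
indexed.save("photo_indexed.png")
```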
Dithering and antialiasing
Applications that create or display graphics often bump up against the limits of the hardware they run on. Images may have more colours than the monitor can show or details too small for the pixels to render. That's where dithering and antialiasing come in.
Monitors and image files limited to 256 colours can create the illusion of more colours by dithering the available colours in a diffuse pattern of pixels, approximating the desired colour. Dithering is used by operating systems and display applications, such as Web browsers, running on eight-bit monitors. Image editors use dithering to convert true-colour images to indexed colours. Because it can look bad in some situations, most image editors make dithering an option. The alternative to dithering is colour substitution, which uses the closest colour on the palette.
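A short sketch of that choice, assuming a recent Pillow in which quantize() accepts a dither argument:

```python
from PIL import Image  # requires a recent Pillow

img = Image.open("photo.png").convert("RGB")

# Error-diffusion dithering: scatters the quantisation error across
# neighbouring pixels to simulate intermediate colours.
dithered = img.quantize(colors=256, dither=Image.Dither.FLOYDSTEINBERG)

# Colour substitution: each pixel simply snaps to the nearest palette colour,
# which can produce visible banding in smooth gradients.
substituted = img.quantize(colors=256, dither=Image.Dither.NONE)
```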
All computers and printers, regardless of colour depth, render pixels in a grid. This creates problems for images that aren't grid-shaped. The jagged effect produced by the strict division between pixels is called "aliasing", so certain applications use antialiasing to smooth out the image. This interpolates colours where they meet, creating the illusion of smooth nonhorizontal or nonvertical boundaries.
Antialiased type appears smoother and more legible than pixelated aliased type; antialiased images typically look less blocky and more professional. Image editors usually offer an antialias option for most operations. Just bear in mind that antialiased images tend to require more colours to create the interpolated regions.
Colour matching and gamma correction
One problem with the RGB colour model is that it measures colour relative to the hardware being used at the time. A common complaint among designers - and their clients - is that graphics developed on one platform don't look the same on another. For example, an image that looks great on a PC may appear pale or washed out on a Macintosh.
The problem is that all monitors are not alike, and it goes deeper than ambient light or the brightness knob. The relation between RGB values and the actual colour displayed on the screen is almost never linear. For example, a red channel set to 200 should theoretically be twice as bright as a red channel set to 100, but it usually isn't. And the actual relation, called gamma, varies from computer to computer; so even if one colour matches, most of the rest won't.
The images below simulate the differing gamma effects of the PC and the Macintosh:
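To make the numbers concrete, here is a minimal sketch of a gamma transfer function; the 2.2 exponent is a commonly quoted PC value, used here purely for illustration:

```python
def displayed_intensity(value: int, gamma: float = 2.2) -> int:
    """How bright a stored 0-255 channel value actually appears at a given gamma."""
    return round(255 * (value / 255) ** gamma)

def gamma_correct(value: int, gamma: float = 2.2) -> int:
    """Pre-compensate a channel value so it displays at its intended brightness."""
    return round(255 * (value / 255) ** (1 / gamma))

print(displayed_intensity(100))  # 33: a mid-range value displays far darker than linear
print(gamma_correct(100))        # 167: the value to store so it appears as 100
```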
Raster vs. vector
No, it's not some ancient Greek family tragedy. When you start working directly with image files, the way the image data is recorded determines your options for changing it.
On a computer monitor images are nothing more than variously coloured pixels. Certain kinds of image-file formats record images literally in terms of the pixels to display. These are raster images and you can edit them only by altering the pixels directly with a Bitmap editor. Photoshop and Paint Shop Pro are two of the most popular Bitmap editors.
Vector image files record images descriptively in terms of geometric shapes. These shapes are converted to Bitmaps for display on the monitor. Vector images are easier to modify because the components can be moved, resized, rotated, or deleted independently. PostScript is a popular vector format for printing, but, so far, Macromedia's Flash is the closest thing to a standard vector format on the Web. In an attempt to make it an industry-wide standard, Macromedia opened its Flash file format in April 1998, making it freely available to content and tools developers. The only W3C-supported vector format still under development is Scalable Vector Graphics (SVG).
This distinction can loom large, e.g. when clients or co-workers ask you to alter the text on an image. Chances are the image is stored in a raster-formatted image file, so the change won't be as easy as they think. You'll have to alter the wording by changing the individual pixels themselves. Bear this in mind when creating images you might have to modify later.
True vs. Web image formats
Any file that is stored on a computer or sent over the Internet is in a specific format. Images are no exception and there are a wide variety of image formats in use today. Your choice of image format is based on a variety of factors such as whether you plan on editing the image in the future, whether you want the smallest possible image for downloading over the Web, or what image editing tools you have at your disposal.
When you want to save or keep a copy of an image for further editing, you need to pick a format that records the image correctly, without losing any details. These are typically called true image formats. As long as you store your original images in a true image format, you can reedit them later without losing any quality.
However, true image formats tend to have large file sizes making them unsuitable for sending over the Internet. For Web images, you want to pick a format that will result in the smallest possible file size. The two most common today are the Graphics Interchange Format (GIF) and the Joint Photographic Experts Group (JPEG). The key is that both of these formats compromise the image for the sake of compression, so you shouldn't use them for original artwork you may want to modify later. (The exception to this is an image with no more than 256 colours, which can be safely stored as a GIF.) Most image editors offer a Save As or Export command to let you safely create separate GIF or JPEG versions for posting on the Web, saving the original in a true image format.
True image formats
A true image format accurately stores an image for future editing. There are dozens, if not hundreds, of existing true image formats and picking the right one depends on which editing tools you plan on using, as well as whether you need to share the files with others who might use a different set of tools.
Every major computer operating system has its own native image format. Applications written for a given operating system are almost guaranteed to support that format, so you can play it safe if someone needs the image and you know the platform they use. Windows and OS/2 use the BMP format, while Macintosh prefers the PICT format. Unix has less of a standard but X Windows and similar interfaces favour XWD files. All of these formats support full 24-bit colour but can also compress images with sufficiently few colours into eight-bit, four-bit, or even one-bit indexed colour images.
TIFF (Tagged Information File Format) is a loss-free, 24-bit colour format intended for cross-platform use and tends to be accepted by most image editors on most systems. The only drawback is that TIFF has evolved into several incompatible versions, so different image editors may not be able to read each other's TIFF files. But recent versions of popular applications such as Photoshop and CorelDraw should have no problem.
By far the most promising loss-free format is PNG, the Portable Network Graphic. It accurately compresses 24- or even 32-bit colour images - the latter of which are 24-bit images with an added eight-bit alpha, or transparency, channel. It also indexes images with 256 or fewer colours for further compression and supports gamma correction. Best of all, it's intended to be a Web format. Although only the most recent applications properly read or create PNGs, the 4.0 browsers already support the format, albeit incompletely.
Web image format: GIF
CompuServe's GIF (Graphics Interchange Format) compresses images in two ways: first, it uses Lempel-Ziv-Welch (LZW) encoding, which stores runs of like-coloured pixels as single units. Second, it limits itself to indexed colour. This means that a GIF can have no more than 256 colours, so you may have to reduce the colours in your images to use it. That's why GIF doesn't work well for photographic or high-colour images.
GIFs with sufficiently few colours achieve greater compression: 128 or fewer colours can be referenced with seven-bit data, 64 or fewer with six-bit data, and so on, down to a one-bit, two-colour GIF. This makes GIF an optimal format for simple line art, and it means there are both limits and rewards to adding or removing colours.
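The relationship between palette size and bits per pixel is easy to compute; this tiny helper (ours, for illustration) makes the thresholds explicit:

```python
import math

def gif_bits_per_pixel(num_colours: int) -> int:
    """Minimum bits needed to index a palette of the given size (1..256)."""
    return max(1, math.ceil(math.log2(num_colours)))

for n in (2, 16, 64, 128, 256):
    print(n, "colours ->", gif_bits_per_pixel(n), "bits per pixel")
# 2 -> 1, 16 -> 4, 64 -> 6, 128 -> 7, 256 -> 8
```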
GIF has a few unique features. A GIF file can contain several images and a duration value for each one to produce animations. It also has limited transparency: one colour in an image's palette can be designated as transparent. This is an either/or arrangement; pixels with colours close to the transparent one will not be partially transparent.
Web image format: JPEG
The JPEG (Joint Photographic Experts Group) format supports full 24-bit colour. It compresses images by accurately recording the brightness of each pixel but averaging out the hues, which our eyes distinguish less accurately. In effect, it records a description of an image, not the literal composition of that image. The viewer's Web browser or graphics application decodes this description into a Bitmap that looks more or less like the original image.
The accuracy of the reconstructed image depends on how much compression is applied - a value you can choose in most JPEG-savvy, image-editing tools. The decoded hues are rendered in sample blocks with diffused shapes. Since these blocks tend to overlap, it's very difficult - and takes a lot of data - to produce a distinct boundary between colours. But this technique works very well for photographic images with gradual colour changes and no sharp edges. Tropical birds, for example, are particularly well suited to the JPEG format. On the down side, JPEGs are notoriously difficult to edit. If you open a JPEG and modify it, you're modifying the interpreted bitmap rather than the JPEG data itself. Resaving as a JPEG will put the interpreted bitmap, defects and all, back through the encoding process, and the resulting image will be further degraded. Never resave a JPEG if you don't have to.
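You can watch that generational loss accumulate with a few lines of Python, assuming Pillow is installed; the file names and quality setting are illustrative:

```python
from PIL import Image

img = Image.open("original.png").convert("RGB")
img.save("gen_0.jpg", quality=75)

# Re-open and re-save ten times. Each cycle re-encodes the decoded bitmap,
# so the compression artefacts compound from generation to generation.
for i in range(10):
    Image.open(f"gen_{i}.jpg").save(f"gen_{i + 1}.jpg", quality=75)

# Compare gen_0.jpg with gen_10.jpg: edges grow ringing artefacts and
# the sample blocks become increasingly visible.
```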
One more caveat: for high-quality printing, the JPEG format supports pixel resolutions beyond 72 dots per inch (dpi). On the Web, anything over 72 dpi is a waste - there's no benefit to higher resolutions as there is when printing onto paper. When saving an image as a JPEG, be sure to double-check the resolution of the image.
You may have heard that something is "encoded in your DNA." What does that mean?
Nucleic acids are essentially the "instructions" or "blueprints" of life. Deoxyribonucleic acid, or DNA, is the unique blueprint for making the proteins that give you your traits. Half of these blueprints come from your mother, and half from your father. Therefore, every person who has ever lived - except for identical twins - has his or her own unique set of blueprints - or instructions - or DNA.
A nucleic acid is an organic compound, such as DNA or RNA, that is built of small units called nucleotides. Many nucleotides bind together to form a chain.
Structure of Nucleic Acids
Each nucleotide consists of three smaller molecules:
- sugar (deoxyribose in DNA)
- phosphate group
- nitrogen base
Within each nucleotide, the phosphate binds to the sugar, which then binds to the nitrogen base.
Uranus is a pretty lonely place. It hasn’t received a visitor since a five-hour flyby in 1986, when Voyager 2 stopped by to stock up on information about the strange world; and it found that Uranus is a very weird world indeed. Temperatures in its atmosphere drop to a brisk -224 degrees Celsius, making it one of the coldest places we know, and it has two sets of huge rings that encompass the planet, along with 27 moons. Uranus is a huge planet that spins slowly on its side and gets overlooked in terms of mission priority as NASA, international space agencies and private companies race towards the Moon and Mars.
Uranus has 27 moons in orbit, divided into three groups: thirteen inner moons, five major moons, and nine irregular moons, all named after characters from the works of Shakespeare and Alexander Pope: Cordelia, Ophelia, Bianca, Cressida, Desdemona, Juliet, Portia, Rosalind, Cupid, Belinda, Perdita, Puck, Mab, Miranda, Ariel, Umbriel, Titania, Oberon, Francisco, Caliban, Stephano, Trinculo, Sycorax, Margaret, Prospero, Setebos, Ferdinand.
Despite having Neptune and Uranus in our solar system, we know very little about ice giants. We know that they are mainly composed of oxygen, carbon, nitrogen, and sulfur, and that they have rocky cores and unusual magnetospheres, but that’s about it.
“The need to explore the ice giants is imperative—they are the least-explored class of planet. The structure and composition of these planets differ significantly from the gas giants [like Jupiter and Saturn]. Current interior models disagree with models of solar system formation on the expected size of the core. The unique magnetic field orientations and dynamo generation have not been well characterized.” – Arizona University researchers
Another interesting phenomenon currently occurring on Uranus is the heating of the upper atmosphere. Astronomers and scientists are at a loss to explain the strange heating, which doesn't fit our current understanding of how the planet's atmosphere works. One theory suggests that huge Uranian storms are causing the atmosphere to heat up at a very quick rate, something witnessed on both Saturn and Jupiter.
Many think one of the reasons that Uranus gets overlooked in terms of mission priority is the lack of visits. One team of researchers and astronomers from Arizona University has developed an idea for a new mission to Uranus that they think would get all the information we need about the planet and ice giants. The project name is Oceanus, and they believe that a 2030 launch would get the probe to Uranus by 2041. The orbiter would study the planet in detail never seen before. The mission would boost the planet's public image, and it might go from the subject of jokes to the latest buzz in the astronomical community.
“In my opinion, the simplest answer why Uranus is ignored [in the media] is because there hasn’t been a space mission to Uranus since the Voyager 2 mission. I was at JPL for the months around that encounter: January 24, 1986. However, poor Uranus’ special encounter was eclipsed even then.” – Amara Graps, scientist at the Planetary Science Institute.
Just a few days after Voyager reached Uranus, the Challenger Shuttle tragically broke apart 73 seconds into its flight, killing all seven crew members. All NASA coverage was tied up with the tragic disaster.
“All of us scientists experienced that emotional roller-coaster too. On that day, we were high on the latest results from Voyager, and then upon watching the Challenger lift-off and subsequent explosion on NASA TV, numb with grief. The press went on with double duty reporting both, but Uranus never really got its full day in the public’s eye.”
All they need now is for NASA to approve the mission, deliver the funding and prepare the equipment for a new mission to Uranus. Easy, right? Whilst we might not be settling on Uranus any time soon, we can learn a great deal about the atmospheres and ecosystems of other worlds by studying those closer to home. The information is likely to give us plenty of surprises, and plenty of useful insight.
The most common symptoms include:
shortness of breath with activity, a persistent cough, coughing up blood, unexplained weight loss, and chest pain
What Is Lung Cancer?
Lung cancer originates in the tissues of the lungs or the cells lining the airways (the bronchi). These cells begin as normal lung cells and still look like lung cells under the microscope, apart from the changes that occur in the process of becoming cancerous.
If lung cancer spreads to other regions of the body, the cells are still lung cancer cells. For example, if lung cancer spreads to the brain, cells taken from the metastasis (growth) in the brain would be identifiable under the microscope as cancerous lung cells. In contrast, some tumours begin in other parts of the body and spread (metastasize) to the lungs.
This is referred to as metastatic cancer to the lungs and not lung cancer. An example would be a breast cancer which spreads to the lungs. This would not be called lung cancer, but rather “breast cancer metastatic to the lungs.”
Lung cancer is the leading cause of cancer deaths worldwide, with 1.8 million new cases being diagnosed yearly.
It is also the most fatal cancer in men, killing more men than prostate cancer, pancreatic cancer, and colon cancer combined. Overall, 27 percent of cancer deaths in the U.S. are due to lung cancer.
Before anyone dismisses these numbers as due to smoking alone, it’s important to point out that even if smoking were banned today, we would still have lung cancer. Lung cancer in never-smokers is the sixth leading cause of cancer deaths in the United States. In fact, the focus on smoking cessation as a way to treat lung cancer has, in some ways, overshadowed research looking into other causes.
How Does Lung Cancer Begin?
Lung cancer usually begins several years before it causes symptoms and is diagnosed. Cells in the lungs become cancer cells after going through a series of mutations that transform them. Gene mutations - changes in the DNA of the cells - may be inherited (as a hereditary predisposition) or acquired (the result of damage from exposure to carcinogens, or cancer-causing substances, in the environment). This accumulation of mutations helps explain a common finding with lung cancer: many people develop lung cancer though they have never smoked, and some people smoke their whole life and never develop lung cancer.
Lung cancer begins - a tumour originates - when a mass of cells becomes, in a sense, immortal: cells dividing and multiplying out of control. Our normal cells, in contrast, are regulated by a series of checks and balances.
Who Gets Lung Cancer?
The average age for lung cancer is 70, and 80 percent of people who develop lung cancer have smoked, but:
Lung cancer occurs in non-smokers - and while lung cancer in men who have smoked is decreasing, lung cancer in non-smokers is increasing. It’s estimated that 20 percent of women who develop lung cancer in the U.S. have never smoked, and that number increases to 50 percent worldwide.
Lung cancer occurs in young adults - It’s estimated that 13.4 percent of lung cancers occur in adults under the age of 40. While this number may seem small, when compared to the incidence of lung cancer overall, it is not. Calculating this out, around 21,000 young adults will die from lung cancer this year (again comparing this to 40,450 breast cancer deaths for women of all ages.) In addition, women are more likely than men to develop lung cancer at a young age, and lung cancer in young adults is increasing.
Types of Lung Cancer
There are two primary types of lung cancer:
Non-small cell lung cancer is most common, being responsible for 80 to 85 percent of lung cancers. This is the type of lung cancer more commonly found in non-smokers, women, and young adults.
Small cell lung cancer is responsible for around 15 percent of lung cancers. These lung cancers tend to be aggressive and may not be found until they have already spread (especially to the brain). They usually respond fairly well to chemotherapy but have a poor prognosis.
Non-small cell lung cancer is further broken down into three types:
Lung adenocarcinoma - Lung adenocarcinoma is responsible for half of non-small cell lung cancers and is currently the most common type of lung cancer. It is also the most common type of lung cancer found in women, young adults, and in people who do not smoke.
Squamous cell carcinoma of the lungs - Squamous cell lung cancer was once the most common type of lung cancer, but its incidence has decreased in recent years. One theory is that the addition of filters to cigarettes drove this shift. Squamous cell cancers tend to occur in or near the large airways - the first place exposed to smoke from a cigarette. Lung adenocarcinomas, in contrast, are usually found deeper in the lungs, where smoke from a filtered cigarette would settle.
Large cell lung cancer - Large cell carcinomas of the lungs tend to grow in the outer regions of the lungs. These are usually rapidly growing tumors that spread quickly.
Other, less common types of lung cancer include carcinoid tumours and neuroendocrine tumours.
Signs and Symptoms of Lung Cancer
Being aware of the early signs and symptoms of lung cancer is a must for everyone, for two reasons:
There is no screening test available for everyone, so for most people the only way to find these cancers early - when they are most treatable - is to know the signs. Recent research tells us that the majority of people in the United States are not familiar with these symptoms.
Lung cancer is common. As noted earlier, it is the leading cause of cancer death in both men and women, and anyone who has lungs is at risk.
Overall, the most common symptoms include:
shortness of breath with activity
a persistent cough
coughing up blood
unexplained weight loss
Of note, the predominant types of lung cancer have been changing over the years, and with them the most common symptoms. In the past, lung cancers such as squamous cell carcinoma and small cell lung cancer were most common. These cancers tend to grow near the large airways of the lungs and cause symptoms early on - commonly a cough and coughing up blood. Now, lung adenocarcinoma, a tumour which tends to grow in the outer regions of the lungs, is most common. These cancers tend to grow for a long time before causing symptoms, which may include mild shortness of breath, subtle weight loss, and a general sense of being unwell.
Diagnosis and Staging
A combination of imaging studies, including CT, MRI, and PET scans may be used to diagnose lung cancer. In addition, a lung biopsy is usually needed to determine the type of lung cancer.
Careful staging - figuring out how extensive a lung cancer is - is important in designing a treatment regimen. Non-small cell lung cancer is broken down into five stages: stage 0 to stage IV. Small cell lung cancer is broken down into only two stages: limited stage and extensive stage.
How Does Lung Cancer Grow and Spread?
One of the differences between benign lung tumours and lung cancer, as noted, is that lung cancer cells have the ability to break off and spread to other regions of the body. This spread, in fact, is the cause of most cancer deaths. One of the differences between cancer cells and normal cells is that cancer cells lack “stickiness.” Normal cells produce substances that cause them to stay together. Without this stickiness, lung cancer cells can travel and grow in other regions, as well as invade nearby structures.
There are four primary ways in which lung cancer spreads. First, it can “invade” tissues locally. Unlike benign tumours, which may push up against nearby tissues, cancers actually penetrate them. This is the reason for the name “cancer,” which is derived from the word for crab; a cancer can send crab-like extensions into nearby tissues.
Lung cancer cells can also break away and spread through either the bloodstream or the lymphatic system to distant sites. In recent years, it’s also been found that lung cancer may travel and spread through the airways in the lungs.
Lung Cancer Treatments
Treatment options for lung cancer have improved significantly in recent years. These include:
Surgery - There are several types of lung cancer surgery which may be done, depending on the size and location of a tumor.
Radiation therapy - Radiation therapy may be given as an adjunct to surgery, to decrease pain or airway obstruction due to a cancer, or in high doses to a localized region in an attempt to cure the cancer (stereotactic body radiotherapy).
Chemotherapy - Chemotherapy usually uses a combination of medications to treat lung cancer.
Targeted therapies - Everyone with lung cancer should have molecular profiling (gene testing) done on their tumour. Targeted therapy drugs are currently available for people whose tumours carry certain genetic changes, including EGFR mutations, ALK rearrangements, and ROS1 rearrangements. Ask your oncologist for an updated list of treatable mutations.
Immunotherapy - In 2015, two immunotherapy drugs were approved for the treatment of lung cancer. In some cases, these drugs have resulted in long-term survival even for those with the advanced stages of lung cancer.
A relatively new type of cancer care is termed palliative care. Palliative care is care designed to address the full spectrum of medical needs for people with cancer, including physical, emotional, and spiritual support. Unlike hospice care, palliative care can be used for anyone, even if you have a cancer which is considered curable. Early studies have found that, in addition to improving the quality of life for people, this care may also improve survival.
Where in the Gambia?
For information, diagnosis, and treatment: EFSTH, MRC, and a number of NGO and private clinics. See also the France De Gaulle Njie Foundation website, email [email protected], or send text only to Dr Azadeh on 002207774469/3774469.
Author: Dr Azadeh, Senior Lecturer at the University of the Gambia, Senior Consultant in Obstetrics & Gynaecology, Clinical Director at Medicare Health Services. |
Bacteria may lack a true immune system, but this does not leave them defenseless against bacteriophage viruses and other pathogens. A system of genomic sequence elements called clustered regularly interspaced short palindromic repeats (CRISPR) and various CRISPR-associated proteins (Cas) help to recognize and destroy foreign genetic material delivered by such invaders.
An international research group led by Akeo Shinkai from the RIKEN SPring-8 Center and John van der Oost of Wageningen University in the Netherlands has now dissected one such CRISPR-Cas pathway, revealing functional insights that also highlight important differences in how these systems operate across bacterial species.
The researchers focused their attention on Thermus thermophilus, a bacterium that thrives at high temperatures and features a relatively simple and compact genome, making it amenable to experimental work. Of the bacterium's multiple CRISPR-Cas pathways, the researchers explored the pathway known as subtype III-B, which targets foreign RNA rather than DNA.
This is especially great for Middle School and Elementary School Spanish classes. Teach students the words "Vive, visita, el bebé, ¡está sorprendida!" using a Total Physical Response and Storytelling approach. First, introduce the words in the PowerPoint and create a corresponding motion for each word. Then, show the pictures in the PowerPoint to see if students can generate the correct Spanish word. Once they have internalized this vocabulary, move on to the Mad-Libs style story. Children LOVE to create the story and volunteer information to make it humorous. This is an excellent way for students to really establish meaning with the vocabulary and internalize it. Afterwards, go through the PowerPoint again and have the students act out the story that they have created - a great assessment of student comprehension. |
Every student in Australia in years 3, 5, 7, and 9 is expected to sit an annual national assessment in reading, writing, language conventions (spelling, grammar and punctuation) and numeracy. This test is known as the National Assessment Program - Literacy and Numeracy (NAPLAN). All government and non-government educational authorities have contributed to the development of NAPLAN material. Although NAPLAN is not part of the Australian Curriculum, NAPLAN tests provide a snapshot of how kids around the country answer a particular set of maths and English test questions on one day.
What are the Benefits of NAPLAN?
NAPLAN is a measure through which the government, educational authorities, schools, parents, and teachers can determine whether young Australians have the literacy and numeracy skills that form the basis of all other learning and of productive, rewarding participation in the community. NAPLAN provides information on how students are performing, which helps teachers and students work on the areas where students are weakest. It also gives schools and the system the ability to measure their students’ achievements against national minimum standards and to compare student performance across states. NAPLAN tests are one aspect of each school’s assessment and reporting process and do not replace the extensive, ongoing assessments made by teachers about each student’s performance. Because students are not allowed to use any electronic devices during the tests, the results reflect only what the students themselves know, which makes for a fairer judgment. NAPLAN does have the potential to shed interesting light on students’ learning at a point in time and to offer valuable information about what needs to be done next to improve their literacy and numeracy.
Not only does NAPLAN benefit the educational authorities, teachers, and parents; in some ways it also benefits the students who sit the test. Preparing for it makes students more capable in literacy and numeracy, and as we all know, literacy and numeracy are crucial for the basic as well as the advanced work we do in our daily lives.
NAPLAN tests and NAPLAN Results 2017
NAPLAN tests take a long time to develop and involve experts from across Australia. Specialist writers are engaged to develop the test questions, and many test items are trialled with small samples of students to inform decisions about which items make it into the final test. NAPLAN tests are conducted at schools and are administered by classroom teachers, school deputies or the principal. Students answer multiple-choice questions (MCQs), which are scanned so that the data can be captured electronically.
The data from NAPLAN is collated onto the My School website. The Australian Primary Principal Association does not support the publication of results that allow for intra-school comparisons. Some schools are using the results from NAPLAN on an individual basis when making decisions regarding enrollment. Test results are directly linked to the federal funding agreement with the states.
Although some may criticise this means of judging literacy and numeracy, it is a reliable way to gauge a student’s true ability and potential. By testing students in years 3, 5, 7, and 9, the government, educational authorities, teachers, and parents can see very well how students are performing. Working without electronic devices such as calculators gives a true picture of a student’s numeracy, which is essential for basic daily tasks.
Link to My School Website: https://www.myschool.edu.au/ |
Why does the Sun rise in the east and set in the west?
The Sun, the Moon, the planets, and the stars all rise in the east and set in the west. And that's because Earth spins -- toward the east.
For a moment, let us ignore Earth's orbit around the Sun (as well as the Sun's and solar system's revolution around the center of the Galaxy, and even the Galaxy's journey through the universe). Let us just think about one motion -- Earth's spin (or rotation) on its axis.
Earth rotates or spins toward the east, and that's why the Sun, Moon, planets, and stars all rise in the east and make their way westward across the sky. Suppose you are facing east - the planet carries you eastward as it turns, so whatever lies beyond that eastern horizon eventually comes up over the horizon and you see it!
People at Earth's equator are moving at a speed of about 1,600 kilometers an hour -- about a thousand miles an hour -- thanks to Earth's rotation. That speed decreases as you go in either direction toward Earth's poles. In the state of Texas, you'd be moving at about 1,400 kilometers an hour due to rotation. If you're in southern Canada, you're moving at only about a thousand kilometers an hour. Now think about what would happen if you stood exactly at the North Pole. You'd still be moving, but you'd be turning in a circle as Earth spins on its axis.
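To make the latitude dependence concrete, here is a minimal sketch (not from the original article): it assumes an equatorial circumference of about 40,075 km and a simple cos(latitude) model, ignoring the small difference between the solar and sidereal day.

```python
import math

EQUATORIAL_CIRCUMFERENCE_KM = 40075  # approximate circumference at the equator (assumed)
HOURS_PER_ROTATION = 24              # one full spin, ignoring the solar/sidereal distinction

def rotation_speed_kmh(latitude_deg: float) -> float:
    """Approximate eastward speed due to Earth's rotation at a given latitude.

    The circle you trace around Earth's axis shrinks with latitude as
    cos(latitude), so your speed shrinks the same way, reaching zero at the poles.
    """
    equatorial_speed = EQUATORIAL_CIRCUMFERENCE_KM / HOURS_PER_ROTATION  # ~1,670 km/h
    return equatorial_speed * math.cos(math.radians(latitude_deg))

for place, lat in [("Equator", 0.0), ("Texas (~31 N)", 31.0),
                   ("Southern Canada (~49 N)", 49.0), ("North Pole", 90.0)]:
    print(f"{place:24s} {rotation_speed_kmh(lat):6.0f} km/h")
```

With these assumptions the script prints roughly 1,670, 1,430, 1,100, and 0 km/h, which matches the rounded figures quoted above.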
You may wonder why you don't feel this speed: it's because human beings have no 'speed organs' that can sense absolute speed. You can only tell how fast you are going relative to something else, and you can sense changes in velocity as you either speed up or slow down. But we cannot really tell whether we are moving at a constant speed unless something else tips us off! Think about this:
Suppose you are in a car traveling down the road. How can you tell how fast you are going? The speedometer tells you how fast your wheels are turning, but you could be standing dead still, spinning your wheels trying to get off a patch of ice, so let's remove the speedometer from the car. As you go faster, your car may vibrate more because it's working hard, but these vibrations only tell you the car is working hard, not what velocity you are moving at. So get a good car and some cushions to remove the vibrations you feel. Then get some good earplugs so any misleading sounds won't distract you. And paint your car windows black so that the motion of objects relative to you doesn't throw you off. Remember that when it comes to the rotation of the Earth, everything around you is moving at the same speed - you, the trees, the houses, your pet dog, everything. OK, now, how fast are you going? You have no way to tell. You don't feel like you're moving. You feel just as you would if you were standing still! Human beings have no ability to tell absolute motion.
The StarChild site is a service of the High Energy Astrophysics Science Archive Research Center (HEASARC), Dr. Alan Smale (Director), within the Astrophysics Science Division (ASD) at NASA/ GSFC.
StarChild Authors: The StarChild Team
StarChild Graphics & Music: Acknowledgments
StarChild Project Leader: Dr. Laura A. Whitlock |
The Majiayao Culture is one of the most important Neolithic cultures in Chinese prehistory. In 1923 the Swedish geologist and archaeologist Johan Gunnar Andersson excavated several archaeological sites, including Banshan and Machang in Gansu province in northwest China. He collected a large number of distinctive painted pottery vessels, which are today housed in the Museum of Far Eastern Antiquities in Stockholm. Dr. Andersson and his colleagues first affiliated these cultural materials with the Yangshao culture of central China, located well to the east of where he was excavating. The finds became well known in the Western world for most of the 20th century as representative examples of this Chinese Neolithic culture. After 1949, Chinese archaeologists carried out further archaeological investigations in the region and redefined this material as the Majiayao culture, in order to reflect its separate existence in the upper reaches of the Yellow River (an area that covers parts of Gansu, Qinghai, and Ningxia provinces).
As a result of further archaeological discoveries in the later 20th century, the Majiayao culture is now recognized as a major Neolithic manifestation in the northwestern region of China, dated approximately to 3300 - 2000 BC. Today, over 2,200 Majiayao sites have been identified, but only about 50 of them have been excavated. Archaeological evidence implies that the Majiayao culture was a simple egalitarian society with an economy based on farming (made possible by the domestication of millet) and animal husbandry. The Majiayao culture is especially well-known for its mass production of pottery. The most characteristic artifacts are large pottery vessels embellished with spiral circles, undulating lines and geometric patterns painted in black-and-red (and sometimes white) on the top part of the vessels. New archaeological research suggests that Majiayao painted pottery was influenced by Yangshao pottery designs from central China, and reached its peak during the 3rd millennium BC, when painted pottery disappeared in other parts of China. Based on changes in pottery forms and designs, Majiayao ceramic production is further divided into three phases: Majiayao (3300 - 2600 BC), Banshan (2600 - 2300 BC), and Machang (2300 - 2000 BC).
Source: Royal Ontario Museum 2006 |
Ratios and Proportions: Proportions and Percents
Students will learn how to use proportions to solve problems with percentages.
• Students will review the basics of percents.
• Students will learn to identify and solve four different types of percentage problems.
• Students will practice solving those four types of percentage problems.
Sixth Grade - Seventh Grade - Eighth Grade - Ninth Grade - including special education students
Print the classroom lesson plan and worksheet questions (see below).
I. Introduction and Review
- "Today we are going to continue our work with proportions. We have been solving them and working with algebraic expressions too. Now, we are going to begin working with proportions and percents."
- "Let's take just a minute and review a little bit about percentages."
- Note: Use this review as a whole class review and discussion. Fill in any of the missing information on percentages with the students. Stress the connection between ratios and percentages given that a percent is compared to 100.
- "Think back now. A percentage is related to fractions and decimals, given that all three of these represent a 'part' of a 'whole'. Sometimes you may have the whole thing, or only a part of it. You can convert fractions to decimals, decimals to percents, and back again."
- "Here are some things to remember about percentages. Write these things in your notebooks."
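A worked example that could accompany those notes (an illustration in the lesson's spirit; the numbers are invented for demonstration): to answer "What is 30% of 60?", set the part-to-whole ratio equal to the percent-to-100 ratio and cross-multiply.

```latex
% Set part/whole equal to percent/100, then cross-multiply
\frac{x}{60} = \frac{30}{100}
\;\Rightarrow\; 100x = 30 \times 60 = 1800
\;\Rightarrow\; x = 18
```

So 30% of 60 is 18; the same setup solves the other problem types by placing the unknown in a different position in the proportion.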
Print this printable worksheet for this lesson: |
East African lakes, group of lakes located in East Africa. The majority of the East African lakes lie within the East African Rift System, which forms a part of a series of massive fissures in the Earth’s crust extending northward from the Zambezi River valley through eastern and northeastern Africa and the Red Sea to the Jordan River valley in southwestern Asia. In East Africa itself the southern, eastern, and western branches of the system can be discerned.
Occupying the Southern Rift Valley is Lake Nyasa (Lake Malawi), which drains into the Zambezi River. Marking the course of the Western Rift Valley are Lakes Tanganyika, Kivu, Edward, and Albert—the first two of which are situated within the drainage basin of the Congo River, while the other two constitute part of the Nile River drainage system. With the exception of Lake Rudolf (Lake Turkana), the lakes found in the Eastern (Great) Rift Valley are smaller than those of the Western Rift and constitute several independent inland drainage basins.
Located in a shallow downwarping between the Eastern and Western Rift highlands is Lake Victoria, which among the freshwater lakes of the world has a surface area that is second only to that of Lake Superior in North America. On a smaller scale, East Africa also includes some fine examples of crater lakes, and on Mount Kenya and in the Ruwenzori (Rwenzori) Range are found glacial tarns, small lakes each of which occupies a basin, or cirque, scraped out by a mountain glacier.
Of the eight largest lakes—Victoria (26,828 square miles [69,485 square km] in area), Tanganyika (about 12,700 square miles [32,900 square km]), Nyasa (11,430 square miles [29,600 square km]), Rudolf (2,473 square miles [6,405 square km]), Albert (2,160 square miles [5,594 square km]), Kivu (1,040 square miles [2,693 square km]), Rukwa (1,000 square miles [2,590 square km]), and Edward (830 square miles [2,150 square km])—only one, Rukwa, in Tanzania, lies wholly within a single political entity. The northern shore of Kenya’s Lake Rudolf is in Ethiopia; Lake Victoria is divided among Uganda, Tanzania, and Kenya. In the west the international boundary between Uganda and the Democratic Republic of the Congo runs through the centre of Lake Albert; the same boundary places two-thirds of Lake Edward in the Democratic Republic of the Congo and the remainder in Uganda. Lake Kivu lies between Rwanda and the Democratic Republic of the Congo; the waters of Lake Tanganyika are shared by Tanzania, the Democratic Republic of the Congo, Burundi, and Zambia. Malawi and Mozambique have territorial waters on Lake Nyasa, and since its independence Tanzania has also advanced claims to its territorial waters because it also occupies a part of the lakeshore.
The surface levels of the lakes on the irregular floor of the Eastern Rift Valley are of varying heights, rising from Lake Rudolf (1,230 feet [375 metres] above sea level) through Lake Baringo (3,200 feet [975 metres]) to Lake Naivasha (6,180 feet [1,880 metres]), after which there is a decrease in height to Lake Magadi (1,900 feet [580 metres]). The Omo River from the Ethiopian Plateau is the only perennial affluent of Lake Rudolf, which is the lowest of the major East African lakes. Although it has the typical elongated form of a rift lake, Rudolf is relatively shallow (240 feet [70 metres] at its centre, although it reaches about 400 feet [120 metres] in a small depression at the southern end), as are the other lakes of the Eastern Rift. Its eastern and southern shores are bounded by rocky margins of volcanic origin; the lower western and northern shores are mostly composed of sandy sediments. South of Lake Magadi the splaying continuation of the Eastern Rift into northern Tanzania is indicated by Lakes Natron, Manyara, and Eyasi.
In the Western Rift Valley the northwestern and southeastern shores of Lake Albert are flanked by steep escarpments; wild ravines and fine cascades form a conspicuous feature of these geologically young tectonic (fault-formed) landscapes, the scale being greater on the Democratic Republic of the Congo side than on the Uganda side. There is a considerable lowland area at the northern end of the lake, where, about 20 miles (32 km) below Murchison (Kabalega) Falls, the Victoria Nile enters Lake Albert, to leave almost immediately as the northward-flowing Albert Nile. The southern end of Lake Albert contains an alluvial flat and a papyrus-choked delta formed by the Semliki River, which both carries the outflow from Lake Edward and provides drainage from the rain-soaked Ruwenzori Range.
Lake Edward, of which the deepest part (367 feet [112 metres]) is in the west under the Congo Escarpment, receives the Rutshuru River as its principal affluent. On the northeast it is connected with Lake George by the 3,000-foot- (915-metre-) wide Kazinga Channel. At an elevation of approximately 3,000 feet above sea level, the surfaces of both lakes are nearly 1,000 feet (300 metres) higher than that of Lake Albert.
Separating the basins of Lake Edward and Lake Kivu are the Virunga Mountains, which thus divide the drainage system of the Nile River from that of the Congo River. With clear water, a broken shoreline, and a mountainous setting, including the relatively recent volcanoes (Nyamulagira and Nyiragongo) of the Virunga Mountains, Lake Kivu possesses outstanding scenic attractions. Its outflow is southward by the turbulent Ruzizi River, which drops more than 2,200 feet (670 metres) on its way to Lake Tanganyika. This latter lake, long and narrow, is second only to Siberia’s Lake Baikal in depth, penetrating at its maximum to 4,710 feet (1,435 metres), which is more than 2,400 feet (730 metres) below sea level. Typical, too, are the flanking escarpments, which often rise sheer from the lake; the only sizable lowland is the lower Ruzizi valley. The drainage of the Malagarasi River system enters Lake Tanganyika about 25 miles (40 km) south of Kigoma, Tanz.; at its southern end, on the frontier with Zambia, the 700-foot (210-metre) Kalambo Falls occur where the Kalambo River tumbles over the escarpment. The overflow to the Lualaba River, a tributary of the Congo, is via the shallow and sometimes obstructed Lukuga River outlet on the western side. To the south and west of Lake Tanganyika’s southernmost extreme is Lake Mweru, situated in northern Zambia.
Lake Victoria, with its quadrilateral shape, relative shallowness (maximum depth of about 260 feet [80 metres]), and an area that is more than twice as great, is quite different from Lake Tanganyika. It is set in a region of erosion surfaces instead of tectonic escarpments and is bounded by a shoreline of considerable variety: on the west a straight cliffed coast gives way to papyrus swamp; headlands and deep indentations mark the intricate northern shores; a major inlet, the Winam (formerly Kavirondo) Bay, is located on the east; and on the southern shores the Speke, Mwanza, and Emin Pasha gulfs lie amid rocky granitic hills. Ukerewe, situated in the southeast, is the largest island in the lake; in the northwest the Sese Islands constitute a major archipelago. At the entrance to the channel leading to Jinja, Ugan., lies Buvuma Island. There are numerous other islands, most being of ironstone formation overlying quartzite and crystalline schists. The Kagera River, largest of the affluents, may be considered the most remote headstream of the Nile. The outlet of the lake and the conventional source of the Nile is at Jinja, where, after flowing over the now-submerged Ripon Falls, the Victoria Nile begins its journey toward the Mediterranean Sea through the sluices of the Nalubaale and Kiira dams at Owen Falls.
Lake Rukwa is situated in a northwest-southeast-trending side rift, parallel with the southern part of the Lake Tanganyika rift and continuing the structural alignment of the northern end of Lake Nyasa. Rukwa lies on flat alluvium (soil, gravel, sand, and stone deposited by running water) and is extremely shallow (20 feet [6 metres] at the greatest depth); any change of surface level causes great fluctuations in its area. Southeast of Lake Rukwa, beyond the volcanic mass of Rungwe Mountain, Lake Nyasa, third in size among the East African lakes, has the same characteristics as Lake Tanganyika but in less-extreme form. It is deepest in the north (2,310 feet [704 metres]), where on the Tanzanian side the Livingstone Mountains rise precipitously from the lake surface. In the northwest, however, there is a well-defined alluvial plain. From the east come the waters of the Ruhuhu River, and numerous streams flow across the Malawi Plateau to the western shore. In the shallower southern part there are several lake plains and sandy beaches. The Shire River outlet at the southernmost end has an extremely small gradient in its upper section, but in its middle course the river is interrupted by cataracts before emptying into the Zambezi. Other lakes in the region include Lakes Chilwa, Chiuta, and Phalombe, in southern Malawi.
Geology, climate, and hydrology
The East African rifts attained their present form mainly as a result of earth movements during the Pleistocene Epoch (about 2,600,000 to 11,700 years ago), and the lakes must have been formed after the landscapes in which they are set. The shallowness of such lakes as Albert (maximum recorded depth 190 feet [58 metres]) and Edward (367 feet [112 metres]) is the result of the thick layers of sediment upon which they rest. In some lakes too, volcanic activity has played a part in blocking drainage and shaping shorelines. Raised beaches indicate that the lake levels were higher and their surfaces were more extensive during rainy phases of the Pleistocene Epoch. In the Eastern Rift, for instance, Lakes Rudolf and Baringo were formerly part of one lake, from which there was a link via the Sobat River with the White Nile. Subsequent drier conditions caused the eastern lakes gradually to dwindle in size, with many fluctuations, to their present independent status.
The lakes of the Western Rift have experienced their own geologic changes. Before the geologically recent blocking of the rift by the eruption of the Virunga Mountains, the drainage of Lake Kivu was probably northward into the Nile. The fossil record suggests that the organic content of Lakes Edward and George was considerably reduced during a period of intense volcanic activity around their shores, although several species of fish (such as Tilapia nilotica) appear to have survived. About 100,000 years ago, when the rise in the shoulders of the Western Rift resulted in the reversal of the westward-flowing drainage of such rivers as the Kagera, Katonga, and Kyoga-Kafu, Lakes Victoria and Kyoga were formed by water diverted from the northern section of the rift. Eventually, however, the drainage of most of Uganda returned to the Western Rift and to the Nile. The subsequent reductions in the level of Lake Victoria are indicated by a series of strandlines around its shores.
Those East African lakes that lie in inland troughs at altitudes of about 2,000 feet (610 metres) or less have a hot, dry climatic environment with a high potential evaporation. In the higher parts of the rift floors, however, climatic conditions approach those of the flanking highlands. In the Western Rift moist air from the Congo basin is a source of the more humid conditions prevailing over Lakes Tanganyika and Kivu. The glacial tarns of Mount Kenya and the Ruwenzori Range are in the frigid zone.
Large lakes tend to create or influence their own climates, and this effect is most marked on the western and northern margins of the immense mass of Lake Victoria. There, in a zone 30 to 50 miles (48 to 80 km) wide, temperatures rarely rise above the low 80s F (high 20s C) or fall below the low 60s F (mid-10s C), and precipitation is well distributed throughout the year. Moreover, annual precipitation is high, being heaviest over the lake and decreasing inland from an average of 50 to 60 inches (1,300 to 1,500 mm) at the lakeshores. At the northern end of Lake Nyasa, annual precipitation of about 120 inches (3,050 mm) results from similar influences reinforced by air convergence caused by the funnel-shaped relief at the head of the lake. Sudden and dangerous storms are likely to arise over the waters of all the major lakes.
The levels of the East African lakes are perceptibly sensitive to climatic fluctuations. Average seasonal ranges of level are small: no more than 1 foot (0.3 metre) on Lake Victoria, 1.3 feet (0.4 metre) on Lake Albert, and 3 to 4 feet (0.9 to 1.2 metres) on Lake Nyasa. Longer-term fluctuations, with consequential effects on the shorelines, are greater; during the 20th century the extreme range recorded on Lake Victoria was 10.3 feet (3 metres), compared with 17.3 feet (5 metres) on Lake Albert and 18.8 feet (5.7 metres) on Lake Nyasa. In each case the recorded maximum occurred in the early 1960s. The effects of drought are enhanced in small, shallow lakes: Lake Nakuru, in Kenya, dried up completely in 1939–40; at the end of 1949 Lake Rukwa was estimated at one-fifth its normal size; and Lake Chilwa, in Malawi, suffered a drastic reduction of area in 1967–68.
There is a significant correlation between precipitation and variations in the level of Lake Victoria, and Lakes Kyoga and Albert follow—with an appropriate time lag—the conditions of Lake Victoria. In addition to rainfall, other factors affect the longer-term fluctuations of Lakes Tanganyika and Nyasa. The Lukuga River outlet of Lake Tanganyika tends to become blocked intermittently by silting, consolidated by swampy growth. Similarly, in the flat valley of the upper Shire River, periods of erosion alternate with the building of bars of silt and sand reinforced by reeds. As a result, between 1915 and 1935 there was virtually no outflow from Lake Nyasa.
Many of the lakes, especially those in the Eastern Rift, are brackish, but Baringo and Naivasha are exceptions in that they are freshwater lakes and are believed to have subterranean outlets. At the other extreme of salinity, Magadi is a soda lake, in which the continuing source of sodium carbonate (natron) appears to be alkaline waters of deep-seated origin. Lakes Edward and George have the highest salinities among the lakes of the Western Rift, but the alkalinity is not excessive. The problem of the deeper lakes, such as Tanganyika, Nyasa, and Kivu, is that their deeper waters are permanently deoxygenated and thus constitute a biological desert: three-fourths of the volume of Lake Tanganyika and 99 percent of that of Kivu are within this category. Moreover, all three lakes contain lethal amounts of hydrogen sulfide in their deeper waters, and Lake Kivu also contains vast quantities of methane.
Plant and animal life
The vegetation setting of the lakes varies from the semidesert, in which Lake Rudolf is situated, to the patches of closed evergreen forest on the western and northern shores of Lake Victoria. Between the two extremes and in accordance with the position of the individual lakes, bushland and thicket, grassland, savanna, or open woodland occur. The oil palm, which is characteristic of western Africa and of the Congo region, is found on the shores of Lake Tanganyika. Heavily populated and intensively cultivated areas marginal to Lakes Victoria and Nyasa present vegetation types that have been much modified by human activity. The lakeshores may consist of open landscapes of headland or beach or may contain plants associated with swamps, such as the giant sedge, Cyperus papyrus, which is the most prevalent.
Among the main genera of fish in the East African lakes are the mouthbreeders Tilapia, much the most important in number of species and in total quantity; Haplochromis (which, like the Tilapia, belong to the Cichlidae family), a group of small perchlike fish; Clarias (barbel) and Bagrus among the catfish; Hydrocynus (tiger fish); various carps, including Labeo, Barilius, and Barbus; Protopterus, a lungfish; Mormyrus, a member of the elephant-snout fish family; and Stolothrissa tanganicae (dagaa), a small sardinelike fish.
The more strongly saline lakes, such as Nakuru, Elmenteita, Manyara, and, above all, Magadi and Natron, have a severely limited fish life. Lake Kivu also has a fish population that is neither varied nor abundant. Although fish are present in enormous quantities in Lake Rukwa, the number of species is not large, and the stock is dominated by the endemic Tilapia rukwaensis. Successive droughts such as that of 1949 explain why there are so few species in Lake Rukwa; the years immediately following 1949, on the other hand, provide an excellent example of the amazing recovery powers of tropical fish populations.
The majority of the lakes, though, have a rich and varied fish life, of which a high proportion of species are endemic to the individual lake. The Cichlidae, for example, are especially prone to form new species, and there are between 100 and 200 species of the family in Lakes Victoria and Kyoga.
Lake Albert has a fish life that is related to that of the Nile; it includes Nile perch, tiger fish, and Polypterus (bichir). The physical barrier of Murchison Falls, situated near the northern end of Lake Albert, marks the frontier of a separate faunal province formed by Lakes Victoria and Kyoga. Several of the Lake Albert genera are not found in these two lakes, which contain many unique species. Similarly, the rapids on the Semliki River have prevented the introduction of fish species from Lake Albert into Lakes Edward and George, which otherwise are particularly rich in fish. On the other hand, the presence of Nile perch, tiger fish, and bichir in Lake Rudolf serves to indicate its former connection with the Nile. The transplantation of fish by humans, however, has caused a man-made zoogeographic revolution, the full effects of which cannot yet be discerned.
The hippopotamus is ubiquitous around the lakeshores, except those of Lake Kivu; the crocodile is also widespread, although absent from Lakes Edward, George, and Kivu, each of which is sheltered from the spread of this reptile by falls in the outflow river, with cool mountain torrents and sunless forest as additional deterrents. Traditionally there has been an inverse relationship between the density of game and that of human settlement. The establishment of national parks and game reserves, however, has encouraged the increase of game population, although widespread poaching has created serious problems. Among the variety of game to be seen in the neighbourhood of the lakes are elephant, buffalo, and various antelopes.
Among the resident and migrant birds in evidence, waterfowl are especially noticeable. Several of the Eastern Rift lakes—such as Nakuru, Elmenteita, and Manyara—have historically been famous for their vast congregations of flamingos. Forming the basis of a national park in which the emphasis is on aquatic birds, Lake Nakuru is an ornithologist’s paradise; Lake Edward and the Kazinga Channel are also notably rich in birdlife. The fish-eating birds—cormorants, darters, and kingfishers—are also part of the ecology of the lakes. |
The Republic of the United Provinces
The Netherlands emerged as a distinct political entity in the late 16th century, when religious and economic suppression by the ruling Habsburgs, by then the kings of Spain, led to the revolt of the Low Countries under the leadership of William of Orange. The seven northern, predominately Calvinist, provinces proclaimed independence in 1579, while the Catholic south (now Belgium) remained under Habsburg rule. After a war that spanned 80 years, the independence of the Republic of the United Provinces was formalised by treaty in 1648. The Republic was a confederacy governed by an assembly of the seven sovereign provinces (the Staten Generaal), with power shared between the House of Orange and the leaders of the province of Holland (by far the wealthiest of the seven).
The golden century and the return of the monarchy
The Dutch provinces became the leading maritime nation in the world during the 17th century, which is regarded as the "golden century" of Dutch history. It was a prosperous period and an era of great artistic and intellectual achievement, especially in architecture and painting, philosophy and the natural sciences. Colonisation by the Dutch East India Company took place in Indonesia, south India and Ceylon (now Sri Lanka) and under the West India Company in the West Indies.
The 18th century was a period of stagnation for the United Provinces. From 1795 to 1813 the region was ruled by France, first as a protectorate (the Batavian republic) and from 1806 as the Kingdom of Holland under Louis Bonaparte. After the defeat of Napoleon in 1814 the House of Orange was restored and has since remained. The new Kingdom of the Netherlands initially included the southern Low Countries, but historical and religious differences were exacerbated by William I's authoritarian government. The Belgian revolution of 1830 was supported by other European powers and resulted in the formation of the Kingdom of Belgium in 1831.
The development of democracy
The series of revolts that spread across Europe and reached the Netherlands in 1848 sapped the confidence of the Dutch monarchy, culminating in a revision of the constitution that established a parliamentary democracy and left only nominal powers to the monarch. As the originally restricted electorate was gradually expanded, political parties began to emerge in the late 19th century. Universal male suffrage was established in 1917, and women were given the vote in 1919.
From 1917 onwards a party system developed, based on segmented confessional and ideological backgrounds. Dominating this system were the confessional parties representing Roman Catholics and Calvinists; the other two main pillars were the liberals and the socialists. Increasing disengagement between these groups led to what became known as the zuilen or "pillarisation" of Dutch society. Schools, trade unions, business groupings and sports clubs were strictly organised along Roman Catholic, Calvinist, liberal and socialist lines.
The Netherlands' successful policy of neutrality during the first world war (1914-18) could not be sustained during the second world war (1939-45), and the country was occupied by Nazi forces in 1940. Queen Wilhelmina and the government were forced into exile and the Jewish community was virtually wiped out.
Post-war recovery followed by 1970s recession
The immediate post-war years were difficult, but the Dutch economy made a quick recovery, benefiting from some US$1bn in aid under the US's Marshall Plan and an improving European and world trading environment. Full employment and the discovery of gas resources also helped the Netherlands to build one of the most extensive welfare systems in post-war Europe. After a protracted struggle (which drained Dutch financial and military resources), the Netherlands granted independence in 1949 to its most important colony, the Dutch East Indies (now known as Indonesia).
Growing prosperity and secularisation of society began to reduce the influence of the zuilen structures in the 1960s. A sharp drop in the popularity of the confessional parties resulted in a simultaneous increase in public support for the right-of-centre VVD and a number of new parties, including the centrist constitutional reform party, D66.
In 1973 the PvdA managed to edge the confessional parties out of their customary position at the heart of a coalition government by forming a left-of-centre alliance with D66 and other small parties, under the premiership of Joop den Uyl. While this government was in power, there was a rapid extension of the public sector and the social security system. Unfortunately, this coincided with the deepest economic recession since the second world war, following the first oil crisis and an excessive reliance on gas production, which left the country with a growing public debt and rising unemployment (then known as the "Dutch disease").
The era of Ruud Lubbers
After the fall of the Den Uyl cabinet in 1977, the three confessional parties amalgamated under the banner of the CDA in the subsequent elections. The CDA was able to form a centre-right coalition government with the VVD, under the premiership of the then CDA leader, Andries van Agt, and political debate during this period focused on the shortcomings of the economy.
From 1981 to 1994 the CDA dominated Dutch politics, successively leading three different coalitions, first with the PvdA and D66, then with the VVD, and finally with the PvdA again. Governments were formed under the premiership of the new chairman of the CDA, Ruud Lubbers, who was a shrewd political tactician and mediator. The three Lubbers governments were largely successful in overcoming the "Dutch disease": the public-sector deficit was reduced and wage costs were held down in order to restore the competitiveness of Dutch firms. Policies focused on reining in government expenditure and reforming expensive welfare programmes, not to undermine the welfare state, but to remove extravagance and work disincentives.
The Breathing System of Insects
The tracheal breathing system of insects
Insects do not breathe through their mouths as we do. They do not have lungs, and their blood, which is a watery, yellowish liquid, does not carry oxygen and carbon dioxide around their bodies.
Insects have a system of tubes, called tracheae, instead of lungs. These tracheae penetrate right through the insect's body. Air enters the tracheae by pores called spiracles. These spiracles are found on each side of the insect's abdomen. Each segment of the abdomen has a pair of spiracles.
The air passes into the tracheae which branch into smaller and smaller tubes, in a similar way to the bronchioles in our lungs. The tracheae finally come to an end in the tissues which are respiring. Here in the tissues the oxygen is taken from the air in the tracheae. At the same time carbon dioxide enters the tracheae so that it can be expelled from the body.
The process of breathing in insects is slow. Large, active insects, however, may pump their abdomens to help quicken the movement of these gases.
It is interesting to note that the tracheae are supported by strengthening rings, just like the tracheae in our breathing system. The strengthening rings are made of chitin, which is the same material as we find on the outside of the insect.
Scientists think that it is the breathing system of insects which keeps them so small. The insect with the largest body is the Goliath Beetle, which lives in the tropics. This beetle is only 15cm long. It is true that some butterflies and moths have wings which make them bigger, but the wings of an insect do not need to be supplied with oxygen. Most insects are less than one centimetre long.
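A back-of-the-envelope estimate (not from the original text) suggests why diffusion through tracheae stops working at larger body sizes. Assuming simple one-dimensional diffusion and a diffusion coefficient for oxygen in air of about D = 0.2 cm²/s (an assumed textbook value), the characteristic time for oxygen to travel a distance x grows with the square of that distance:

```latex
% Characteristic diffusion time, with D ~ 0.2 cm^2/s for O2 in air (assumed)
t \approx \frac{x^2}{2D}:\qquad
x = 1\,\text{mm} \;\Rightarrow\; t \approx 0.025\,\text{s},\qquad
x = 10\,\text{cm} \;\Rightarrow\; t \approx 250\,\text{s}
```

Delivery within a fraction of a second is fine for a centimetre-scale insect, but a wait of several minutes could not keep the tissues of a much larger animal supplied.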
The spiracles on the sides of the insect's body can be closed by valves. It is difficult to drown an insect because, when it is under water, it closes the valves. This prevents water entering the tracheae and, with air in its body, the insect will tend to float.
How insects ventilate
Small insects and insects which are not very active are able to rely on enough oxygen reaching their tissues through their spiracles. Active insects, however, need to speed up the movement of oxygen to their tissues. They pump their abdomens in and out, using muscles. This helps fresh air to enter the tracheae. A locust tends to move its abdomen lengthwise, making it longer or shorter. A honey bee uses a width-wise movement, making the abdomen wider or narrower. |
Have you ever asked yourself how migrating birds are able to fly over such long distances in order to spend part of the year in a more favourable environment? You have probably seen flocks of birds flying away in a V-shaped formation, which certainly has the advantage of reduced drag per individual, but this alone does not enable them to travel thousands of kilometres.
For the theme session on FLYING, this article will first cover the general energetic differences between types of locomotion (flying, running, swimming). Second, the power demand of flight in migrating birds will be compared to that in non-migrating birds, including their physiological adaptations.
What is more efficient – flying, running or swimming?
The speed at which an animal can travel and the energy cost of travelling depend on its mode of locomotion and its size. Studies of the metabolic power required for different types of locomotion showed that for animals of similar size, powered flying is the most demanding but at the same time the fastest type of locomotion, compared to swimming and running. The power required for flapping flight exceeds that for running, and the power for walking is higher than for swimming [1, 2]. An important fact is that for each mode of locomotion, larger animals commonly travel faster than small ones, and they generally use less power per unit body mass.
With regard to speed and power requirements, large fliers travel faster than small ones in flapping flight, but soaring is slower than flapping flight. Flapping requires much more metabolic power than soaring, swimming or running, and the power required per unit body mass is higher for small fliers than for large ones. These facts reveal why migrations of 5,000 km or more seem to be beneficial only for birds and large marine animals (e.g. seals, whales), whose physiological properties are presented hereafter.
During migrations, most bird species fly below an altitude of 1 km above ground level. However, many species have been observed to fly well above this altitude, either due to favourable wind conditions or to pass large mountain barriers. One such species is the bar-headed goose (A. indicus), which is known to migrate over the Himalaya at altitudes of around 5,500 m. At such heights, air density, partial pressure of oxygen and air temperature are very low. At sea level, air contains 21% oxygen; at the highest flight altitudes, the effective (sea-level-equivalent) oxygen content drops below 10%. This means that there is much less oxygen available, and lift generation becomes more difficult. Although drag is proportionately reduced, the resulting mechanical cost of flying increases by 50%.
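To illustrate the shortfall, here is a minimal sketch (not from the article) using the standard isothermal barometric formula; the scale height of about 8.5 km and the 21% sea-level oxygen fraction are assumed inputs:

```python
import math

SCALE_HEIGHT_KM = 8.5  # approximate atmospheric scale height (assumed value)
O2_FRACTION = 0.21     # oxygen fraction of dry air, roughly constant with altitude

def effective_o2_percent(altitude_km: float) -> float:
    """Sea-level-equivalent oxygen percentage at a given altitude.

    The oxygen *fraction* of air stays near 21%, but total pressure falls
    roughly exponentially with height, so the oxygen actually available
    to a flying bird falls with it.
    """
    return 100 * O2_FRACTION * math.exp(-altitude_km / SCALE_HEIGHT_KM)

for alt_km in (0.0, 5.5, 9.0):
    print(f"{alt_km:4.1f} km: effective O2 ~ {effective_o2_percent(alt_km):4.1f}%")
```

With these assumptions, effective oxygen falls from 21% at sea level to about 11% at 5,500 m and about 7% at 9,000 m, consistent with the "less than 10%" cited above for the highest flight altitudes.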
Such species benefit from an important adaptation - they possess haemoglobin (Hb) with a higher affinity for oxygen than that of other birds. When exposed to lack of oxygen (hypoxia), lowland birds need to produce more red blood cells to keep the net amount of oxygen at a constant level. This makes the blood denser and consequently increases the work load of the heart. Thanks to their special Hb, bar-headed geese, on the other hand, are able to maintain the oxygen content of their arterial blood even when exposed to hypoxia, without producing more blood cells.
Another adaptation is hyperventilation, meaning increased ventilation across the gas exchange surfaces of the lungs. This is made possible by the large surface area of the gas exchanger, the great difference in partial pressure across it, and the small diffusion distance through it. The maximum amount of oxygen that can be carried is, as already mentioned, related to the concentration of Hb.
Capillary density in the flight muscles also plays an important role for migrating birds. Oxygen extraction by the muscles depends to a large extent on the density of blood capillaries in those muscles: the greater the capillary density, the greater the surface area for gas exchange. There are big differences in capillary density between migrating and non-migrating birds. An extreme example is the flight muscle of rufous hummingbirds (which migrate over distances of 3,500 km), with a capillary density of 7,000/mm2, compared to 1,600/mm2 in a non-migrating bird [7, 8].
Last but not least, the metabolic rate is a crucial parameter among those described above. Migrating birds use fatty acids as their main source of energy, so these have to be transported at a sufficient rate to meet the high demand of the flight muscles. Fatty acids from lipids carry much more energy than proteins or glycogen in the body, which is why migrating birds gain fat that is then used to fuel the journey ahead.
Powered flight energy demands
Gliding and soaring flight are energetically the least expensive, whereas any form of flying that involves an explosive burst of activity (e.g. take-off) is energetically the most expensive. Continuous flapping has an intermediate rate of energy expenditure. The metabolic rate per unit mass is greater in smaller birds than in larger ones. For example, the flight muscles of hummingbirds are known as some of the most aerobic tissues among vertebrates, with an estimated maximum rate of oxygen consumption of 2 ml/g min, compared with 0.87 ml/g min for the flight muscles of tufted ducks. This is actually the reason why hummingbirds must spend so much time foraging for food, compared to an albatross, which can spend days soaring over the open sea. Hummingbirds can "afford" such a high metabolic rate because they inhabit areas with plenty of food sources close to each other and can quickly replenish energy losses. And it is exactly the metabolic rate that limits how far different bird species can travel while foraging for food.
To conclude, flying is an expensive mode of transport from an energetic point of view, but at the same time the fastest mode of locomotion. Many morphological and behavioural features of birds have evolved to reduce the energy cost of flight; some examples are wing shape, the use of thermals, orographic lift and V-formation flight. Even though migrating birds normally travel along the most energy-efficient routes available, their physiological adaptations still play the most important role in providing a continual supply of oxygen and metabolic substrates to the muscles.
- Alexander R. McN. 1998. When is migration worthwhile for animals that walk, swim or fly? J. Avian Biol. 29, 387-394.
- Alexander R. McN. 2002. The merits and implications of travel by swimming, flight and running for animals of different sizes. Integrative and Comparative Biology 42, 1060-1064.
- Baker R. R. 1978 The evolutionary ecology of animal migration. Hodder and Stoughton, London.
- Matthieu O, Krauer R, Hoppeler H, Gehr P, Lindstedt SL, Alexander RMcN, Taylor CR, Weibel ER. 1981. Design of the mammalian respiratory system. VII. Scaling mitochondrial volume in skeletal muscle to body mass. Respir. Physiol. 44, 113–128.
- Butler PJ. 2010. High fliers: The physiology of bar-headed geese. Comp. Biochem. Physiol. A. 156, 325–329.
- Butler P.J. 2016. The physiological basis of bird flight. Philos. Trans. R. Soc. Lond. B Biol. Sci. 371.
- Lundgren BO, Kiessling K-H. 1988. Comparative aspects of fibre types, areas and capillary supply in the pectoralis muscle of some passerine birds with different migratory behaviour. J. Comp. Physiol. B 138, 165–173.
- Mathieu-Costello O, Suarez RK, Hochachka PW. 1992. Capillary-to-fiber geometry and mitochondrial density in hummingbird flight muscle. Respir. Physiol. 89, 113–132.
- Bishop CM, Butler PJ. 2015. Flight. In Sturkie's avian physiology (ed. Scanes CG), pp. 919–974, 6th edn. New York, NY: Academic Press. |
Johannes Gensfleisch zur Laden zum Gutenberg (c. 1398 – c. February 3, 1468) was a German goldsmith and inventor who achieved fame for his invention of the technology of printing with movable type around 1447. This technology included a type metal alloy and oil-based inks, a mould for casting type accurately, and a new kind of printing press based on the presses used in wine-making in the Rhineland.
The exact origin of Gutenberg's first press is apparently unknown, and several authors cite his earliest presses as adaptations of heavier binding presses which were already in use. Tradition credits him with inventing movable type in Europe—an improvement on the block printing already in use there. By combining these elements into a production system, he allowed for the rapid printing of written materials, and an information explosion in Renaissance Europe. Printing with cast metal movable type had already been achieved under Chae Yun-eui of the Goryeo Dynasty (the Korean kingdom from which the name "Korea" derives) in 1234, over two hundred years ahead of Gutenberg's feat, and the first movable type of all was invented by the Chinese Bi Sheng between 1041 and 1048 C.E.
Gutenberg has often been credited as the most influential and important person of all time, with his invention occupying similar status. The A&E Network ranked him as such on their "People of the Millennium" countdown in 1999. Certainly, his invention earns him a place among the relatively small number of women and men who changed history. Books no longer had to be hand written. Instead of only a privileged few having access to libraries, themselves scarce, any literate person could now seek to acquire knowledge. Without the printing press, universal education, or education on a much larger scale, would not have developed.
As more people gained an education, more accounts of events became available, filtered through different perspectives, thus changing historical reconstruction itself. The Protestant Reformation stood on Gutenberg's shoulders, since it largely depended on the availability of the Bible in vernacular languages, so that people could read the scriptures for themselves and thus critique official interpretations that empowered the clergy and disempowered the laity. Gutenberg's famous Bible was the Latin Vulgate, but it was not long before vernacular editions followed, such as the first German Bible in 1466, the first Dutch Bible (1477) and the first English New Testament, translated by William Tyndale (1526). Martin Luther's German Bible appeared in 1534.
As a result of Gutenberg's invention, the world became much more interconnected, ideals about human dignity and rights and universal values spread enabling, in the twentieth century, the development of a global structure such as the United Nations and of humanitarian and international law.
Gutenberg was born in the German city of Mainz, the son of a merchant named Friele Gensfleisch zur Laden, who adopted the surname "zum Gutenberg" after the name of the neighborhood where the family had moved. Gutenberg was born into a wealthy patrician family that traced its lineage back to the thirteenth century. His parents were goldsmiths and coin minters.
Block printing, whereby individual sheets of paper were pressed against wooden blocks with the text and illustrations carved into them, was first recorded in Chinese history, and was in use in East Asia long before Gutenberg. By the twelfth and thirteenth centuries, many Chinese libraries contained tens of thousands of printed books. The Chinese and Koreans knew about movable metal type at the time, but because of the complexity of movable type printing it was not used as widely there as it would be in Renaissance Europe.
It is not clear whether Gutenberg knew of these existing techniques, or invented them independently, although the former is considered unlikely because of the substantial differences in technique. Some also claim that the Dutchman Laurens Janszoon Coster was the first European to invent movable type.
Gutenberg certainly introduced efficient methods into book production, leading to a boom in the production of texts in Europe—in large part, owing to the popularity of the Gutenberg Bibles, the first mass-produced work, starting on February 23, 1455. Even so, Gutenberg was a poor businessman, and made little money from his printing system.
Gutenberg began experimenting with metal typography after he had moved from his native town of Mainz to Strasbourg (then in Germany, now France) around 1430. Knowing that wood-block type involved a great deal of time and expense to reproduce, because it had to be hand-carved, Gutenberg concluded that metal type could be reproduced much more quickly once a single mould had been fashioned.
In 2004, Italian professor Bruno Fabbiani (from Turin Polytechnic) claimed that examination of the 42-line Bible revealed an overlapping of letters, suggesting that Gutenberg did not in fact use movable type (individual cast characters) but rather used whole plates made from a system somewhat like our modern typewriters, whereby the letters were stamped into the plate and printed much as a woodcut would have been. Fabbiani devised 30 experiments to demonstrate his claim at the Festival of Science in Genoa, but the theory inspired a great deal of consternation amongst scholars, who boycotted the session and dismissed it as a stunt. James Clough later published an article in the Italian magazine Graphicus, which refuted the claims made by Fabbiani.
In 1455, Gutenberg demonstrated the power of the printing press by selling copies of a two-volume Bible (Biblia Sacra) for 300 florins each. This was the equivalent of approximately three years' wages for an average clerk, but it was significantly cheaper than a handwritten Bible that could take a single monk 20 years to transcribe.
The one copy of the Biblia Sacra dated 1455 went to Paris, and was dated by the binder. As of 2003, the Gutenberg Bible census includes 11 complete copies on vellum, one copy of the New Testament only on vellum, 48 substantially complete integral copies on paper, another divided copy on paper, and an illuminated page (the Bagford fragment). The Gutenberg Bibles surviving today are sometimes called the oldest surviving books printed with movable type, although the oldest such surviving book is actually the Jikji, published in Korea in 1377. The Gutenberg Bible is still notable, however, in that the print technology that produced it marks the beginning of a cultural revolution unlike anything that followed the development of print culture in Asia.
The Gutenberg Bible lacks many print features that modern readers are accustomed to, such as pagination, word spacing, indentations, and paragraph breaks.
The Bible was not Gutenberg's first printed work, for he produced approximately two dozen editions of Ars Minor, a portion of Aelius Donatus’s schoolbook on Latin grammar. The first edition is believed to have been printed between 1451 and 1452.
Johann Fust extended Gutenberg a loan of eight hundred guilders, at the beginning of their partnership around 1450, to allow him to carry out his work. The money Gutenberg earned at the fair was not enough to repay Fust for his investments, which eventually exceeded two thousand guilders. Fust sued, and the court's ruling not only effectively bankrupted Gutenberg, but awarded control of the type used in his Bible, plus much of the printing equipment, to Fust. So, while Gutenberg ran a print shop until shortly before his death in Mainz in 1468, Fust became the first printer to publish a book with his name on it.
Gutenberg was subsidized by the archbishop of Mainz until his death. Gutenberg was known to spend what little money he had on alcohol, so the archbishop arranged for him to be paid in food and lodging, instead of coin.
Although Gutenberg was financially unsuccessful in his lifetime, his invention spread quickly, and news and books began to travel across Europe much faster than before. It fed the growing Renaissance, and since it greatly facilitated scientific publishing, it was a major catalyst for the later scientific revolution. The ability to produce many copies of a new book, and the appearance of Greek and Latin works in printed form, were major factors in the Reformation. Literacy also increased dramatically as a result. Gutenberg's inventions are sometimes considered the turning point from the Middle Ages to the Early Modern Period.
The term incunabulum refers to any western printed book produced between the first work of Gutenberg and the end of the year 1500.
There are many statues of Gutenberg in Germany; one of the more famous being a work by Bertel Thorvaldsen, in Mainz, home to the Gutenberg Museum.
The Johannes Gutenberg-University in Mainz is named in his honor.
The Gutenberg Galaxy and Project Gutenberg also commemorate Gutenberg's name.
In Citizen Science, students play a teen concerned about a local lake threatened by eutrophication, a condition in which excess nutrients cause a body of water to become starved of oxygen. Set in Madison, WI, the game brings players back in time to the 1960s, where they must uncover and solve specific pollution problems faced by the lake. Returning to the present and finding that the lake continues to face eutrophication threats, players take corrective steps to solve more recent problems. Citizen Science serves several key pedagogical functions: First, through continual evidence-based argumentation, players master the art of crafting reasoned arguments as a way to persuade others using established facts. Second, through models and simulations of scientific data collection, players learn about the factors that influence lake health, including exotic species introduction; manure and fertilizer runoff; fishing regulation; and wetland restoration. Finally, players gain an understanding of how citizens can make positive changes to their communities. Note that while the full game may take up to 90 minutes to play, a number of teaching goals can be met in considerably less time.
In this lesson plan, which can be adapted for students in grades 5-12, students use a free online science and English Language Arts (ELA) game to explore personification in writing.
In this multi-day lesson plan, which is adaptable for grades 5-12, students use BrainPOP resources to practice crafting reasoned arguments and explore the effect of humans on the environment. Through an online game, students learn about the causes of water pollution in a lake and pose a question about the local water supply to community residents. Students then compile the residents' opinions during game play and compose a persuasive letter to their congressional representative asking for his or her support in improving water conditions.
Bias is hard to define, but the following framework for understanding what bias is may help someone decide whether they are a target or witness of bias. Acts of bias include:
- Any physical, spoken or written act of abuse
- Making remarks of a personally destructive nature toward any other person
- Any restriction or prevention of free movement of an individual
Bias occurs whether the act is:
- Intentional or unintentional
- Directed toward an individual or a group
A bias-based incident is one which has a negative effect on an individual or group and is based on or motivated by bias against race, color, creed, nationality, sexual orientation, gender, physical or mental disability, political or religious ideology, age, or any other distinguishing characteristic.
The incident is experienced as hurtful by one or many and may involve harassment, the creation of a hostile environment, property damage, verbal threats of violence, or physical violence. The incident may or may not involve breaches of University policies or state or federal law.
Bias Incident vs. Hate Crime
The above description may make someone think of the term “Hate Crime”. However, these two terms are not the same. What distinguishes the two is the legality of the action.
For example, degrading someone because they are a person of color is an act of bias; whether it also constitutes a hate crime depends on whether a law has been broken.
If someone is harassed or teased because of a disability, but not to the point of violating a law, it is a bias incident. As soon as the action crosses the line of violating a law, it may be defined as a hate crime.
The Bias Response Team understands that distinguishing whether something is a bias incident or a hate crime may be difficult. If you are unsure, or just want to report the incident to be safe, feel free to make a report about what happened, and we can help you from there. Our staff can examine a situation and help a reporter decide what to do next.
Until recently, techniques for measuring birefringence had hardly altered, with observations being made using standard polarising microscopes with white light and crossed polars, which can lead to coloured interference patterns in the sample image. To avoid this, the new Metripol technique uses monochromatic light together with a rotating polariser and a circular analyser to carry out both qualitative and quantitative measurements of transparent microscopic specimens. It has useful applications in many areas, including the analysis of strain in industrial diamonds, phase transitions in crystals, and the analysis of collagen and hydroxyapatite distribution in bone.
What is Birefringence?
The phenomenon of birefringence (also known as double refraction) is a result of optical anisotropy, and can be described as the difference between two refractive indices for a given light beam, depending on the orientation of the polarisation of the incoming light. Birefringence is displayed by a broad range of materials, including all crystals (except those of cubic symmetry), liquid crystals, and glass and plastics subjected to mechanical strain. It also occurs in materials whose underlying crystal structure has its atoms distributed in such a way as to form an anisotropic structure, causing optical anisotropy.
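For reference (a standard optics relation, not stated in the original article): the quantity a birefringence measurement actually probes is the optical retardance δ, the phase difference accumulated between the two polarisation components after passing through a sample of thickness L at wavelength λ:

δ = (2π/λ) · Δn · L, where Δn = n2 - n1

so a measured retardance can be converted to the birefringence Δn whenever the sample thickness is known.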
How Is Birefringence Observed in a Normal Polarising Microscope?
The birefringence colours seen in a normal polarising microscope arise from a combination of three effects - the intensity distribution of the light through the sample, the magnitude of the birefringence, and the orientation of the indicatrix (φ). To obtain an exact image of the birefringence and determine its value, it is necessary to rotate the sample into a number of positions and to cancel the sample's birefringence by inserting different compensating crystalline plates. For complex samples with numerous orientations this approach is very time consuming.
The Metripol Technique
Work carried out by Professor Mike Glazer's group at Oxford University, UK, into the relationship between crystal structure and physical properties led to an entirely different approach to birefringence imaging. The research group wanted to create a system that could image, analyse and, most importantly, quantify birefringence. A collaboration with Oxford Cryosystems, UK, resulted in Metripol emerging as a commercial analytical technique.
How Does the Metripol Technique Work?
The Metripol microscope produces quantitative birefringence data for samples, in the form of images, within a matter of seconds. The system incorporates a modified polarising microscope, a rotating polariser, and a wideband adjustable wave plate and polariser that together act as a circular analyser. An integrated software suite is used to control the measurement process and to analyse the resulting images.
Monochromatic light is used and images are collected using a CCD camera at different angles as the polariser rotates, which makes it possible to separate out the birefringence, orientation and transparency that are normally superimposed in conventional polarising microscopy. The software then generates three separate images, in which false colour is used to denote these components separately. The software also allows histograms and profiles through the images to be produced. Average values can be reported from selected regions of the images, so that the progress of a particular quantity at any place in the image can be studied - as a function of temperature, for example.
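A rough idea of how such software might separate the three components can be sketched in code. The sketch below is ours, not Metripol's actual implementation: it assumes the rotating-polariser intensity relation I(α) = (I0/2)·[1 + sin 2(α - φ)·sin δ] published by Glazer and co-workers, and the function name, array layout, and the small guard constant are illustrative.

```python
import numpy as np

def fit_metripol(frames, angles):
    """Recover per-pixel transparency I0, |sin(delta)| and orientation phi
    from a stack of images taken at several polariser angles (in radians).

    Assumes I(alpha) = (I0 / 2) * (1 + sin(2 * (alpha - phi)) * sin(delta)).
    """
    frames = np.asarray(frames, dtype=float)   # shape (n_angles, height, width)
    a = np.asarray(angles, dtype=float)
    # Expanding sin(2(alpha - phi)) turns the model into a linear fit:
    #   I = c0 + c1*sin(2a) + c2*cos(2a), with c0 = I0/2,
    #   c1 = (I0/2) sin(delta) cos(2phi), c2 = -(I0/2) sin(delta) sin(2phi)
    basis = np.stack([np.ones_like(a), np.sin(2 * a), np.cos(2 * a)], axis=1)
    n, h, w = frames.shape
    coeffs, *_ = np.linalg.lstsq(basis, frames.reshape(n, -1), rcond=None)
    c0, c1, c2 = (c.reshape(h, w) for c in coeffs)
    i0 = 2 * c0                                            # transparency image
    sin_delta = np.hypot(c1, c2) / np.maximum(c0, 1e-12)   # |sin(delta)| image
    phi = 0.5 * np.arctan2(-c2, c1)                        # orientation image
    return i0, sin_delta, phi
```

The three returned arrays correspond to the three separate false-colour images the article describes: transparency, birefringence magnitude, and indicatrix orientation.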
The Multifile Facility
A multifile facility allows the creation of a sequence of images that can be scanned through, in order to plot sequential values on a graph or to make an AVI video file of a process. Multifile creation can be automatically synchronised with an external source, such as a heating or cooling stage.
The Metripol system has already been used for a broad range of applications, from analysing collagen distribution in bone to studying optical properties of minerals, illustrating its versatility.
Strain Analysis in Diamonds
As a cubic crystal, diamond is not normally birefringent, and is optically isotropic. However, strain in diamond introduced by impurities, restrictions during growth or applied stress, makes the structure anisotropic and results in birefringence.
Sources of Strain in Industrial Diamonds
Producing diamond industrially, by high-pressure high-temperature (HPHT) synthesis or chemical vapour deposition (CVD), can introduce mechanical strain, which can significantly alter the diamond's physical properties, causing twinning, crystal defects and weakness. However, in some situations the introduction of strain can actually strengthen the diamond, making it less susceptible to cleavage. Here, the Metripol technique has been used to generate both qualitative and quantitative information on different forms of diamond, including artificially grown diamond with nitrogen impurities, thin-film (CVD) diamond and diamond gemstone.
Analysing Strain in Industrial Diamonds
In the artificially-grown diamond shown in figure 1a, nitrogen has accumulated during the growth of the diamond. The diamond is grown as (111) plates (right, lower left, upper left) from a tiny seed (in the centre). At the sector interfaces there is a slight crystal-lattice misalignment, which allows for the incorporation of nitrogen. The misalignment, together with the presence of nitrogen, causes a build-up of strain leading to optical anisotropy. In the transmission image, the contrast in the centre is caused by absorption of light by the nitrogen. The birefringence image, figure 1b, indicates the magnitude of the strain in different parts of the diamond, with the purple colour representing the lowest values. The colours in the orientation image, figure 1c, indicate the orientation of one of the indicatrix axes and show that this axis points towards the centre of the diamond at all places.
Figure 1. (a) Transmission image showing absorption at the centre caused by nitrogen. (b) Birefringence caused by strain associated with growth boundaries. (c) Orientation image, with strain orientations being marked by short lines in addition to the colour scale.
Phase Transition Studies
The study of phase transitions is an important field of materials science, not only in its own right, but also for its importance in industrial applications. The Metripol was developed specifically for the purpose of following phase transitions, and as a result is perfectly suited to this application. Using the Metripol, crystallographic twins can be identified and, in some cases, the symmetry relating them can be determined together with the orientation of domain walls separating them.
When a crystal is accurately cut and placed on a heating stage it is often possible to identify which phase the crystal is in, simply by determining the number of different twin domains visible in the crystal and the orientation of the indicatrix in each domain. Phase transitions can be accurately determined by the appearance of twin boundaries or from analysis of the change in birefringence as a function of varying temperature.
Phase Transition in Sodium Bismuth Titanate
The phase transition studies shown here were carried out on Na0.5Bi0.5TiO3 (NBT) crystals, figure 2. Figure 2a illustrates the pure cubic phase I at 590°C, showing low birefringence with random orientation. At 548°C, figure 2b, the tetragonal phase II begins to appear, especially in the orientation image. Figure 2c shows the pure tetragonal phase II - notice the different birefringence in the central part of the cross feature. At 196°C, figure 2d, the rhombohedral-tetragonal coexistence region starts to form, and the rhombohedral phase III starts to appear in the orientation image. The end of rhombohedral-tetragonal coexistence occurs at 151°C, figure 2e. At this stage, the tetragonal phase II has nearly disappeared, and is only distinguishable in the orientation image. At 31°C, figure 2f, the pure rhombohedral phase III is visible, while residual signs of the tetragonal twin structure still remain in the orientation image.
Figure 2. Phase transition studies using Metripol were carried out on Na0.5Bi0.5TiO3 (NBT) crystals.
Other Areas of Use for the Metripol System
The Metripol system has been adopted by research groups in fields as diverse as diamond research, the study of polymorphic pharmaceuticals, engineering and biology. One of the most exciting developments has been in protein crystallography, in which research in collaboration with Oxford and Warwick universities is starting to suggest that the Metripol may be a useful tool in determining crystal quality and early crystalline growth. Other exciting applications exist in the earth sciences as an aid to identifying minerals in rock sections.
Qualitative and quantitative imaging of birefringence in transparent materials can add extremely valuable data to a broad range of studies in the fields of drug discovery, materials science and biological research. The Metripol imaging system offers an elegant solution to the problem of gathering such data.
How to Find the Angle Between Two Vectors
This itself isn't calculus, but vectors are an important part of multivariable calculus, and I couldn't think of a better place to post this, so it's here.
Before we begin, we must define what a dot product is. The dot product is the sum of the products of the corresponding components. For example, the dot product of <3, 4> and <5, 6> (written as <3,4>·<5,6>) is 3*5 + 4*6 = 39. When the dot product is 0, the vectors are perpendicular. We'll show this later.
The dot product has several useful properties, including the distributive property a·(b + c) = a·b + a·c, the commutative property (a·b = b·a), and the fact that ||a||^2 (the magnitude of a, squared) = a·a.
Picture the vectors a and b drawn from a common point, with the vector a - b joining their tips. The three vectors form a triangle whose side lengths are the magnitudes ||a||, ||b||, and ||a - b||. By the law of cosines, with θ the angle between a and b:

||a - b||^2 = ||a||^2 + ||b||^2 - 2||a|| ||b|| cos(θ)

Expanding the left side using the properties above, ||a - b||^2 = (a - b)·(a - b) = ||a||^2 - 2(a·b) + ||b||^2. Comparing the two expressions and cancelling gives a·b = ||a|| ||b|| cos(θ).
Once you have that equation, you're able to solve any problem regarding the angle between two vectors by noting the relationship between the dot product and cosine. Finding the angle between two vectors only involves taking the dot product, dividing by the product of the magnitudes, and taking the arccos of the result. This also proves the earlier claim: if a·b = 0 (and neither vector is zero), then cos(θ) = 0, so θ = 90° and the vectors are perpendicular.
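To make the recipe concrete, here is a minimal sketch in Python (the function name and the final print line are just for illustration):

```python
import math

def angle_between(a, b):
    """Angle in degrees between two vectors given as sequences of numbers."""
    dot = sum(x * y for x, y in zip(a, b))       # sum of products of components
    mag_a = math.sqrt(sum(x * x for x in a))     # ||a||
    mag_b = math.sqrt(sum(y * y for y in b))     # ||b||
    # Clamp to [-1, 1] to guard against floating-point round-off
    cos_theta = max(-1.0, min(1.0, dot / (mag_a * mag_b)))
    return math.degrees(math.acos(cos_theta))

print(angle_between([3, 4], [5, 6]))  # roughly 2.94 degrees
```

Running it on the <3, 4> and <5, 6> example above gives about 2.94°, the small angle you would expect between two vectors pointing in nearly the same direction.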
Drawing conclusions practice with 24 total questions! 12 task cards with questions that ask students to draw conclusions (make inferences) and support the conclusion with text details and evidence. Wording is directly aligned with the STAAR Reading Test and other state tests. The 12 task cards are all challenging expository texts, each with two questions.
Drawing conclusions can be very difficult to teach. It is a type of inference that requires readers to comprehend the details, select and connect the details, then decide how those connections support a common idea.
Cards 1-10 are included at a discounted price in the 40 QUESTION Bundle Set #1 Conclusions and Main Ideas
Drawing Conclusions Task Cards are great for stations, group practice, small group, warm-ups, or individual practice!
Texts are written above grade, and questions are written for 3rd, 4th, and 5th grade students.
ATOS Level: 6.7
Question Stems include:
• What conclusion can the reader draw?
• What conclusion is best supported by the detail in the paragraph?
• Which sentence from the paragraph supports the conclusion that Steve was successful in school?
• The details in the paragraph help the reader conclude –
• The sentence [quote from paragraph] helps the reader conclude -
Perfect for 3rd grade, 4th grade, and below-level 5th grade.
Other drawing conclusions task cards:
Drawing Conclusions Task Cards #1 STAAR Prep
Drawing Conclusions Task Cards #2 Harriet Tubman
Drawing Conclusions Task Cards #3 Informational Texts
Drawing Conclusions Task Cards #4 Properties of Matter
Drawing Conclusions Task Cards #5 Matter
Drawing Conclusions Task Cards #6 Energy
Drawing Conclusions Task Cards #7 Rosa Parks
Drawing Conclusions Task Cards #8 Martin Luther King Jr.
Drawing Conclusions Task Cards #9 Jackie Robinson
Other Great Reading Resources:
Summarizing Task Cards
Summarizing Lesson Bundle
Cause and Effect Task Cards
Main Ideas and Details Lesson Bundle
There is no best way to assess. Assessment has to be developed and tailored by each academic program to address the program’s own unique goals.
Assessment can use a variety of methods to measure student performance but some traditional direct ways include:
- Multiple choice exams that can determine whether students show mastery of important content
- Papers that can be evaluated by faculty using criteria or rubrics to determine if students demonstrate certain critical thinking or writing skills or mastery of content
- Portfolios of a student's best performance on exams, papers, art projects or music compositions (for example), across a program's curriculum. Portfolios are evaluated by faculty using set criteria, developed by the faculty themselves, to determine whether students are demonstrating skills that meet a program's goals.
- Standardized assessment instruments that focus on the skills or content students are expected to master. An ETS Major Field Test is one example. Advantages of standardized exams are that they usually have national norms and are valued within their respective disciplines.
Assessment can be embedded within a classroom. A task that can be evaluated for addressing a program goal (an assignment, paper, or exam, for example) can be used. This type of assessment makes the process easy and flows with the normal routine of the curriculum.
Direct versus indirect assessment. The methods described above are direct methods of assessing student learning. That is, student performance is directly measured using techniques that evaluate student work or knowledge. This is the most direct way to determine whether students are walking away with the skills and knowledge intended by a program.
Sometimes indirect measures of student performance are used, such as a questionnaire that asks for students' subjective perceptions of their own performance. This may provide useful information for a program, but it may or may not correspond with actual student performance.
While all outdoor perennial plants--those that live three or more growing seasons--live year round, many are deciduous, losing their foliage and going dormant over the winter. Reduced to their "skeletons," they don't look alive to many, so when people talk about plants that live during all the seasons, they usually mean evergreens. Evergreens include both needled plants and broad-leaved plants. These can provide flowers, fruit, color and screening through the bare winter season, regardless of your planting zone.
Rhododendron is a large genus, usually evergreen, with plants that can live in many climates. Azaleas are rhododendrons. The Lapland rhododendron can live in USDA hardiness zone 1, which has winter temperatures below -50 degrees Fahrenheit. (Zones change by 10-degree increments.) Besides producing flowers in spring and summer, the rhododendron has an interesting reaction to cold, drooping and curling its leaves as if turning inward.
Junipers, which come in different shapes, sizes and colors, can be used as screens, including during winter. Junipers tend to be shrubby; for instance, the common juniper is only 5 to 10 feet high. Other junipers are trees, though, like the eastern red cedar, which grows up to 50 feet high and 20 feet wide. Varieties of junipers can live from zone 3 all the way down to the warm lands of Florida.
Mountain laurels can live in the northern climes of zone 4 down to the milder weather of zone 9. They are broadleaf evergreens that grow 3 to 5 feet with dark green leaves that are maroon by the time the spring melts the snow off them. They like soil on the acidic side (below pH 7). Some varieties produce flowers in spring.
Hollies, which grow in zones 5 to 9, can be used as specimen plants, which are meant to draw attention to themselves, as foundation plants and as hedges and screens. The berries appear in fall and winter, feeding wildlife while providing color. Some hollies grow up to 40 feet; those for hedges, about 4 feet. Berries are not always red. Hollies can fruit in yellow, orange and even black.
Wax myrtle, which is also spelled as one word, is also called Southern bayberry or candleberry, because the leaves have an aroma used to give scent to candles. The trees, which grow from zone 4 to 10, produce clusters of small fruit eaten by wildlife through the winter. Some birds can even eat the wax on the leaves.
Camellias only grow in southern zones, where they bloom during the winter. The flowers bloom up to 4 inches wide in white, pink, yellow and red. The leaves are glossy and thick, growing on shrubs or small trees. Camellias are very pest-prone, but growers love their beautiful flowers.
When Hurricane Harvey swept through our area with high winds and extreme amounts of rainfall, many of us just waited until it was over. However, the danger persisted, with tornado warnings and watches continuing for more than a day after each storm surge.
Tornadoes often accompany hurricanes that linger over land for long periods of time. Hurricane Harvey in Texas and Hurricane Irma in Florida are examples of this type of hurricane. Tornadoes are created by winds blowing in opposing directions: cold air drops, warm air rises, and together they form a spiral.
Tropical storms such as hurricanes create unstable vertical and horizontal winds. As the wind direction changes and meets the upward force, a small storm cell is created. These storm cells don't usually contain lightning or thunder, so they slip under the radar. As a result, they can be dangerous, quickly transitioning from strong winds to a tornado as they meet the wind resistance of land.
Tornadoes may have higher wind speeds than hurricanes, but their time of impact is shorter. More than a thousand tornadoes touch down in the United States each year.
Be careful if heavy winds and rains persist, and be sure to tune in to your local weather station and check social media for updates.
Ebola Research Papers
Ebola research papers discuss the emergence of this disease in the world today. Paper Masters has researchers who write on Ebola and other medical health topics.
Ebola is a very destructive disease caused by an ebolavirus. There are myriad symptoms that ultimately lead to the death of the carrier if decisive action is not taken. Fruit bats are believed to be the natural carriers of the virus, able to pass it on to humans and other mammals without being affected themselves. Outbreaks have been numerous and frequent. "Since the Ebola Virus was first identified almost 40 years ago, the World Health Organization reports 24 previous outbreaks of Ebola and its subtypes". Ebola is concentrated in Africa, and the first recorded death from the disease happened there.
Typically, when a person is infected by the Ebola virus, it takes up to a couple of weeks before symptoms start showing themselves. Treating Ebola at this early stage is imperative but difficult. In Emergency: Mass Casualty: Ebola, Anna Easter explains why: "Early diagnosis is difficult because the presenting symptoms include those evident in differential diagnoses such as influenza and sepsis. Differential diagnosis is time consuming and extremely costly".
The beginning symptoms of Ebola include:
- Sore throat
- Muscle pain
Vomiting and diarrhea follow a short while after, and the person's liver and kidneys begin to shut down. If the person dies, this typically happens within one to two weeks of the first symptoms showing themselves.
Ebola can be transmitted through the exchange of any bodily fluids with anyone who is already infected. Contact in general is also not recommended, especially with a recently deceased patient. Luckily, it is fairly difficult to catch the disease as long as the proper steps are taken to prevent transmission. Historically, those most susceptible are caregivers treating people who already have Ebola. Preventing transmission can be as easy as washing regularly with soap and water after coming into contact with someone affected.
One of the issues with Ebola is that there is currently no specific treatment for the disease; however, there are measures that can be taken to increase the chances of survival. These measures include re-hydration as well as regular treatment of the various symptoms. In the developed world, more intensive care is available, which helps account for the lack of outbreaks in these areas. These treatments include maintaining blood volume and salt balance, as well as treating infections as soon as they start.
Africa is typically where the disease occurs in outbreaks, with the largest outbreak in history going on currently. In 1976, the first recorded outbreak occurred in Sudan. The disease has since spread out from there, although the continent of Africa remains the area with the greatest concentration of occurrences.
Traditional qamutik (sled), Cape Dorset
Regions with significant populations: Alaska, Greenland, Northwest Territories, Nunatsiavut, Nunavik, Nunavut, Russian Far East.
Inuit (plural; the singular Inuk means "man" or "person") is a general term for a group of culturally similar indigenous peoples inhabiting the Arctic regions of Canada, Denmark, Russia and the United States. The Inuit languages are grouped under the Eskimo-Aleut language family.
The Inuit live throughout most of the Canadian Arctic and subarctic: in the territory of Nunavut ("our land"); the northern third of Quebec, in an area called Nunavik ("place to live"); the coastal region of Labrador, in an area called Nunatsiavut ("our beautiful land"); in various parts of the Northwest Territories, mainly on the coast of the Arctic Ocean and formerly in the Yukon. Collectively these areas are known as Inuit Nunangat. In the US, Alaskan Inupiat live on the North Slope of Alaska and Siberian Coast, Little Diomede Island and Big Diomede Island. Greenland's Kalaallit are citizens of Denmark. The Yupik live in both Alaska and the Russian Far East.
In Alaska, the term Eskimo is commonly used, because it includes both Yupik and Inupiat, while Inuit is not accepted as a collective term or even specifically used for Inupiat. No universal replacement term for Eskimo, inclusive of all Inuit and Yupik people, is accepted across the geographical area inhabited by the Inuit and Yupik peoples. In Canada and Greenland, the term Eskimo has fallen out of favour, as it is considered pejorative by the natives and has been replaced by the term Inuit. In Canada, the Constitution Act of 1982, sections 25 and 35 recognised the Inuit as a distinctive group of Canadian aboriginals, who are neither First Nations nor Métis.
The Inuit are the descendants of what anthropologists call the Thule culture, who emerged from western Alaska around 1000 AD and spread eastwards across the Arctic, displacing the related Dorset culture (in Inuktitut, the Tuniit), the last major Paleo-Eskimo culture. Inuit legends speak of the Tuniit as "giants", people who were taller and stronger than the Inuit, although they were sometimes also called "dwarfs". Researchers believe that the Dorset culture lacked dogs, larger weapons and other technologies that gave the expanding Inuit society an advantage. By 1300, the Inuit had settled in west Greenland, and they moved into east Greenland over the following century.
Faced with population pressures from the Thule and other surrounding groups, such as the Algonquian and Siouan peoples, the Tuniit gradually receded, and were thought to have become completely extinct by about 1400 AD. However, in the mid-1950s researcher Henry B. Collins determined that, based on the ruins found at Native Point, the Sadlermiut were likely the last remnants of the Dorset culture. The Sadlermiut population survived up until the winter of 1902-03, when exposure to new diseases brought by contact with Europeans led to their extinction. More recent mitochondrial DNA research has supported the continuity between the Sadlermiut and the Tuniit, and also provided evidence that a population displacement did not occur within the Aleutian Islands between the Dorset and Thule transition. In contrast to other Tuniit populations, the Aleut and Sadlermiut benefited from both geographical isolation and the ability to adopt certain Thule technologies.
In Canada and Greenland, the Inuit circulated almost exclusively north of the "Arctic tree line", the de facto southern border of Inuit society; to the south, Native American cultures were well established. The culture and technology of Inuit society that served them so well in the Arctic were not suited to subarctic regions, so they did not displace their southern neighbours.
The Inuit had trade relations with more southern cultures; boundary disputes were common and gave rise to aggressive actions. Warfare, in general, was not uncommon among those Inuit groups with sufficient population density. Inuit, such as the Nunatamiut (Uummarmiut) who inhabited the Mackenzie River delta area, often engaged in warfare, whereas the Central Arctic Inuit lacked the population density to do so.
The first European contacts were with the Vikings who settled in Greenland and explored the eastern Canadian coast. Their Norse literature noted skrælingar, most likely an undifferentiated label for all the native peoples of the Americas whom the Norse contacted: Tuniit, Inuit and Beothuks alike.
Sometime in the 13th century, the Thule culture began arriving in the area from what is now Canada. Norse accounts are scant; however, Norse-made items have been found at Inuit campsites in Greenland. It is unclear whether they got there as the result of trade or plunder. One old account speaks of "small people" with whom the Norsemen fought. Ívar Bárðarson's 14th-century account noted that the western settlement, one of the two Norse settlements, had been taken over by the skrælings. The reason the Norse settlements failed is unclear, but the last record of them is from 1408, roughly the same period as the earliest Inuit settlements in east Greenland.
After about 1350, the climate grew colder during the period known as the Little Ice Age and the Inuit were forced to abandon hunting and whaling sites in the high Arctic. Bowhead whaling disappeared in Canada and Greenland and the Inuit had to subsist on a much poorer diet and lost access to essential raw materials for the tools and architecture derived from whaling. Alaskan natives were, however, able to continue their whaling activities.
The changing climate forced the Inuit to work their way south, pressuring them into marginal niches along the edges of the tree line which Native Americans had not occupied, or where they were weak enough for coexistence. It is difficult for researchers to define when the Inuit stopped territorial expansion, but there is evidence that they were still moving into new territory in southern Labrador when they first began to interact with colonial North Americans in the 17th century.
While Inuit describes all of the Eskimo peoples in Canada and Greenland, the same is not true in Alaska and Siberia.
The Inuit Circumpolar Council, a United Nations-recognised non-governmental organization (NGO), defines its constituency to include Canada's Inuit and Inuvialuit, Greenland's Kalaallit Inuit, Alaska's Inupiat and Yup'ik people, and the Siberian Yupik people of Russia. But the Yupik of Alaska and Siberia do not consider themselves Inuit, and ethnographers agree they are a distinct people. They prefer to be called Yup'ik, Yupiit, or Eskimo. The Yupik languages are linguistically distinct from the Inuit languages.
Canada's Constitution Act, 1982 recognised the Inuit as Aboriginal peoples in Canada, which also include First Nations and Métis peoples. The Inuit should not be confused with the Innu, a distinct First Nations people who live in northeastern Quebec and Labrador.
The Inuit speak chiefly one of the traditional Inuit languages or dialects, sometimes grouped under the term Inuktitut, but they may also speak the predominant language of the country in which they reside. Inuktitut is mainly spoken in Nunavut and, as the Greenlandic language, in some parts of Greenland.
Some of the Inuit dialects were recorded in the 18th century. Until the latter half of the 20th century, most Inuit were not able to read and write in their own language. In the 1760s, Moravian missionaries arrived in Greenland, where they contributed to the development of a written system of language called Qaliujaaqpait, based on the Latin alphabet. The missionaries later brought this system to Labrador, from which it eventually spread as far as Alaska.
The Inuktitut syllabary used in Canada is based on the Cree syllabary devised by the missionary James Evans and was developed by Edmund Peck. The present form of the syllabary for Canadian Inuktitut was adopted by the Inuit Cultural Institute in Canada in the 1970s. The Inuit in Alaska, the Inuvialuit, Inuvialuktun speakers, and Inuit in Greenland and Labrador use the Roman alphabet, although it has been adapted for their use in different ways.
Though conventionally called a syllabary, the writing system has been classified by some observers as an abugida, since syllables starting with the same consonant have related glyphs rather than unrelated ones. All of the characters needed for the Inuktitut syllabary are available in the Unicode character repertoire. (See Canadian Aboriginal syllabics character table.) The territorial government of Nunavut has developed a TrueType font called Pigiarniq for computer displays, designed by Vancouver-based Tiro Typeworks.
The Inuit language is written in several different ways, depending on the dialect and region, but also on historical and political factors. In Greenland during the 1760s, Moravian missionaries intending to introduce Inuit to Christianity through the Bible contributed to the development of an Inuktitut writing system that was based on Roman orthography. When they travelled to Labrador in the 1800s, they brought the written Inuktitut with them. The Roman alphabet-writing scheme is distinguished by its inclusion of the letter kra. The Alaskan Yupik and Inupiat, and the Siberian Yupik also adopted the system of Roman orthography. In addition, the Alaskan peoples developed their own system of hieroglyphics.
Eastern Canadian Inuit were the last to adopt the written word when, in the 1860s, missionaries imported the written system Qaniujaaqpait they had developed in their efforts to convert the Cree to Christianity. The last Inuit introduced to missionaries and writing were the Netsilik Inuit in Kugaaruk and north Baffin Island. The Netsilik adopted Qaniujaaqpait by the 1920s.
The "Greenlandic" system has been substantially reformed in recent years, making Labrador writing unique to Nunatsiavummiutut. Most Inuktitut in Nunavut and Nunavik is written using a scheme called Qaniujaaqpait, or Inuktitut syllabics, based on Canadian Aboriginal syllabics. The western part of Nunavut and the Northwest Territories use a Roman orthography (alphabet scheme) usually identified as Inuinnaqtun or Qaliujaaqpait, reflecting the predispositions of the missionaries who reached this area in the late 19th century and early 20th century
The Inuit have traditionally been hunters and fishers. They still hunt whales, walrus, caribou, seal, polar bears, muskoxen, birds, and at times other less commonly eaten animals such as the Arctic fox. The typical Inuit diet is high in protein and very high in fat: in their traditional diets, Inuit consumed an average of 75% of their daily energy intake from fat. While it is not possible to cultivate plants for food in the Arctic, the Inuit have traditionally gathered those that are naturally available. Grasses, tubers, roots, stems, berries, and seaweed (kuanniq, or edible seaweed) were collected and preserved depending on the season and the location.
In the 1920s anthropologist Vilhjalmur Stefansson lived with and studied a group of Inuit. The study focused on the fact that the Inuit's extremely low-carbohydrate diet had no adverse effects on their health, nor indeed, Stefansson's own health. Stefansson (1946) also observed that the Inuit were able to get the necessary vitamins they needed from their traditional winter diet, which did not contain any plant matter. In particular, he found that adequate vitamin C could be obtained from items in their traditional diet of raw meat such as Ringed Seal liver and whale skin (muktuk). While there was considerable scepticism when he reported these findings, they have been borne out in recent studies.
The natives hunted sea animals from single-passenger, covered seal-skin boats called qajaq (Inuktitut syllabics: ᖃᔭᖅ), which were extraordinarily buoyant and could easily be righted by a seated person, even if completely overturned. Because of this property, the design was copied by Europeans and Americans, who still produce them under the Inuit name kayak.
Inuit also made umiaq ("woman's boat"), larger open boats made of wood frames covered with animal skins, for transporting people, goods and dogs. They were 6–12 m (20–39 ft) long and had a flat bottom so that the boats could come close to shore. In the winter, Inuit would also hunt sea mammals by patiently watching an aglu (breathing hole) in the ice and waiting for the air-breathing seals to use them. This technique is also used by the polar bear, who hunts by seeking out holes in the ice and waiting nearby.
On land, the Inuit used dog sleds (qamutik) for transportation. The husky dog breed comes from Inuit breeding of dogs and wolves for transportation. A team of dogs in either a tandem/side-by-side or fan formation would pull a sled made of wood, animal bones, or the baleen from a whale's mouth and even frozen fish, over the snow and ice. The Inuit used stars to navigate at sea and landmarks to navigate on land; they possessed a comprehensive native system of toponymy. Where natural landmarks were insufficient, the Inuit would erect an inukshuk.
Dogs played an integral role in the annual routine of the Inuit. During the summer they became pack animals, sometimes dragging up to 20 kg (44 lb) of baggage, and in the winter they pulled the sled. Year-round they assisted with hunting by sniffing out seals' holes and pestering polar bears. They also protected the Inuit villages by barking at bears and strangers. The Inuit generally favoured, and tried to breed, the most striking and handsome of dogs, especially ones with bright eyes and a healthy coat. Common husky dog breeds used by the Inuit were the Canadian Eskimo Dog (Qimmiq, Inuktitut for "dog"), the official animal of Nunavut, the Greenland Dog, the Siberian Husky and the Alaskan Malamute. The Inuit would perform rituals over the newborn pup to give it favourable qualities; the legs were pulled to make them grow strong, and its nose was poked with a pin to enhance the sense of smell.
Inuit industry relied almost exclusively on animal hides, driftwood, and bones, although some tools were also made out of worked stones, particularly the readily worked soapstone. Walrus ivory was a particularly essential material, used to make knives. Art played a big part in Inuit society and continues to do so today. Small sculptures of animals and human figures, usually depicting everyday activities such as hunting and whaling, were carved from ivory and bone. In modern times prints and figurative works carved in relatively soft stone such as soapstone, serpentinite, or argillite have also become popular.
Inuit made clothes and footwear from animal skins, sewn together using needles made from animal bones and threads made from other animal products, such as sinew. The anorak (parka) is made in a similar fashion by Arctic peoples from Europe through Asia and the Americas, including the Inuit. Among some Inuit, the hood of an amauti (women's parka, plural amautiit) was traditionally made extra large, to allow the mother to carry a baby against her back and protect it from the harsh wind. Styles vary from region to region, from the shape of the hood to the length of the tails. Boots (kamik or mukluk) could be made of caribou or sealskin, and designs varied for men and women.
During the winter, certain Inuit lived in a temporary shelter made from snow called an iglu, and during the few months of the year when temperatures were above freezing, they lived in tents made of animal skins supported by a frame of bones. Some, such as the Siglit, used driftwood, while others built sod houses.
The division of labour in traditional Inuit society had a strong gender component, but it was not absolute. The men were traditionally hunters and fishermen and the women took care of the children, cleaned the home, sewed, processed food, and cooked. However, there are numerous examples of women who hunted, out of necessity or as a personal choice. At the same time men, who could be away from camp for several days at a time, would be expected to know how to sew and cook.
The marital customs among the Inuit were not strictly monogamous: many Inuit relationships were implicitly or explicitly sexual. Open marriages, polygamy, divorce, and remarriage were known. Among some Inuit groups, if there were children, divorce required the approval of the community and particularly the agreement of the elders. Marriages were often arranged, sometimes in infancy, and occasionally forced on the couple by the community.
Marriage was common for women at puberty and for men when they became productive hunters. Family structure was flexible: a household might consist of a man and his wife (or wives) and children; it might include his parents or his wife's parents as well as adopted children; it might be a larger formation of several siblings with their parents, wives and children; or even more than one family sharing dwellings and resources. Every household had its head, an elder or a particularly respected man.
There was also a larger notion of community as, generally, several families shared a place where they wintered. Goods were shared within a household, and also, to a significant extent, within a whole community.
The Inuit were hunter-gatherers, and have been referred to as nomadic. It is mistakenly believed that they had no government and no conception of either private property or ownership of land, but they actually had very sophisticated concepts of private property and land ownership. Because these were so radically different from the concepts held by Europeans, the latter failed to recognise or document them until well into the 20th century.
One of the customs following the birth of an infant was for an Angakkuq (shaman) to place a tiny ivory carving of a whale into the baby's mouth, in hopes this would make the child good at hunting. Loud singing and drumming were also customary after a birth.
Virtually all Inuit cultures have oral traditions of raids by other indigenous peoples, including fellow Inuit, and of taking vengeance on them in return, such as the Bloody Falls Massacre. Western observers often regarded these tales as not entirely accurate historical accounts, but more as self-serving myths. However, evidence shows that Inuit cultures had quite accurate methods of teaching historical accounts to each new generation.
The historic accounts of violence against outsiders does make clear that there was a history of hostile contact within the Inuit cultures and with other cultures. It also makes it clear that Inuit nations existed through history, as well as confederations of such nations. The known confederations were usually formed to defend against a more prosperous, and thus stronger, nation. Alternately, people who lived in less productive geographical areas tended to be less warlike, as they had to spend more time producing food.
Justice within Inuit culture was moderated by the form of governance that gave significant power to the elders. As in most cultures around the world, justice could be harsh and often included capital punishment for serious crimes against the community or the individual. During raids against other peoples, the Inuit, like their non-Inuit neighbours, tended to be merciless.
"A pervasive European myth about Inuit is that they killed elderly and unproductive people.", but this is not generally true. In a culture with an oral history, elders are the keepers of communal knowledge, effectively the community library. Because they are of extreme value as the repository of knowledge, there are cultural taboos against sacrificing elders.
In Antoon A. Leenaars' book Suicide in Canada, he states that "Rasmussen found that the death of elders by suicide was a commonplace among the Iglulik Inuit." Rasmussen heard of many old men and women who had hanged themselves. By ensuring they died a violent death, Inuit elders purified their souls for the journey to the afterworld.
According to Franz Boas, suicide was "...not of rare occurrence..." and was generally accomplished through hanging. Writing of the Labrador Inuit, Hawkes (1916) was considerably more explicit on the subject of suicide and the burden of the elderly:
Aged people who have outlived their usefulness and whose life is a burden both to themselves and their relatives are put to death by stabbing or strangulation. This is customarily done at the request of the individual concerned, but not always so. Aged people who are a hindrance on the trail are abandoned.—Antoon A. Leenaars, Suicide in Canada
People seeking assistance in their suicide made three consecutive requests to relatives for help. Family members would attempt to dissuade the individual at each suggestion, but with the third request by a person, assistance became obligatory. In some cases, a suicide was a publicly acknowledged and attended event. Once the suicide had been agreed to, the victim would dress him or herself as the dead are clothed, with clothing turned inside out. The death occurred at a specific place, where the material possessions of deceased people were brought to be destroyed.
When food is not sufficient, the elderly are the least likely to survive. In the extreme case of famine, the Inuit fully understood that, if there was to be any hope of obtaining more food, a hunter was necessarily the one to feed on whatever food was left. However, a common response to desperate conditions and the threat of starvation was infanticide. A mother abandoned an infant in hopes that someone less desperate might find and adopt the child before the cold or animals killed it. The belief that the Inuit regularly resorted to infanticide may be due in part to studies done by Asen Balikci, Milton Freeman and David Riches among the Netsilik, along with the trial of Kikkik.
Anthropologists believed that Inuit cultures routinely killed children born with physical defects because of the demands of the extreme climate. These views were changed by late 20th century discoveries of burials at an archaeological site. Between 1982 and 1994, a storm with high winds caused ocean waves to erode part of the bluffs near Barrow, Alaska, and a body was discovered to have been washed out of the mud. Unfortunately the storm claimed the body, which was not recovered. But examination of the eroded bank indicated that an ancient house, perhaps with other remains, was likely to be claimed by the next storm. The site, known as the "Ukkuqsi archaeological site", was excavated. Several frozen bodies (now known as the "frozen family") were recovered, autopsies were performed, and they were re-interred as the first burials in the then-new Imaiqsaun Cemetery south of Barrow. Years later another body was washed out of the bluff. It was a female child, approximately 9 years old, who had clearly been born with a congenital birth defect. This child had never been able to walk, but must have been cared for by family throughout her life.
During the 19th century, the Western Arctic suffered a population decline of close to 90%, resulting from exposure to new diseases, including tuberculosis, measles, influenza, and smallpox. Autopsies near Greenland reveal that, more commonly, pneumonia, kidney diseases, trichinosis, malnutrition, and degenerative disorders may have contributed to mass deaths among different Inuit tribes. The Inuit believed that the causes of the disease were of a spiritual origin.
Inuit traditional laws are anthropologically different from Western law concepts. 'Customary law' was thought non-existent in Inuit society before the introduction of the Canadian legal system. Hoebel, in 1954, concluded that only 'rudimentary law' existed amongst the Inuit. Indeed, prior to about 1970, it is impossible to find even one reference to a Western observer who was aware that any form of governance existed among any Inuit. However, there was a set way of doing things that had to be followed:
If an individual's actions went against the tirigusuusiit, maligait or piqujait, the angakkuq (shaman) might have to intervene, lest the consequences be dire to the individual or the community.
We are told today that Inuit never had laws or "maligait". Why? They say because they are not written on paper. When I think of paper, I think you can tear it up, and the laws are gone. The laws of the Inuit are not on paper.—Mariano Aupilaarjuk, Rankin Inlet, Nunavut, Perspectives on Traditional Law
The Inuit lived in an environment that inspired a mythology filled with adventure tales of whale and walrus hunts. Long winter months of waiting for caribou herds or sitting near breathing holes hunting seals gave birth to stories of the mysterious and sudden appearance of ghosts and fantastic creatures. Some Inuit looked into the aurora borealis, or northern lights, to find images of their family and friends dancing in the next life. However, some Inuit believed that the lights were more sinister: if you whistled at them, they would come down and cut off your head. This tale is still told to children today. For others they were invisible giants, the souls of animals, a guide to hunting, and a spirit for the angakkuq to help with healing. The Inuit relied upon the angakkuq (shaman) for spiritual interpretation. The nearest thing to a central deity was the Old Woman (Sedna), who lived beneath the sea. The waters, a central food source, were believed to contain great gods.
The Inuit practised a form of shamanism based on animist principles. They believed that all things had a form of spirit, including humans, and that to some extent these spirits could be influenced by a pantheon of supernatural entities that could be appeased when one required some animal or inanimate thing to act in a certain way. The angakkuq of a community of Inuit was not the leader, but rather a sort of healer and psychotherapist, who tended wounds and offered advice, as well as invoking the spirits to assist people in their lives. His or her role was to see, interpret and exhort the subtle and unseen. Angakkuit were not trained; they were held to be born with the ability and recognised by the community as they approached adulthood.
Inuit religion was closely tied to a system of rituals integrated into the daily life of the people. These rituals were simple but held to be necessary. According to a customary Inuit saying,
The great peril of our existence lies in the fact that our diet consists entirely of souls.
Because all things, including animals, were believed to have souls like those of humans, any hunt that failed to show appropriate respect and customary supplication would only give the liberated spirits cause to avenge themselves.
The harshness and randomness of life in the Arctic ensured that Inuit lived with concern for the uncontrollable, where a streak of bad luck could destroy an entire community. To offend a spirit was to risk its interference with an already marginal existence. The Inuit understood that they had to work in harmony with supernatural powers to provide the necessities of day-to-day life. Before the 1940s, Inuit had minimal contact with Europeans, who passed through on their way to hunt whales or trade furs but seldom had any interest in settling down on the frozen land of the Arctic. So the Inuit had the place to themselves. They moved between summer and winter camps to always be living where there were animals to hunt.
But that changed. As World War II ended and the Cold War began, the Arctic became a place where countries that did not get along were close to each other. The Arctic had always been seen as inaccessible, but the invention of aircraft made it easier for non-Arctic dwellers to get there. Permanent settlements were created around new airbases and radar stations built to monitor rival nations, and schools and health care centres were built in these permanent settlements. In many places, Inuit children were required to attend schools that emphasised non-native traditions. With better health care, the Inuit population grew too large to sustain itself solely by hunting. Many Inuit from smaller camps moved into permanent settlements because there was access to jobs and food. In many areas Inuit were required to live in towns by the 1960s.
The lives of Paleo-Eskimos of the far north were largely unaffected by the arrival of visiting Norsemen except for mutual trade. Labrador Inuit have had the longest continuous contact with Europeans. After the disappearance of the Norse colonies in Greenland, the Inuit had no contact with Europeans for at least a century. By the mid 16th century, Basque fishermen were already working the Labrador coast and had established whaling stations on land, such as the one that has been excavated at Red Bay. The Inuit appear not to have interfered with their operations, but they raided the stations in winter for tools and items made of worked iron, which they adapted to their own needs.
Martin Frobisher's 1576 search for the Northwest Passage was the first well-documented post-Columbian contact between Europeans and Inuit. Frobisher's expedition landed in Frobisher Bay, Baffin Island, not far from the town now called Iqaluit which was long known as Frobisher Bay. Frobisher encountered Inuit on Resolution Island where five sailors left the ship, under orders from Frobisher, and became part of Inuit mythology. The homesick sailors, tired of their adventure, attempted to leave in a small vessel and vanished. Frobisher brought an unwilling Inuk to England, doubtless the first Inuk ever to visit Europe. The Inuit oral tradition, in contrast, recounts the natives helping Frobisher's crewmen, whom they believed had been abandoned.
The semi-nomadic eco-centred Inuit were fishers and hunters harvesting lakes, seas, ice platforms and tundra. While there are some allegations that Inuit were hostile to early French and English explorers, fishers and whalers, more recent research suggests that the early relations with whaling stations along the Labrador coast and later James Bay were based on a mutual interest in trade. In the final years of the 18th century, the Moravian Church began missionary activities in Labrador, supported by the British, who were tired of the raids on their whaling stations. The Moravian missionaries could easily provide the Inuit with the iron and basic materials they had been stealing from whaling outposts, materials whose real cost to Europeans was almost nothing but whose value to the Inuit was enormous. From then on, contacts in Labrador were far more peaceful.
The European arrival tremendously damaged the Inuit way of life, causing mass death through new diseases introduced by whalers and explorers, and enormous social disruptions caused by the distorting effect of Europeans' material wealth. Nonetheless, Inuit society in the higher latitudes had largely remained in isolation during the 19th century. The Hudson's Bay Company opened trading posts such as Great Whale River (1820), today the site of the twin villages of Whapmagoostui and Kuujjuarapik, where whale products of the commercial whale hunt were processed and furs traded. The British Naval Expedition of 1821-23, led by Admiral William Edward Parry, which twice over-wintered in Foxe Basin, provided the first informed, sympathetic and well-documented account of the economic, social and religious life of the Inuit. Parry stayed in what is now Igloolik over the second winter. Parry's writings, with pen and ink illustrations of Inuit everyday life, and those of George Francis Lyon, both published in 1824, were widely read. Captain George Comer's Inuit wife Shoofly, known for her sewing skills and elegant attire, was influential in convincing him to acquire more sewing accessories and beads for trade with Inuit.
During the early 20th century a few traders and missionaries circulated among the more accessible bands, and after 1904 they were accompanied by a handful of Royal Canadian Mounted Police (RCMP). Unlike most Aboriginal peoples in Canada, however, the lands occupied by the Inuit were of little interest to European settlers — to the southerners, the homeland of the Inuit was a hostile hinterland. Southerners enjoyed lucrative careers as bureaucrats and service providers to the north, but very few ever chose to visit there. Canada, with its more hospitable lands largely settled, began to take a greater interest in its more peripheral territories, especially the fur and mineral-rich hinterlands. By the late 1920s, there were no longer any Inuit who had not been contacted by traders, missionaries or government agents. In 1939, the Supreme Court of Canada found, in a decision known as Re Eskimos, that the Inuit should be considered Indians and were thus under the jurisdiction of the federal government.
Native customs were worn down by the actions of the RCMP, who enforced Canadian criminal law on Inuit, such as Kikkik, who often could not understand what they had done wrong, and by missionaries who preached a moral code very different from the one they were used to. Many of the Inuit were systematically converted to Christianity in the 19th and 20th centuries, through rituals like the Siqqitiq.
World War II and the Cold War made Arctic Canada strategically important for the first time and, thanks to the development of modern aircraft, accessible year-round. The construction of air bases and the Distant Early Warning Line in the 1940s and 50s brought more intensive contacts with European society, particularly in the form of public education, which instilled and enforced foreign values disdainful of the traditional structure of Inuit society.
In the 1950s the High Arctic relocation was undertaken by the Government of Canada for several reasons. These included protecting Canada's sovereignty in the Arctic, alleviating hunger (as the area then occupied had been over-hunted), and attempting to solve the "Eskimo problem", meaning the assimilation and end of the Inuit culture. One of the more notable relocations was undertaken in 1953, when 17 families were moved from Port Harrison (now Inukjuak, Quebec) to Resolute and Grise Fiord. They were dropped off in early September, when winter had already arrived. The land they were sent to was very different from that in the Inukjuak area; it was barren, with only a couple of months when the temperature rose above freezing and several months of polar night. The families were told by the RCMP they would be able to return within two years if conditions were not right. However, two years later more families were relocated to the High Arctic, and it was thirty years before they were able to visit Inukjuak.
By 1953, Canada's prime minister Louis St. Laurent publicly admitted, "Apparently we have administered the vast territories of the north in an almost continuing absence of mind." The government began to establish about forty permanent administrative centres to provide education, health and economic development services. Inuit from hundreds of smaller camps scattered across the north began to congregate in these hamlets.
Regular visits from doctors and access to modern medical care raised the birth rate and decreased the death rate, causing an enormous natural increase. Before long, the Inuit population was beyond the carrying capacity of the ecosystem (that which hunting and fishing could support). By the mid-1960s, encouraged first by missionaries, then by the prospect of paid jobs and government services, and finally forced by hunger and required by police, all Canadian Inuit lived year-round in permanent settlements. The nomadic migrations that were the central feature of Arctic life had for the most part disappeared. The Inuit, a once self-sufficient people in an extremely harsh environment, were, in the span of perhaps two generations, transformed into a small, impoverished minority, lacking skills or resources to sell to the larger economy, but increasingly dependent on it for survival.
Although anthropologists like Diamond Jenness (1964) were quick to predict that Inuit culture was facing extinction, Inuit political activism was already emerging.
In the 1960s, the Canadian government funded the establishment of secular, government-operated high schools in the Northwest Territories (including what is now Nunavut) and Inuit areas in Quebec and Labrador along with the residential school system. The Inuit population was not large enough to support a full high school in every community, so this meant only a few schools were built, and students from across the territories were boarded there. These schools, in Aklavik, Iqaluit, Yellowknife, Inuvik and Kuujjuaq, brought together young Inuit from across the Arctic in one place for the first time, and exposed them to the rhetoric of civil and human rights that prevailed in Canada in the 1960s. This was a real wake-up call for the Inuit, and it stimulated the emergence of a new generation of young Inuit activists in the late 1960s who came forward and pushed for respect for the Inuit and their territories.
The Inuit began to emerge as a political force in the late 1960s and early 1970s, shortly after the first graduates returned home. They formed new politically active associations in the early 1970s, starting with the Inuit Tapirisat of Canada (Inuit Brotherhood and today known as Inuit Tapiriit Kanatami), an outgrowth of the Indian and Eskimo Association of the 60s, in 1971, and more region specific organisations shortly afterwards, including the Committee for the Original People's Entitlement (representing the Inuvialuit), the Northern Quebec Inuit Association (Makivik Corporation) and the Labrador Inuit Association. These activist movements began to change the direction of Inuit society in 1975 with the James Bay and Northern Quebec Agreement. This comprehensive land claims settlement for Quebec Inuit, along with a large cash settlement and substantial administrative autonomy in the new region of Nunavik, set the precedent for the settlements to follow. The Labrador Inuit submitted their land claim in 1977, although they had to wait until 2005 to have a signed land settlement establishing Nunatsiavut.
In 1982, the Tunngavik Federation of Nunavut (TFN) was incorporated, in order to take over negotiations for land claims on behalf of the Inuit living in the eastern Northwest Territories, that would later become Nunavut, from the Inuit Tapiriit Kanatami, which became a joint association of the Inuit of Quebec, Labrador and the Northwest Territories.
The Inuvialuit are western Canadian Inuit who remained in the Northwest Territories when Nunavut split off. They live primarily in the Mackenzie River delta, on Banks Island, and parts of Victoria Island in the Northwest Territories. They are officially represented by the Inuvialuit Regional Corporation and, in 1984, received a comprehensive land claims settlement, the first in Northern Canada, with the signing of the Inuvialuit Final Agreement.
The TFN worked for ten years and, in September 1992, came to a final agreement with the Government of Canada. This agreement called for the separation of the Northwest Territories into an eastern territory whose aboriginal population would be predominately Inuit, the future Nunavut, and a rump Northwest Territories in the west. It was the largest land claims agreement in Canadian history. In November 1992, the Nunavut Final Agreement was approved by nearly 85% of the Inuit of what would become Nunavut. As the final step in this long process, the Nunavut Land Claims Agreement was signed on May 25, 1993, in Iqaluit by Prime Minister Brian Mulroney and by Paul Quassa, the president of Nunavut Tunngavik Incorporated, which replaced the TFN with the ratification of the Nunavut Final Agreement. The Canadian Parliament passed the supporting legislation in June of the same year, enabling the 1999 establishment of Nunavut as a territorial entity.
With the establishment of Nunatsiavut in 2005, all the traditional Inuit lands in Canada are now covered by some sort of land claims agreement providing for regional autonomy.
Inuit communities in Canada continue to suffer under crushing unemployment, overcrowded housing, substance abuse, crime, violence and suicide. The problems Inuit face in the 21st century should not be underestimated. However, many Inuit are upbeat about the future. Arguably, their situation is better than it has been since the 14th century. Inuit arts, carving, print making, textiles and throat singing are very popular, not only in Canada but globally, and Inuit artists are widely known. Indeed, Canada has, metaphorically, adopted some of the Inuit culture as a sort of national identity, using Inuit symbols like the inukshuk in unlikely places, such as at the 2010 Winter Olympics in Vancouver. Respected art galleries display Inuit art, the largest collection of which is at the Winnipeg Art Gallery. Some Inuit languages, such as Inuktitut, appear to have a more secure future in Quebec and Nunavut. There are a surprising number of Inuit, even those who now live in urban centres such as Ottawa, Montreal and Winnipeg, who have experienced living on the land in the traditional lifestyle. People such as Legislative Assembly of Nunavut member Levinia Brown and former Commissioner of Nunavut and the NWT Helen Maksagak were born and lived the early part of their lives "on the land". Inuit culture is alive and vibrant today in spite of the negative impacts of recent history.
On October 30, 2008, Leona Aglukkaq was appointed as Minister of Health, "[becoming] the first Inuk to hold a senior cabinet position, although she is not the first Inuk to be in cabinet altogether." Jack Anawak and Nancy Karetak-Lindell were both parliamentary secretaries respectively from 1993-96 and in 2003.
The Thule people arrived in Greenland in the 13th century. There they encountered the Norsemen, who had established colonies there since the late 10th century, as well as a later wave of the Dorset people. Because most of Greenland is covered in ice, the Greenland Inuit (or Kalaallit) live only in coastal settlements, particularly along the northern polar coast, the eastern Amassalik coast and the central coasts of western Greenland. Denmark ended Greenland's colonial status in 1953 and granted home rule in 1979; in 2008, a self-government referendum passed with 75% approval. Although a part of the Kingdom of Denmark, Greenland, known as Kalaallit Nunaat, maintains much autonomy today. Of a population of 55,000, 80% of Greenlanders identify as Inuit. Their economy is based on fishing and shrimping.
The Inuit of Alaska are the Inupiat (from inuit, 'people', and piaq/piat, 'real', i.e. 'real people'), who live in the Northwest Arctic Borough, the North Slope Borough and the Bering Straits region. Barrow, the northernmost city in the United States, is in the Inupiat region. Their language is Iñupiaq (the singular form of Inupiat).
In recent years, circumpolar cultural and political groups like the Inuit Circumpolar Council have come together to promote the Inuit and other northern people and to fight against ecological problems, such as climate change, which disproportionately affects the Inuit population. Global warming may cause Arctic mammal populations to decline. However, a study by Mitch Taylor, polar bear biologist with the Government of Nunavut, shows that, contrary to the dire predictions, eleven of thirteen polar bear populations have remained stable or increased. The study also shows that the number of polar bears in western Hudson Bay is decreasing due to the effect of global warming, while the decrease of the population in Baffin Bay is directly associated with the over hunting of the bears by Greenland hunters.
Well-known Inuit politicians include Premier of Nunavut, Eva Aariak, Nancy Karetak-Lindell, former MP for the riding of Nunavut, and Leona Aglukkaq, current MP and Federal Health Minister since 2008.
An important biennial event, the Arctic Winter Games, is held in communities across the northern regions of the world, featuring traditional Inuit and northern sports as part of the events. A cultural event is also held. The games were first held in 1970, and while usually rotated among Alaska, Yukon and the Northwest Territories, they have also been held in Schefferville, Quebec in 1976, in Slave Lake, Alberta, and in a joint Iqaluit, Nunavut-Nuuk, Greenland staging in 2002. In other sporting events, Jordin Tootoo became the first Inuk to play in the National Hockey League in the 2003-04 season, playing for the Nashville Predators.
Although Inuit life has changed significantly over the past century, many traditions continue. Inuit Qaujimajatuqangit, or traditional knowledge, such as storytelling, mythology, music and dancing remain important parts of the culture. Family and community are very important. The Inuktitut language is still spoken in many areas of the Arctic and is common on radio and in television programming.
Visual and performing arts are strong. In 2002 the first feature film in Inuktitut, Atanarjuat, was released worldwide to great critical and popular acclaim. It was directed by Zacharias Kunuk, and written, filmed, produced and acted almost entirely by the Inuit of Igloolik. In 2009, Le Voyage D'Inuk, a Greenlandic-language feature film directed by Mike Magidson and co-written by Magidson and French film producer Jean-Michel Huctin, was released. One of the most famous Inuit artists is Pitseolak Ashoona. Susan Aglukark is a popular singer. Mitiarjuk Attasie Nappaaluk works at preserving Inuktitut and has written the first novel published in that language. In 2006, Cape Dorset was hailed as Canada's most artistic city, with 23% of the labour force employed in the arts. Inuit art such as soapstone carving is one of Nunavut's most important industries.
Recently, there has been an identity struggle among the younger generations of Inuit between their traditional heritage and the modern society into which their cultures have been forced to assimilate in order to maintain a livelihood. With current dependence on modern society for necessities (including government jobs, food, aid, medicine, etc.), the Inuit have had much interaction with and exposure to societal norms outside their previous cultural boundaries. The stressors of this identity crisis among teenagers have led to disturbingly high rates of suicide.
A series of authors has focused upon the increasing myopia in the youngest generations of Inuit. Myopia was almost unknown prior to the Inuit adoption of western culture. This phenomenon is also seen in other cultures (for example, Vanuatu). Principal theories are the change to a less nutritious western style diet, and extended education.
Inuit (plural Inuit)
The northern indigenous peoples of North America used to be called Eskimo, but the term has fallen out of use and is considered offensive in Canada and Greenland, because it was once thought to stem from a pejorative (see Eskimo). Inuit is the accepted term in Canada, and has gained some currency in the United States. However, Eskimo continues to be the prevalent name in Alaska for both the Inuit Inupiat people and the non-Inuit Yupik.
Many dictionaries do not recognize the plural form Inuits. Inuit is usually used as an ethnonym with no distinct singular form (like Chinese). The need to treat Inuit as a singular is obviated by wider recognition of its etymological singular form Inuk in recent times.
The Inuit language comprises a continuum of locally-intelligible dialects, with their own variations of the name for themselves and their own language. A number of these names have official status.
They are sometimes called Eskimos, a word which likely comes from the Algonquin language and may mean "eater of raw meat". Some Inuit do not like to be called Eskimos, but some do. Inuit in Canada and Greenland prefer the name Inuit because it is a name they made; Inuit means more than one person, and one person is an "Inuk". The native Greenlanders are related to the Inuit. The language of the Inuit is Inuktitut, and it is one of the official languages of Nunavut and of the Northwest Territories in Canada.
Eskimos did not have any wood to burn for fires, and that is why they ate raw meat. The little bit of wood they rarely found was too important to burn; it had to be used for other things. The only fires they had were blubber lamps. These burned low and gave off only a little heat, and it took a long time to cook a meal over one. So, the Eskimos often ate their meat without cooking it.
Eskimos were also nomads, and they did not domesticate any animals except for dogs, which they used to pull their sleds and help with the hunting. They were hunter-gatherers, living off whatever they found or killed. They were very careful to make good use of every part of the animals they killed.
Eskimos lived in tents made of animal skins during the summer. In the winter they lived in sod houses and igloos. They could build an igloo out of snow bricks in just a couple of hours. Snow is full of air spaces, which helps it hold in warmth. With just a blubber lamp for heat, an igloo could be warmer than the air outside. The Eskimos made very clever things from the bones, antlers, and wood they had. They invented the harpoon, which was used to hunt seals and whales. They built boats from wood or bone covered with animal skins. They invented the kayak for one man to use for hunting on the ocean and among the pack ice.
Eskimo sleds could be built from wood, bone, or even animal skins wrapped around frozen fish. Dishes were made by carving soapstone, bone, or musk ox horn. They wore two layers of skins, one fur side in, the other facing out, to stay warm.
Eskimos had to be good hunters to survive. In the winter, seals did not come out onto the ice. They only came up for air at holes they chewed in the ice. Eskimos would use their dogs to find the air holes, then wait patiently until the seal came back to breathe and kill it with a harpoon. In the summer, the seals would lie out on the ice enjoying the sun. The hunter would have to slowly creep up on a seal to kill it.
The Eskimos would use their dogs and spears to hunt polar bears, musk ox, and caribou. Sometimes they would kill caribou from their boats as the animals crossed the rivers on their migration. The Eskimos even hunted whales. From their boat, they would throw harpoons that were attached to floats made of seal skins. The whale would grow tired from dragging the floats under the water. When it slowed down and came up to the surface, the Eskimos could keep hitting it with more harpoons or spears until it died.
During the summer months, the Eskimos were able to gather berries and roots to eat. They also collected grass to line their boots or make baskets. Often the food they found or killed during the summer was put into a cache for use during the long winter. A cache was created by digging down to the permafrost and building a rock lined pit there. The top would be covered with a pile of rocks to keep out the animals. It was as good as a freezer, because the food would stay frozen there until the family needed it.

Eskimos did not have a government or laws. They learned early in life to help each other in order to survive. They always shared food, since it was often so hard to find. They usually moved around in small groups looking for food, and sometimes they would get together with other groups to hunt for larger animals such as whales. The men did the hunting and home building, and also made weapons, sleds, and boats. The women cooked, made the clothes, and took care of the children.
Today, most Eskimos live in modern houses built by the government of their country. Many still hunt or fish for some of their food and income. They use rifles and other modern equipment when they go. They sell some of the fish they catch or the beautiful things they make for extra money. In Alaska, many of the people have received money from the oil discovered in that state. However, there are not many jobs for people in the Arctic. Often they must have help from the government to survive. The Arctic is so different from the rest of the world that the way of life in the south does not work well in the north.
Speech Rhythm of American English
How to make your speech more effective
Just like music, English speech has a beat. The combination of strong and weak beats creates the rhythm of English. English speakers expect to hear a certain rhythm when they listen to you. In Lesson 1, you will learn the two basics of creating a good rhythm. First, watch the video for an overview of how to make your speech more effective.
Basic Rhythm 1: Syllable Stress in Words
A syllable is a part of a word that has a vowel sound. You create the basic rhythm by stressing and unstressing vowel sounds. Stressing the vowel sound means you say it louder, longer, and higher in pitch. Remember one syllable is stressed in words with two or more syllables. If you stress the wrong syllable, your listeners might not understand you. Look at the pictures below. Which syllable do you stress to say "two lips" and "tulips"?
Watch the video below and practice stressing one syllable in a word. Clap your hands along with the speaker.
Basic Rhythm 2: Counting Syllables in a Word
Syllables are the number of beats in a word. Counting the number of syllables will help you become aware of the rhythm. Can you count the number of syllables in a word? Place your thumb under your chin. Count the number of times you move your jaw while you say a word. That will give you the number of syllables in that word.
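If you like to experiment, the jaw trick can be roughly approximated in software. The sketch below is a crude heuristic, assuming that groups of consecutive vowel letters approximate syllables in English spelling; it is an illustration only, not part of the lesson, and it will miscount many words.

```python
import re

def estimate_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowel letters.

    A crude spelling-based heuristic, not a true phonological analysis;
    many English words will be miscounted.
    """
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

for w in ["tulips", "banana", "cake", "happy"]:
    print(w, estimate_syllables(w))  # tulips 2, banana 3, cake 1, happy 2
```

Clapping along with real speech, as in the videos above, remains the more reliable way to internalize the rhythm.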
Cownose rays are related to sharks and skates. This stingray belongs to the Family Myliobatidae, which includes bat rays, manta rays and eagle rays.
Cownose rays get their name from their unique forehead, which resembles the nose of a cow. They are brown to olive-colored on top with no spots, and pale below. Cownose males are about 2½ feet across. Females are 2-3 feet across. The tail is about twice as long as the body. Beach-goers sometimes mistake these rays for sharks. When the rays are swimming near the surface, the tips of the wings sometimes stick out of the water, resembling a shark's dorsal fin.
Cownose rays can be found in the Atlantic Ocean along western Africa, the eastern U.S., the Gulf of Mexico and parts of the Caribbean. They are considered an open ocean species, but can inhabit inshore, shallow bays and estuaries. They prefer warm temperate and tropical waters to depths of 72 feet. Many gather in Chesapeake Bay during the summer months.
Cownose rays feed on bottom-dwelling shellfish, lobster, crabs and fish. To locate their prey, cownose rays have electroreceptors on their snouts as well as excellent senses of smell and touch. They will stir up the bottom with their flexible wing tips or use their noses to root around in the mud or sand. Once they find their prey, they flap their wings rapidly to move the sand aside.
They suck water and sand into their mouths and blow it out through their gills to create a depression in the sand that allows easier access to their food.
They have very strong teeth arranged in flat plates that are perfect for crunching hard-shelled prey. These rays spit out the shells of the animals they eat, and only swallow the soft body parts.
Stingrays are known for their stingers, but they are actually very docile creatures.
Cownose rays school and migrate in large groups, sometimes up to thousands of individuals. They are strong swimmers and can migrate long distances. Scientists believe that the migrations may be triggered by seasonal changes in water temperature and sun orientation.
They have been seen jumping clear out of the water and landing on their bellies, making loud smacking sounds. They don't rest on the bottom as much as other types of stingrays.
Cobia and a variety of sharks will prey on cownose stingrays. Many sharks have been found with barbs from cownose rays embedded in their heads and jaws.
What causes tooth decay?
Tooth decay, also known as caries or cavities, is an oral disease that affects many people. Natural bacteria live in your mouth and form plaque. The plaque interacts with deposits left on your teeth from sugary and starchy foods and produces acids. These acids damage tooth enamel over time by dissolving, or demineralizing, the enamel, which weakens the teeth and leads to tooth decay. Tooth decay is not life-threatening and is highly preventable.
What types of foods may contribute to tooth decay?
Foods containing carbohydrates (starches and sugars), such as soda pop, candy, ice cream, milk, and cake, and even some fruits, vegetables, and juices, may contribute to tooth decay.
How can cavities be prevented?
The acids formed by plaque can be counteracted by the saliva in your mouth, which acts as a buffer and remineralizing agent. Dentists often recommend chewing sugarless gum to stimulate saliva flow. However, the best way to prevent cavities is to brush and floss regularly. Fluoride, a natural substance that helps to remineralize the tooth structure, makes the tooth more resistant to the acids and helps to correct damage produced by the plaque bacteria. Fluoride is added to toothpaste and water sources to help fight cavities. Your dentist also may recommend that you use special high concentration fluoride gels, mouth rinses, or dietary fluoride supplements. In addition, professional strength anti-cavity varnish or sealants may be recommended.
Who is at risk for cavities?
Because we all carry bacteria in our mouths, everyone is at risk for cavities. Those with a diet high in carbohydrates and sugary foods and those who live in communities without fluoridated water are likely candidates for cavities. Also, those with a lot of fillings have a higher chance of developing tooth decay because the area around the restored portion of a tooth is a good breeding ground for bacteria. In general, children and senior citizens are the two groups at the highest risk for cavities.
What can I do to help protect my teeth?
The best way to combat cavities is to follow three simple steps:
- Cut down on sweets and between-meal snacks. Remember, sugary and starchy foods put your teeth at risk.
- Brush after every meal and floss daily. Cavities most often begin in hard-to-clean areas between the teeth and in the fissures and pits on the biting surfaces of the teeth. Hold the toothbrush at a 45-degree angle and brush inside, outside, on top of, and in between your teeth. Replace your toothbrush every few months. Only buy toothpastes and rinses that contain fluoride.
- See your dentist at least every six months for checkups and professional cleanings. Because cavities can be difficult to detect, a thorough dental examination is very important. If left untreated, cavities can lead to permanent loss of the tooth structure, root canal therapy, and even loss of the tooth.
For more information, talk with your general dentist.
Hydrogen Benefits and Considerations
Hydrogen can be produced from diverse domestic resources with the potential for near-zero greenhouse gas emissions. Once produced, hydrogen generates power in a fuel cell, emitting only water vapor and warm air. It holds promise for growth in both the stationary and transportation energy sectors.
The United States relies heavily on foreign oil to power its transportation sector. Transportation accounts for about 71% of U.S. petroleum consumption, and the country imported about 40% of the petroleum it consumed in 2012. With much of the worldwide petroleum reserves located in politically volatile countries, the United States is vulnerable to supply disruptions.
Hydrogen can be produced domestically from resources like natural gas, coal, solar energy, wind, and biomass. When used to power highly efficient fuel cell vehicles, hydrogen holds the promise of offsetting petroleum in transportation.
Public Health and Environment
About half of the U.S. population lives in areas where air pollution levels are high enough to negatively impact public health and the environment. Emissions from gasoline and diesel vehicles—such as nitrogen oxides, hydrocarbons, and particulate matter—are a major source of this pollution. Hydrogen-powered fuel cell vehicles emit none of these harmful substances. Their only emission is H2O—water and warm air.
The environmental and health benefits are even greater when hydrogen is produced from low- or zero-emission sources, such as solar, wind, and nuclear energy and fossil fuels with advanced emission controls and carbon sequestration. Because the transportation sector accounts for about one-third of U.S. carbon dioxide emissions (which contribute to climate change), using these sources to produce hydrogen for transportation can slash greenhouse gas emissions. Learn more about hydrogen emissions.
Storage is another consideration: hydrogen's energy content by volume is low. This makes storing hydrogen a challenge, because compact storage requires high pressures, low temperatures, or chemical processes. Overcoming this challenge is important for light-duty vehicles, which often have limited size and weight capacity for fuel storage.
The storage capacity for hydrogen in light-duty vehicles should enable a driving range of more than 300 miles to meet consumer needs. Because hydrogen has a low volumetric energy density compared with gasoline, storing this much hydrogen on a vehicle currently requires a larger tank than most conventional vehicles. Learn more about hydrogen storage challenges from the Fuel Cell Technologies Program.
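To make the size comparison concrete, here is a back-of-the-envelope sketch. The fuel economy and 700-bar storage density figures below are illustrative assumptions for a typical fuel cell vehicle, not values taken from this article.

```python
# Rough tank-size estimate for a 300-mile hydrogen driving range.
# All constants are illustrative assumptions, not official figures.

RANGE_MILES = 300
FUEL_ECONOMY_MI_PER_KG = 60     # assumed fuel cell vehicle economy (mi per kg H2)
H2_DENSITY_KG_PER_L = 0.040     # approximate density of hydrogen gas at ~700 bar

h2_needed_kg = RANGE_MILES / FUEL_ECONOMY_MI_PER_KG   # about 5 kg of hydrogen
tank_volume_l = h2_needed_kg / H2_DENSITY_KG_PER_L    # about 125 liters

print(f"Hydrogen needed: {h2_needed_kg:.1f} kg")
print(f"Tank volume at ~700 bar: {tank_volume_l:.0f} L "
      f"(vs. roughly 45-60 L for a typical gasoline tank)")
```

Even under these optimistic assumptions, the hydrogen tank comes out at more than twice the volume of a typical gasoline tank, which is why on-board storage remains an active area of research.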
To be competitive in the marketplace, the cost of fuel cells will have to decrease substantially without compromising vehicle performance. See the Department of Energy Hydrogen and Fuel Cells Office Plan for plans and projections for the future of hydrogen and fuel cells.
The rapid movements of the field's axis to the east in the last few hundred years could be a precursor to the north and south poles trading places, the researchers suggest. "What we found that is interesting in our models is a correlation between these transient [shifts] and reversals [of Earth's magnetic field]," says Olson. "We kind of speculate there is that connection, but the chaos in the core is going to prevent us from making accurate predictions for a long time."
Bruce Buffett of the University of California, Berkeley, says the authors present an intriguing proof of concept with their model. "They are suggesting very cautiously that maybe this rapid change is somehow suggestive of us going into a reversal event," he says.
"You could imagine if the field were to collapse it would have disastrous consequences for communication systems and power grids."
How Much Should We Fear Incoming Solar Activity?
According to NASA, it is a mistake to assume that a pole reversal would momentarily leave Earth without the magnetic field that protects us from solar flares and coronal mass ejections from the sun. While Earth's magnetic field can indeed weaken and strengthen over time, there is no indication that it has ever disappeared completely. Moreover, even with a weakened magnetic field, Earth's thick atmosphere offers protection against the sun's incoming particles. A weaker field would certainly lead to a small increase in solar radiation on Earth, as well as a beautiful display of aurora at lower latitudes, but nothing deadly.
Movement of Earth's North Magnetic Pole Accelerating Rapidly
After some 400 years of relative stability, Earth's North Magnetic Pole has moved nearly 1,100 kilometers out into the Arctic Ocean during the last century and at its present rate could move from northern Canada to Siberia within the next half-century.
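A quick bit of arithmetic shows how sharp that acceleration would be. The Canada-to-Siberia distance used below is an illustrative straight-line assumption, not a figure from the article.

```python
# Back-of-the-envelope check on the pole's speed; distances are rough assumptions.
km_moved_last_century = 1100
avg_speed_km_per_yr = km_moved_last_century / 100      # ~11 km/yr century average

assumed_km_to_siberia = 4000                           # illustrative assumption
implied_speed_km_per_yr = assumed_km_to_siberia / 50   # speed to arrive in 50 yr

print(f"Century-average speed: {avg_speed_km_per_yr:.0f} km/yr")
print(f"Implied present-day speed: {implied_speed_km_per_yr:.0f} km/yr")
```

In other words, the projection implies the pole is now moving several times faster than its average over the past hundred years.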
If that happens, Alaska may be in danger of losing one of its most stunning natural phenomena - the Northern Lights.
However, rapid movement of the magnetic pole doesn't necessarily mean that our planet is going through a large-scale change that would result in the reversal of the Earth's magnetic field. It may also be part of a normal oscillation. Calculations of the North Magnetic Pole's location from historical records go back only about 400 years, while direct polar observations trace back to James Clark Ross in 1831 on the west coast of Boothia Peninsula.
No Reason To Panic
Earth's magnetic field has flipped its polarity many times over the millennia and reversals are the rule, not the exception.
Earth has settled in the last 20 million years into a pattern of a pole reversal about every 200,000 to 300,000 years, although it has been more than twice that long since the last reversal. A reversal happens over hundreds or thousands of years, not overnight.
This means a magnetic pole reversal is not a sign of doomsday.
Sharks and other cartilaginous fishes (i.e., rays, skates, and chimaeras), hereafter simply referred to as 'sharks', have been living in our oceans for over 400 million years. This makes them some of the world's oldest living vertebrate species. In comparison, Tyrannosaurus Rex lived around 67 million years ago. Within the last few decades, many shark populations have declined, mostly due to the rising demand for shark products, such as shark fins and meat, as well as the spread of advanced fishing technologies across several gear types (e.g., longlines, purse seines, and gillnets). The situation is aggravated by the poor representation of sharks in most fisheries management plans, unreliable data on landings, catches, and the global trade in fins, and a lack of political will and resources to conserve these vulnerable species. The International Union for the Conservation of Nature (IUCN) categorizes more than half of global shark species as threatened, vulnerable, or endangered.
The only international framework for conserving and managing sharks is the International Plan of Action for the Conservation and Management of Sharks (IPOA-sharks), developed and implemented by the United Nations Food and Agriculture Organization (FAO) in 1999. The plan provides guidelines to improve data collection and research, to identify priority species for conservation, and to develop and implement initiatives for education outreach and collaborative consultation. Although this framework is non-binding and ill-enforced, individual countries are strongly encouraged to develop National Plans of Actions (NPOAs). Today, not all countries have adopted NPOAs and most existing plans fall short in following the FAO guidelines, according to a recent article in Marine Policy by researchers from Dalhousie University.
The researchers reviewed Canada's 2007 NPOA for its overall effectiveness towards reducing total shark mortality in the Pacific, Arctic, and Atlantic Oceans, and compared it to Australia's NPOA. They found that Canada, although recognized as a leader in shark management, lacks firm commitments to sustainably manage non-commercial shark species and other cartilaginous fishes. While Canada's NPOA includes data on commercial shark species, it runs short in describing and addressing the threats to non-commercial species from bycatch. This warrants attention particularly because of the role of Canada, and several other countries, in depleting shark populations as a result of shark bycatch and discarding at sea. Canada's NPOA is currently under revision (plans should be revised every four years), which provides the opportunity to learn from countries that have followed the FAO guidelines more closely, like Australia. This includes comprehensive bycatch management plans, best handling practices, increased observer reporting, and improved surveillance options. Similar measures should be adopted in Canada's revised plan, so that the country can once again be considered a leader in shark management and conservation.
Several management tools used to reduce shark mortality are becoming increasingly popular. Spatial and temporal protected areas can minimize bycatch of sensitive species. Shark fin bans and the adoption of bycatch mitigation techniques in high-catch fisheries can help reduce waste and incidental catches. For example, the authors state that protecting an area identified as a potential nursery ground for the endangered porbeagle shark in the Grand Banks, off Newfoundland, may be essential for rebuilding the regional population. The proposed area could be declared a no-shark-fishing zone during the summer months when porbeagle sharks mate. Another management tool is setting quotas on bycatch. The U.S. and New Zealand developed such mechanisms for sea turtle and sea lion bycatch, respectively. Once a defined quota is reached, the fishery is closed for the season. This encourages less impactful fishing methods. Another approach to incentivize better fishing techniques is to place taxes on bycatch. Both quotas and taxes on bycatch require sophisticated monitoring mechanisms, like video surveillance on fishing boats, to enhance compliance. Including sharks in innovative management frameworks and adopting proper tools to effectively implement measures outlined in NPOAs is necessary to ensure the survival of endangered species.
Developing NPOAs that closely follow FAO guidelines, and revising those plans at least every four years, can be a powerful tool for sustainably managing shark populations. Most countries, including Canada, have not taken adequate action. Political will and concerted efforts of all countries are therefore required to reverse the trend of rapidly declining shark populations. Particularly important are key policies that improve data collection and research, manage bycatch, educate fishermen on species identification and handling practices, and foster coordination with stakeholders. Adopting management tools as seen in other fisheries is long overdue to achieve stable shark populations, and merits more attention from both researchers and policy-makers.
Bats are perhaps best known for their sophisticated use of sound: Like a ship’s sonar, the flying mammals make high-pitched noises and listen for returning echoes to navigate and hunt, an ability known as echolocation. But one family—the fruit bats—doesn’t use this sort of advanced tracking. Now, a new study suggests that all bats were once able to echolocate in this fashion, providing new evidence in a decades-long debate and shedding light on the origins of bat sonar.
Evolutionary biologists have long been divided over how bats developed their sonar. Fruit bats are closely related to a group of bats that are expert echolocators. Some say this means that advanced echolocation evolved once; an ancient bat developed the ability and passed it on to successive bat species, but fruit bats lost it along the way. Others argue that advanced echolocation evolved twice—once in an ancient ancestor bat, and again in the close relatives of fruit bats—and that fruit bats never had it.
Scientists have tried settling the question by looking for hard-to-find fossils of ancient bats, and by examining the genes of modern bats for clues about their past lifestyle. But Emma Teeling, an evolutionary biologist at University College Dublin, and colleagues from Shenyang Agricultural University in China looked at a different window into the past: modern bat ears. Sonar-wielding bats have extra-large cochleae, coiled ear bones that they use to pick up tiny differences in the pitch of returning echoes. The cochleae of adult fruit bats, on the other hand, are much smaller, more like those of other mammals that don't echolocate. But the team suspected that they might still show traces of their echolocating ancestry.
Using x-ray microradiographs, the researchers examined the developing fetuses of seven species of bat: two fruit bats, including the short-nosed fruit bat (Cynopterus sphinx), and five bats that use echolocation, including the great leaf-nosed bat (Hipposideros armiger). For comparison, they also looked at the cochleae of developing fetuses of five other mammals, including cats and rats.
The baby fruit bat’s cochleae were similar in size to those of echolocating bats, and they were about 65% larger than those of the other mammals, the team reports today in Nature Ecology & Evolution. That means the direct ancestors of fruit bats probably used echolocation—the large fetal cochleae are a sort of “living fossil” from an earlier time. And if it is true that fruit bats lost their ability to echolocate, then it’s likely that bat sonar evolved only once.
But not everyone is convinced. Rick Adams, an evolutionary ecologist at the University of Northern Colorado in Greeley, doesn’t agree with either side of the debate. He subscribes to a different version of the bat family tree, which has two main branches: one with all the bats that echolocate and one with all the bats that don’t. In his version, advanced echolocation also evolved only once, but only after the fruit bats split from the rest of the tree.
“As a preliminary study, it’s pretty interesting,” Adams says. But he adds that he would have chosen other mammals to compare with the fruit bats. Cats and rats are not that closely related to bats, Adams says; mammals like tree shrews or lemurs might be better, because it’s possible they could share fruit bats’ large cochleae.
But for many others, the discovery is a welcome one. "I really applaud the authors for taking us out of the wilderness," says bat biologist Brock Fenton of Western University in London, Canada. "It was an endless argument."
A septic tank is a sealed underground container — generally made of reinforced concrete — that collects and processes sewage discharge from residential or commercial structures. Common sources of wastewater include toilets, showers, sinks, dishwashers, and washing machines. Septic tanks are used wherever it is not practical or cost-effective to tie into a city sewage network. As a result, most rural and remote structures use septic systems to discharge household wastewater.
(Drawing reproduced courtesy Ohio State University).
Naturally-present anaerobic bacteria attack wastewater solids, reducing them to a liquid state safe enough to discharge through a drainfield into the surrounding soil. While septic systems are simple in concept, the micro-bacterial processes are very complex. As a result, proper maintenance and common sense go a long way to ensuring an optimally performing septic system.
Since residential septic systems are designed for specific capacities (generally in the 500-1500 gallon range), exceeding the design capacity will limit the effectiveness of the organic processing. This leads to early failure, which then becomes an inconvenience to the homeowners until the problems are remedied. Generally, newer systems must be at least 1000 gallon capacity or greater. Some municipalities even require dual-container systems, which provide better decomposition of waste solids.
Since roughly half of the organic waste does not decompose naturally, periodic pumping is required every few years to keep septic tanks in peak operating condition (see Table 1). This removes any remaining organic sludge and other wastes that accumulate at the bottom of the tank.
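Table 1 is not reproduced here, but published pump-out guidance generally scales the interval with tank capacity and household size. The sketch below encodes one such rule of thumb; the calibration is an illustrative assumption, not the article's table or an engineering standard.

```python
def pumping_interval_years(tank_gallons: int, occupants: int) -> float:
    """Very rough estimate of years between septic tank pump-outs.

    Calibrated so a 1000-gallon tank serving four people lands near the
    commonly cited 2-3 year interval. An illustrative rule of thumb only,
    not this article's Table 1 or an engineering standard.
    """
    if occupants < 1:
        raise ValueError("occupants must be at least 1")
    return (tank_gallons / 1000) * (10.0 / occupants)

for tank, people in [(1000, 4), (1500, 4), (1000, 2)]:
    years = pumping_interval_years(tank, people)
    print(f"{tank} gal tank, {people} occupants: pump roughly every {years:.1f} years")
```

Whatever estimate you use, an inspection is the definitive trigger: tanks should be pumped when accumulated sludge and scum take up a substantial share of the tank volume, regardless of elapsed time.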
Limiting unnecessary water flow into the system also helps. Therefore, washing machines and dishwashers used with bio-degradable detergents that produce “gray water” could be discharged separately rather than through the septic system.
Furthermore, since the organic environment inside a septic tank is susceptible to disruption, never introduce solvents, bleach, acids, or petroleum products into the system. Solids can begin to overflow into the drainfield, causing them to clog — resulting in a very expensive repair.
Expensive additives that claim to promote improved decomposition are not necessary. In fact, the larger scientific community specifically recommends against their use. Typical septic systems are full of bacteria, which if left on their own in a properly maintained environment, will promote natural decomposition. If you are injecting such additives, you may be wasting your money or, worse, hampering the natural processes at work.
Evolution of the Septic Tank
In the early days when man needed some privacy and protection from the elements, he dug a hole in the ground, lined it with stone, brick, wood or other available material and built an "outhouse" style structure. Delivery of waste to its final resting place was by gravity. Once the hole filled up, the outhouse was moved to a new location. Sometimes lime or ashes were used to subdue the odors.
It wasn't until the mid-1800s that indoor plumbing and the toilet took root. Only in 1880 did toilet paper come into existence, courtesy of the British Perforated Paper Company. At last, man was able to relax in the comfort of home.
Although the outhouse eventually began to disappear, proper waste disposal remained an issue. The practical solution was to connect a pipe to the pit that once served as the outhouse. Covering the hole provided protection from accidental falls and natural odors. The pit serving the toilet became known as a cesspool.
Soon it became obvious that the cesspool couldn’t handle the extra load of household wastewater. Eventually it was discovered that by putting a watertight tank in line between the house and the cesspool, much of the waste could be removed from the flow of wastewater, trapped in the tank where it would naturally decompose. This treatment chamber became known as the septic tank. Note that the septic tank has a baffle at each end to help keep waste in the tank. The original pit remained as the part of the system that returned “clarified” wastewater to the ground. It then became known as a dry well.
Due to heavy use, poor soil conditions, age of the system or a combination of these factors, the drywell sometimes plugged up. (Wastewater still contains soaps, greases and other solids that seal the pores of all but the most porous soils.) Often a second (or third or fourth) drywell would be installed after the first to increase the soil absorption area. Note that an alert installer typically places a baffle at the outlet of the original drywell to help keep floating solids from passing into the new dry well.
Then as environmental awareness increased, it was learned that many septic systems were built too deep into the ground. There was risk of polluting drinking water by allowing wastewater to flow directly into the water table before it was properly treated by filtration through the soil. It wasn’t until 1967, for example, that the State of New Hampshire passed regulations requiring any leaching portion of a septic system (the part that sends water back into the ground) to be at least four feet above the seasonal high water table. This resulted in the switch from dry wells to leach fields, using larger “footprint” areas much shallower into the ground.
About the same time, most installers switched from the old-style steel septic tanks to the more permanent concrete type. Then as man was forced to settle on poorer ground with higher water tables, leach fields began to get pushed out of the ground to maintain separation to ground water. In many cases, pumps now have to be installed to get effluent up to these mound systems.
To save space and simplify construction of these raised systems, many new approaches have been developed including the use of plastic or concrete chambers as well as other innovations.
If you have a relatively new system that employs one of these modern innovations, chances are that you have a plan available to show you the type of system and its location.
If you have an old house with an unknown type of system, you could be anywhere on this evolutionary chart.
Using the accompanying troubleshooting tips should help you determine what type of system you have and also what is wrong with it if you are having a problem.
Locating Septic Tank
Remarkably, homeowners may not always know the location of a septic system. Older homes may have no written documentation, or the former owners cannot be located. If the exact location of a septic system is unknown and the local city or county has no written records, begin looking for signs of a tank outside the house where the waste pipe exits the foundation or basement wall. Note the direction of the pipe through the wall. When plumbing exits below a slab on grade, check the side of the house with roof vents, especially if most of the plumbing is on that side of the house. Look for a spot on the ground where snow melts first, grass turns brown, or there is a slight depression or mound. Steel tanks will sometimes bounce slightly when jumped on. But be careful, steel lids rust out! Falling feet first into a septic tank is dangerous and unhealthful.
A thin steel rod with a tee handle makes a handy probe. Carefully pierce the probe into the topsoil until achieving several “hits” at the same depth. This indicates the top of the tank. A metal detector can help locate even concrete tanks and cesspool covers as they generally have steel reinforcing bars within. Another trick is to insert a snake in the house cleanout and push it until it stops. Gently sliding the snake against the inlet baffle can often send a shockwave that can be heard or felt at the ground surface by a second person. (Note that sometimes a snake can curl up within a septic tank, or particularly within a cesspool or drywell as there is no inlet baffle — making this technique useless.)
If the snake hits an obstruction but cannot be felt at the surface, remove it from the cleanout and measure its penetration into the pipe. Draw an arc on the ground at the distance of snake penetration from the house and try again with the probe along this arc. Remember that the pipe from the house may not be heading straight towards the tank.
If all else fails, locate and uncover the waste pipe where it leaves the house and again every few feet until the tank is located. Or ask a previous owner, neighbor, or septic pumper who may have serviced the system in the past.
Note: Devices are available that transmit a radio signal along a snake or from a thin "mole" that can either be flushed or taped to the end of a snake. This signal is traced by a receiver wand as the snake is pushed through the waste pipe with uncanny accuracy.
Determine Tank Type
Next, determine the type of tank:
Primary / secondary septic tank
Two or more tanks are used in some installations for better settling and detention of solids. The first tank should have fresh waste entering directly from the house. (Flush colored water or similar recognizable item down the toilet and watch it enter at inlet check point.) The second tank should have a little floating grease and scum, with some settled sludge at the bottom, Note that a septic tank always has an outlet unless it is being used as a holding tank.
Cesspool or drywell
Cesspools and drywells generally have no outlet and seldom have an inlet baffle. The liquid level could be low in a septic tank if it is rusted out (steel tank) or if the center seam leaks (concrete tank). If fresh waste is present, see glossary: cesspool. If no fresh waste is present, see glossary: drywell.
Grease trap
Found in restaurants, inns, markets, etc.
Pump tank
Used if the system is not gravity fed. Sometimes called a pump chamber.
Once you locate the septic tank, you may wish to have it pumped out. If water runs back into the septic tank from the outlet pipe when the tank is pumped out, this is a sign that the system has failed. Possible causes include compacted soil or a saturated drainfield.
Troubleshooting Tips
Check lowest fixture or drain
If the problem is a septic blockage, water should back up through any drain that is below the level of the toilet when it is flushed. Check the washing machine outlet, floor drain, bathtub, or downstairs apartment, or remove the cleanout plug carefully (to avoid a flood). If no backup occurs, the problem is likely with the toilet or other household plumbing only.
Distribution box problems
If the distribution box for a side-hill trench system is out of level, one trench may be taking all the water and “failing.” Re-level the pipes and block the outlet to the overloaded trench for several months. Or, roots could be blocking one or more pipes. Remove the roots and seal the joints where roots entered, if possible. Note that an unlevel D-box will not affect a leach bed as severely, because water will find its own level through the stone.
Pumps and float failures
Exercise care when handling pumps, as they have 110- or 220-volt supply lines, which may not have GFIs (ground fault interrupters). Some float systems (which turn the pump and alarm on and off) may also carry full line voltage. Use insulating rubber gloves and follow up with a disinfecting hand wash for sanitation. Or call a licensed plumber if required by code.
Exercise care using a snake in cleanouts or drains, as some waterborne diseases can be transferred through contact. Use rubber gloves and surgical masks and follow with a disinfecting wash. A stiff garden hose can sometimes be used in place of a snake. Disinfect it after use with chlorinated bleach and fresh water.
Leach field failure
Usually means the soil is plugged due to age, overuse, underdesign, lack of maintenance, or a combination of these. Requires field replacement or rest. See: Alternating Fields.
Drywell failure
Same reasons as above. However, drywells can sometimes be excavated around and repacked with crushed stone to create a new soil surface for absorption. Check codes.
Piping problems
Settling, breaking, crushing, pulling apart, and backslope are installation related. Freezing, plugging at joints, corrosion or decomposition, and root plugging (though also caused by poor installation) can occur later. Insulating, replacing, releveling, sealing joints, and properly backfilling will resolve most problems.
Septic tank problems
Plugging often occurs from scum buildup within baffles, roots entering through poorly sealed joints, tanks installed out-of-level or backwards, or pipes sticking into the tank too far and nearly hitting baffles, blocking waste. Correct as needed.
Locating field or drywell
Follow the directions for finding a septic tank, except start at the septic tank outlet rather than at the house. The snake will not hit a baffle within a drywell, as there is none. It may or may not hit the side of a distribution box, but it could possibly pass through into one of the outlet pipes if that pipe is in line with the inlet.
Tips and Suggestions
Reprinted from the National Onsite Wastewater Recycling Association (NOWRA)
Do:
Conserve water to reduce the amount of wastewater that must be treated and disposed of
Repair any leaking faucets and toilets
Only discharge biodegradable wastes into system
Divert down spouts and other surface water away from your drainfield
Keep your septic tank cover accessible for tank inspections and pumping
Have your septic tank pumped regularly and checked for leaks and cracks
Call a professional when you have problems
Compost your garbage or put it in the trash
Don't:
Use a garbage grinder
Flush sanitary napkins, tampons, disposable diapers, condoms and other non-biodegradable products into your system
Dump solvents, oils, paints, thinners, disinfectants, pesticides, or poisons down the drain; these can disrupt the treatment process and contaminate the groundwater
Dig in your drainfield or build anything over it
Plant anything over your drainfield except grass
Drive over your drainfield or compact the soil in any way
Example 1: A 1,000 gallon tank in a home with 4 people should be pumped at least every 3.7 years.
Example 2: A 1,500 gallon tank in a home with 6 people should be pumped at least every 2.6 years.
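As a quick illustration (my own addition, not from the original guide), the two examples above can be encoded in a small lookup table. Real pumping-frequency tables published by university extension services cover many more tank and household sizes; the sketch below knows only the two combinations given here.

```python
# A sketch, not from the source: years between septic tank pumpings,
# keyed by (tank size in gallons, number of occupants). The two entries
# are the examples given in this document.
PUMPING_YEARS = {
    (1000, 4): 3.7,
    (1500, 6): 2.6,
}

def pumping_interval(gallons: int, people: int) -> float:
    """Return the recommended maximum years between pumpings."""
    try:
        return PUMPING_YEARS[(gallons, people)]
    except KeyError:
        raise ValueError(
            "No entry for this tank/household combination; "
            "consult a published pumping-frequency table."
        ) from None

print(pumping_interval(1000, 4))  # 3.7
```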
Alarm
An electromechanical device that provides audible and visual indication that the water level in a pump or holding tank is above recommended levels.
Alternating leach field
One of two or more leach fields designed to be used while the other(s) rest. They are generally fed via a manually operated diverter valve located in the line from the septic tank.
Baffles
Pipe tees or partitions within a septic tank which reduce turbulence at the inlet and prevent floating grease and scum from escaping into the leaching system at the outlet. (They are usually the first part of a steel tank to rust away, leaving the leach field or drywell unprotected from excessive solids overloading.)
Cesspool
The original type of sewage system, often still in use in older homes. They were simply a single hole in the ground loosely blocked up with locally available materials – stone, brick, block, or railroad ties – and capped either with ties covered with a layer of old steel roofing or a cast-in-place concrete lid with a cleanout hole near the center. All household wastewater entered and the liquid portion was absorbed into the ground. When the soil plugged, a new cesspool was added. Wiser installers placed an elbow, or better still, a tee in the outlet pipe from the first cesspool, creating a baffle to hold back floating grease and scum.
In a sense, this created the first type of septic system, because the first cesspool in the line, sealed by its own demise, served as a septic tank and the subsequent tank provided a greater degree of settling and separation of soil-plugging solids and some absorption. (Owners often have the first tank pumped out to maintain system operation.)
Chambers
Open-bottomed pre-cast concrete or plastic structures placed next to each other in an excavation to take the place of crushed stone in a leach field. Unlike leach fields, heavy-duty chambers can be driven over.
Cleanout
A removable plug in a “wye,” or a “tee” in a sewer line where a snake can be inserted to clear a blockage.
Distribution box or D-box
Usually a small square concrete box within a leach field from which all pipes lead to disperse effluent within the field. Newer boxes should be marked at the surface to protect from vehicle traffic.
Drywell
Constructed identically to a cesspool, differing only in that the clarified effluent from a septic tank or the wastewater from a washing machine or other grey water may enter. Modern drywells are often pre-cast perforated rings surrounded by crushed stone to increase the absorption area. Drywells can also be used to return storm water to the ground or to relocate basement drainage water to another location above the water table.
Drywells are not commonly installed today because of laws requiring the bottom of a leaching system to be 4 feet above the seasonal high-water table.
Dug well
A water supply well that is simply a hole in the ground lined with stone, brick, concrete, plastic or steel to hold its shape. The lower portion of the lining is perforated, or pierced, to let in water from the aquifer or ground water table. The upper portion of the lining is water tight to keep surface water from entering and contaminating the well. Dug wells are often called shallow wells to differentiate them from drilled or driven wells that extend much deeper into the ground. Dug wells in our area are often a minimum of ten feet or so into the ground and a maximum of 20 to 25 feet, a practical and safe limit for machines to dig.
Shallow wells for water supply are very similar in concept to dry wells which return wastewater or rain water back to the ground. Both are designed to exchange water between the structure and the soil. The major difference is that water wells are purposely built into the ground water table and dry wells are built above the water table to keep wastewater from entering untreated.
Effluent
The clearish liquid that flows out of the septic tank after the tank has “taken out the big pieces.”
Filter fabric
Synthetic cloth-like material that is used for several different types of construction-related applications such as erosion control, road stabilization and soil separation. Can consist of either woven or non-woven fibers of varying thickness and weight. Available in 12 to 15 foot wide rolls, several hundred feet in length. Woven fabrics (usually black) resemble the modern day grain bags while non-woven fabrics can resemble a range of materials from soft felts to the stiff shiny house wrap (to which they are closely related) usually seen enveloping homes under construction.
Grease trap
An in-ground chamber similar to a septic tank, usually used at restaurants, markets and inns to trap grease from the kitchen wastewater before it reaches the septic tank. Unusual to find in private homes.
Grey water
All liquid wastewater except for the toilet wastes (sink, shower, washer, etc.).
Leaching system
The part of a septic system that returns water to the ground for re-absorption. Could be a drywell, leach field, trench, chamber, etc.
Leach bed
A leaching system consisting of a continuous layer of crushed stone about a foot deep — usually in a rectangular layout — with perforated pipes laid level throughout to disperse effluent as evenly as possible over the entire bed.
Leach field
Term often used to describe either a leach bed or leach trenches.
Leach trenches
Built essentially like beds, except that each pipe is in its own stone-filled level trench, usually 3 feet wide. Each trench can be at a different level than the other trenches. Well suited to sloping ground.
Mound (or raised) system
A leach bed built on a mound of fine to medium-grained sand to elevate it above the seasonal high water table and/or to accommodate a system on a hillside.
Percolation test
A shallow, hand-dug hole saturated with water, performed as part of a septic design to determine the soil's permeability – the rate at which water is absorbed by the soil – which dictates the system size.
Pump station or pump tank
A watertight container, usually (but not always) separate from the septic tank, into which effluent flows by gravity and is then ejected by a submersible electric pump through a pressure line to the leaching system. Pump tanks are often hooked to an alarm to warn of pump failure.
Seasonal high water table
The highest elevation that groundwater reaches within the year (usually in the spring). Many states require the bottom of a leaching system to be at least 4 feet above this point.
Septic design
Usually consists of a topographic survey, test pit, and percolation test, plus information about the water supply and subdivision, and a filing fee to the state, prepared by either a licensed designer or the owner.
Septic tank
A watertight chamber, which all household wastewater enters for settling and anaerobic digestion of greases and solids. Original tanks were made of asphalt-coated steel. Modern tanks are made of concrete, fiberglass, or plastic. All tanks should have a set of baffles, which are critical to their operation.
Most tanks have an inspection hatch at both the inlet and the outlet, and some have a third hatch in between for pumping access. The locations of each of these should be recorded and/or marked. Steel tanks often have one round lid that covers the entire tank.
Septic tanks should be pumped every three years or so in normal operation. They should not be treated with any additives and should be protected from receiving any of the harmful chemicals used in many homes and commercial workshops. This includes disinfectants or bleaches, which can kill bacteria in the tank, and solvents, darkroom chemicals, or other materials that could pollute the water supply.
Test pit
A hole dug to determine soil type, seasonal high water table, and depth to ledge. Some states require a test pit of specific depth (to determine that ledge is a minimum number of feet below bed bottom) while others require only a shallow pit to determine depth to hardpan soils.
During World War II, weapons research led to some strange places (bouncing bombs, anyone?). In addition to creating new weapons, teams were attempting to find ways to attack without tipping off the enemy that bombs were en route. Enter: animal-borne bombs.
One prominent failure was the “bat bomb.” The idea was to release bats from the sky (carried in containers with parachutes), which would then roost in the nooks and crannies of a city. Once there, the incendiary devices they carried would be activated, creating fires in places normally inaccessible to typical warfare. Dr. Lytle S. Adams, creator of the concept, believed it could be an alternative to nuclear weaponry:
Think of thousands of fires breaking out simultaneously over a circle of forty miles in diameter for every bomb dropped. Japan could have been devastated, yet with small loss of life.
Unfortunately, the idea would never come to pass. By mid-1944, $2 million had been spent on the project, and it wouldn't be ready for at least another year. In an effort to find a quick end to the war, all funding was diverted to atomic bomb research. The bat bomb would never be.
Another animal-guided weapon was “Project Pigeon,” in which pigeons would not carry bombs but would yet again be carried by them. American behaviorist B.F. Skinner proposed that pigeons, with their exceptional cognitive abilities, be used to keep a bomb heading to its target. They would be placed in the nose of a bomb along with a screen on pivots. During the journey, the screen would display footage from outside while the pigeons pecked at the target. If the bomb was off course, the pigeons would peck in the direction the bomb should move. But the researchers were never able to get enough funding…
Our problem was no one would take us seriously.
If the US military had known their history, perhaps they would have taken the idea more seriously. In the 10th century, Olga of Kiev sent pigeons laden with sulfur into a city whose people were responsible for her husband's death. Once inside, her army set fire to the buildings, and no water could put the fires out. The Drevlian city was entirely burnt, and the people were killed or taken as slaves. In both the ancient and modern cases, the animals would have died in the attack; not so with a certain British concept…
In 1941, the British Special Operations Executive (SOE) came up with the idea to put explosives inside rat carcasses. The dead rat would be placed in a German boiler room in the hopes that, when discovered, it would be tossed into the stoker and explode. An explosion right underneath a boiler could cause extensive damage… But their first shipment of rat carcasses was intercepted by the Germans. And as the SOE later said…
“The trouble caused to them was a much greater success to us than if the rats had actually been used.”
The discovery led the Germans to believe that rats filled with plastic explosives had been hidden throughout the continent! A hunt began for any dead rats that could contain explosives. It cost the Germans time, effort, and money to search for a lot of rats that never existed.
1. Stereotypes Obtained from Media Portrayals of Men and Women
Examine a set of television shows to see if and how the stereotypes of women and men have changed. You may focus on a particular type of program or sample across a variety of programs (e.g., drama, comedy, cartoon, etc.). Then, examine one episode of 5 different programs and record the following for each character:
1. Character’s sex.
2. Character’s appearance.
3. Character’s role (housewife, doctor, detective).
4. Character’s personality traits.
5. Character’s behavior.
Conduct the same kind of experiment on a similar set of shows that appeared on television 20 or 30 years ago. Compare the two sets of stereotypes. A variation of this experiment is to review television commercials or magazine advertisements.
by Psi Wavefunction
Reading phylogenies is a skill that can appear deceptively simple at first glance. In essence, it really is simple, but also counterintuitive to the way the mind is used to working. Far too often — and popular literature as well as science journalism is notorious for this — mistakes are made, mistakes as simple as assigning a significance to the linear order of taxa, assuming that the rightmost taxon is the most derived or the ‘highest.’ Furthermore, some terminology like paraphyletic and polyphyletic may seem confusing at first, given that we typically don’t deal with sets of evolutionarily related items in everyday life.
Imagine you have a tree like the one shown here, where A-E are various related organisms. Termini (A-E), or ‘leaves,’ are typically extant taxa, whereas the internal nodes represent shared ancestors. The first obvious thing you can see is that A and B are more closely related to each other than either is to C. D and E are both equally close to A, B, and C, given that the last common ancestor is between these two groups at node 1. For example, naked mole rats (A) and the big-genomed Amoeba proteus (B) are more closely related to each other than either is to giant tree ferns (C), and each of the three is equidistant to Haloarcula quadrata (D), a strange square archaeon. (A common misconception would be to assume that tree ferns are closer to Haloarcula than are mole rats or amoebozoans, i.e., that, in our tree, D is closer to C than it is to B or A.)
A group that includes all the descendants of a given ancestor along with their last common ancestor is called a clade. A clade is sometimes referred to as a monophyletic group, or maybe not; it depends on who you talk to (see below). Animals are a clade, as are eukaryotes. As are A, B, and C, along with their common ancestor 2. In contrast, a group of taxa that does not include their last common ancestor, such as A, B, and D, is polyphyletic. Classic examples would be ‘flying animals’ (e.g., bats, birds, and insects) or amoebae (including Amoebozoans, Heteroloboseans, Chlorarachniophytes, and many animal cell types).
A paraphyletic group includes the common ancestor, but is missing some of the descendants, a classic example being my own kingdom of choice, Protista. Reptiles are often used to demonstrate that as well, since they gave rise to the birds and mammals that are not classified within the reptiles. (Thus, it can be argued that 'reptile' is not a valid taxonomic group.) In our alphabetic phylogeny above, the group A, C, and 2 is paraphyletic, as B is omitted. A paraphyletic group is likely, but not required, to carry some ancestral features that may have later been lost by some of its shallow-branching descendants.
Some have argued that Bacteria may be paraphyletic. In this view, Eukarya and Archaea are derived from Bacteria, rather than all three domains being distant from a long-lost common ancestor, as is commonly assumed. In this case, some Bacteria may be more closely related to Eukarya and Archaea than they are to other Bacteria. As the group ‘Bacteria’ would exclude Archaea and Eukarya, that would render it paraphyletic.
To make things fun, as usual, there are two conflicting terminologies (yes, taxonomists can’t even agree on the labels for their various types of phylogenetic groups). The conflict is over the word monophyletic. Does it mean any group that includes the last common ancestor (fig b below) or clade (as in fig a)? In an older system that is still used by some, the issue is ‘resolved’ with holophyletic meaning clade, and with monophyletic being a broader category that includes basically anything that is not polyphyletic, thus including both holophyletic and paraphyletic (see diagram). The more commonly used system these days lacks holophyletic altogether, and synonymises ‘monophyletic group’ with clade (as in fig a). Thus, one must be careful with their use of ‘monophyly’ depending on who one is talking to.
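To make these categories concrete, here is a minimal Python sketch (my own addition, not part of the original post) that classifies a set of leaves on the example tree as monophyletic, paraphyletic, or polyphyletic, following the ‘monophyletic = clade’ convention of fig a. The internal labels “AB” (ancestor of A and B) and “3” (ancestor of D and E) are invented for the code; the text only numbers nodes 1 and 2. The rule used to separate paraphyly from polyphyly — do the leftover leaves under the common ancestor themselves form a clade? — is one common operational rule, similar in spirit to what toolkits like ete3 use, not the only possibility.

```python
# The example tree: root node 1 splits into node 2 (ancestor of A, B, C)
# and the ancestor of D and E. Labels "AB" and "3" are invented here;
# the text only numbers nodes 1 and 2.
CHILDREN = {
    "1": ["2", "3"],
    "2": ["AB", "C"],
    "AB": ["A", "B"],
    "3": ["D", "E"],
}

def leaves(node):
    """All terminal taxa descended from (or equal to) a node."""
    kids = CHILDREN.get(node)
    if not kids:
        return {node}
    return set().union(*(leaves(k) for k in kids))

def mrca(group, node="1"):
    """Most recent common ancestor of a set of leaf names."""
    for kid in CHILDREN.get(node, []):
        if group <= leaves(kid):     # whole group under one child?
            return mrca(group, kid)  # then the MRCA is deeper
    return node

def classify(group):
    group = set(group)
    under = leaves(mrca(group))
    if under == group:
        return "monophyletic (a clade)"
    # Paraphyletic if the leaves excluded from the group form a clade.
    leftover = under - group
    if leaves(mrca(leftover)) == leftover:
        return "paraphyletic"
    return "polyphyletic"

print(classify({"A", "B"}))       # monophyletic (a clade)
print(classify({"A", "C"}))       # paraphyletic: B is omitted
print(classify({"A", "B", "D"}))  # polyphyletic
```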
And lastly, a caveat about trees: there is no single best tree. It is not even understood whether we can theoretically know whether we have the best tree or not. In some cases, the phylogeny is solid, and you can get away with just looking at a single tree, as with vertebrates for the most part. In many other cases, the phylogeny is not yet established, and conflicting data make it wise to examine several. I’d check phylogenies done by different methods and using different genes or proteins, and compare them to find where the agreements and disagreements lie. If, like me, you are not a phylogeneticist, your best bet would be to go with a tree made by a known and trustworthy group or method such as Bayesian analysis, or at least maximum likelihood. Be careful with parsimony! Check others as well, and note where disagreements arise. Ask people who are in the know. Phylogenies change, and having a rigid tree in your mind would only cause pain and tears! And even if a gene tree is indeed the perfect tree for that gene, that may not necessarily tell the whole story of the organism in question. Genes are finicky and can cause artefacts, sometimes very consistent ones that are also consistently wrong.
Use phylogenies with caution!
For more information about reading phylogenies and avoiding common misconceptions, I recommend:
Gregory, T. Ryan (2008) Understanding Evolutionary Trees. Evolution: Education and Outreach 1(2): 121-137.
and for one approach for rooting of the tree of life:
Cavalier-Smith, T. (2006) Rooting the Tree of Life by Transition Analyses. Biol Direct 1: 19.
Yana, aka Psi Wavefunction, is an undergraduate in the Department of Botany, University of British Columbia, and the host of the blog Skeptic Wonder: Protists, Memes and Random Musings.
Turn snack time into math time with a few simple tricks and activities to make math fun and delicious!
- Play “Roll for a Snack.” Use a small snack cup or plate. Take one die from a pair of dice and have your child roll it. Count out a raisin, goldfish cracker, small pretzel, or other favorite small snack for each dot on the die. Keep rolling and counting until the small cup or plate is full.
- Use small pretzel or carrot sticks to form the numbers 1 to 10. When the set is complete say a number (“8,” for example.) If she can point to it, she can eat it! Continue randomly saying the remaining numbers until they are all gone.
- Use carrot sticks, pretzel sticks, or Cheerios to measure a clean, flat object (around the edge of a plate, the long side of a cereal box, the short edge of the table, etc.). Count, then eat the measuring tools.
- Take a deck of playing cards with all face cards removed. Have your child turn over two cards. Add the numbers by counting the hearts, clubs, spades, or diamonds, and count out that many Cheerios, raisins, grapes, etc.
- Make patterns on a small plate. For example, put down a strawberry, grape, banana slice, and blueberry, then repeat the sequence until the small plate is full and ready to eat.
These fun activities will help your child with number sense, adding and subtracting, more than-less than, and measuring skills in an interactive and meaningful way.
- Students will learn how the federal government estimates the poverty line.
- Students will calculate alternatives to the federal estimate, in small groups or individually.
- Students will discuss the possible effects of underestimating the poverty line.
- Copies of the How Much Income is Really Required to Make Ends Meet? (PDF) for small groups or individual students (The handout can be adapted for younger students.)
The federal poverty line is used to determine individuals' and families' eligibility for particular kinds of aid and services and also is an important benchmark that helps the nation know how many Americans are struggling financially each year, and over time.
In 2012, the federal government set the poverty line for a family of four at $23,050. The figure is based on food costs — the government identifies how much it should cost to feed a family of four for one year and then multiplies that number by three. The formula has been used for decades.
What it fails to capture is this: In today's America, food expenses represent just one-fifth of the average household budget, not a third. Other costs — housing, health care, childcare and transportation — typically eat up larger portions of a family's budget.
Ask students if they have ever heard of the federal poverty line. If so, invite them to share what they know. Review the above framework and objectives with students.
Working in diverse small groups, or individually, ask students to complete the handout (PDF). Be sure to "walk the room" and help students or groups who are struggling with particular portions of the word problem. The answers, along with mathematical solutions, appear below.
So, how much income is really required to make ends meet?
1: Adjusting the federal estimate
- Based on the government's estimate, what is the annual cost of food for a family of four? ($23,050/3 ≈ $7,683.30)
- If annual food costs represent one-fifth of a family's expenses, how much money does a family need to purchase food and everything else it needs?
If $7,683.30 is one-fifth (20%) of the total budget x, then:
x = ($7,683.30 × 100)/20 = $38,416.50
The adjusted federal estimate is $38,416.50.
2: Using other benchmarks
What if the government used other factors —childcare or housing costs, for example — to calculate the poverty line, instead of food costs?
Typical rental (2 bedroom) costs in the United States today run $949 per month (Source), and a family with one four-year-old and one school-aged child pays an average of $1,066 per month in childcare costs (Source)
- If the federal estimate was based on childcare costs, not food, the poverty line would be $38,376.
- If the federal estimate was based on housing costs, not food, the poverty line would be $34,164.
($1,066 x 12 months x 3= $38,376)
($949 x 12 months x 3= $34,164)
So, how severely does the federal poverty line underestimate income a family really needs to make ends meet?
|Federal Estimate|Adjusted Federal Estimate|Childcare Benchmark|Housing Benchmark|
|$23,050|$38,416.50|$38,376|$34,164|
1. Judged against the adjusted federal estimate, the federal estimate underestimates the income necessary by about 40%.
($38,416.50 − $23,050 = $15,366.50); $15,366.50/$38,416.50 = .40 (40%)
2. Judged against an estimate based on childcare costs, the federal estimate underestimates the income necessary by about 40%.
($38,376 − $23,050 = $15,326); $15,326/$38,376 = .40 (40%)
3. Judged against an estimate based on housing costs, the federal estimate underestimates the income necessary by about 33%.
($34,164 − $23,050 = $11,114); $11,114/$34,164 = .33 (33%)
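For teachers who want to verify the worksheet arithmetic, or extend it with other benchmarks, the calculations above can be scripted in a few lines. This sketch is not part of the lesson materials; all dollar figures come from the handout. (Keeping full precision for the food budget gives $38,416.67 rather than the handout's rounded $38,416.50.)

```python
# Worksheet arithmetic; all dollar inputs are from the lesson above.
FEDERAL_LINE = 23_050            # 2012 poverty line, family of four
food_budget = FEDERAL_LINE / 3   # the formula assumes food = 1/3 of budget

# 1: Adjust so that food is one-fifth of the budget instead of one-third.
adjusted = food_budget * 5
print(f"Adjusted federal estimate: ${adjusted:,.2f}")  # $38,416.67

# 2: Benchmarks built the same way: monthly cost x 12 months x 3.
childcare_line = 1_066 * 12 * 3  # $38,376
housing_line = 949 * 12 * 3      # $34,164

# 3: How far below each benchmark does the official line fall?
benchmarks = [
    ("adjusted", adjusted),
    ("childcare", childcare_line),
    ("housing", housing_line),
]
for name, line in benchmarks:
    shortfall = (line - FEDERAL_LINE) / line
    print(f"{name} benchmark: underestimated by {shortfall:.0%}")
```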
Remind students that the federal poverty line serves two primary purposes: 1) to establish eligibility (or ineligibility) of individuals and families for certain kinds of aid and services, and 2) to help the nation gauge the number of Americans who are struggling financially, in a given year and over time.
As a whole class, discuss:
- Based on what we've learned, how likely is it that a family making $25,000 a year — an income above the poverty line — would struggle financially? Why? (Very likely — we've learned that it could take anywhere from $23,050 to $38,416 for a family of four to make ends meet.)
- What are some possible effects of the government underestimating the poverty line? (Answers will vary, but may include: people who need aid and services won't get them; the government won't really know how many people are struggling; it paints a healthier economic picture than there really is.)
- Why might the government be hesitant to change its formula for calculating the poverty line? (Answers will vary, but may include: we're running a deficit, and don't have money to provide more aid; it's the way they've always done it; changing it would mean changing lots of government programs, and that would be cumbersome for a big bureaucracy like the federal government.)
- Do you think the federal government should change the way it calculates the poverty line? Why? (Answers will vary.)
Statistical data in this activity is drawn from Ending Poverty, eds. John Edwards, Marion Crain and Arne L. Kalleberg, The New Press, 2007, Income, Expenditures, & Wealth, U.S. Census Bureau, 2004, at www.census.gov and the Bureau of Labor Statistics at www.bls.gov.
To help students understand that working people with all kinds of jobs can struggle financially or experience poverty, allow time for them to research the median wages of different professions. A good place to start is the Bureau of Labor Statistics. Samples include:
Profession: Median Salary
School cook: $24,230
Retail salesperson: $25,130
Preschool teacher: $30,150
City bus driver: $37,440
News reporter: $27,600
Realtor: $51,170 (includes commissions)
Note: These salary numbers assume full-time employment (40 hours per week) for 52 weeks per year.
(Source: All of the salaries are from this link: http://www.bls.gov/oes/current/oes_nat.htm)
Scientists are using volcanic gases to understand how volcanoes work, and as the basis of a hazard-warning forecast system.
When the USA’s Mount St Helens erupted in 1980, just two months after showing signs of reawakening, its blast was equivalent to 1,600 times the energy of the atomic bomb dropped on Hiroshima. It remains the most economically destructive volcanic event in the USA’s history.
When Eyjafjallajökull erupted in 2010 in Iceland, the ash cloud it emitted stranded around half of the world’s air traffic, with an estimated global economic cost of US $5 billion. Recently, magma has been on the move again, this time under and beyond Iceland’s Bárðarbunga volcano.
Volcanoes are the vents through which our planet exhales. Yet, not all volcanoes experience spectacular releases of energy, or even erupt at all: of the 500 or so volcanoes that are currently active worldwide, 20 might be expected to erupt in any one year. But, when volcanoes do erupt, they can cause almost total destruction in the immediate vicinity and the ash clouds they release can affect areas thousands of kilometres away.
Fortunately, the ability to monitor volcanoes has dramatically improved in recent years, thanks in part to the work of scientists like Dr Marie Edmonds in Cambridge’s Department of Earth Sciences.
Studying the behaviour of volcanoes such as Soufrière Hills in Montserrat, which caused the displacement of two-thirds of the island’s population (over 8,000 people) when it erupted in 1995, Edmonds and colleagues have accumulated huge datasets on everything from the type and quantity of gas belched from volcanoes, to the bulging and deformation of the volcanoes’ shape, to the altitude and quantity of ash thrown up into the stratosphere.
“About 600 million people live close enough to an active volcano to have their lives disturbed or threatened, so there’s a clear need for hazard assessment,” Edmonds said. “We knew that gas monitoring data could be essential for this, but monitoring depended on the use of cumbersome instruments that had to be driven around the crater’s edge.”
In the early 2000s, with funding from the Natural Environment Research Council (NERC), she and Dr Clive Oppenheimer from the Department of Geography developed a new gas sensor – one that is cheap, miniaturised and can be left long term on the volcano, relaying the data back to the observatory by radio modem. Today, sensors like these are used by scientists worldwide for monitoring volcanoes.
“Previous studies had shown that changes in the emission rate of gases correlated with volcanic activity but, because we have such a long dataset, we began to see another pattern emerging,” said Edmonds. “What you see at the volcano surface is really only the end part of the story.”
The intense temperatures and pressures deep in the earth find release through fissures and cracks, which carry dissolved gases such as carbon dioxide (CO2), sulphur dioxide (SO2), hydrogen chloride (HCl) and steam up through the mantle to the crust.
As the magma begins its journey to the surface, the pressure lowers and dissolved gases form tiny bubbles, which start to expand. Close to the surface, the expansion can be so great that it fuels an explosive burst of lava, shooting volcanic gases tens of kilometres into the earth’s atmosphere.
Because each species of gas dissolves at different pressures, the scientists can measure what is released at the surface and use this to work out the depth at which the gases separated from the magma to form bubbles. “The gases are like messages that tell you how the volcano is ‘plumbed’ and what shape that plumbing is in,” explained Edmonds.
“One intriguing pattern to emerge in Soufrière Hills is that the time series for the magma eruption and that for the SO2 gas eruption are completely unrelated to one another. There have been three big episodes of lava extrusion in the past 15 years and, although HCl flux seems to be a proxy for eruption rate, SO2 emission is uncoupled from what is happening in the eruption. We think the SO2 flux is telling us about something much deeper in the system.”
When these results were combined with a study of the rocks spewed from the volcano, Edmonds and colleagues began to piece together an idea of the physics and chemistry happening within.
They believe that a hot magnesium- and iron-rich ‘mafic’ magma is intruding from depth into the shallower magma chamber where it meets a silica- and crystal-rich ‘andesite’ magma that forms the main part of the eruption. However, it is the gas-rich mafic magma that Edmonds and colleagues believe triggers and fuels the eruption, and it is this that surface SO2 levels are a proxy for.
“This is far from the traditional view of how a magma chamber works,” said Edmonds. “It was thought to be balloon-like but now we think it’s vertically protracted, with different types of magma at different levels.”
“The surface SO2 is telling us about long-scale processes, of the order of months to years,” explained Edmonds. “Even though there may be no evidence of lava at the summit, if SO2 is still outgassing then there’s potential for the eruption to resume. We can to an extent use it to forecast a volcanic eruption.”
Recently, Edmonds and colleagues joined forces with researchers at other universities to understand how best to monitor volcanoes and earthquakes in two new NERC-funded projects. The £2.8 million Centre for the Observation and Modelling of Earthquakes, Volcanoes and Tectonics (COMET+) programme run by the University of Leeds will provide new understanding of geohazards to underpin national risk capabilities; and the £3.7 million RiftVolc project will create a long-range eruptive forecast for the largely uncharted volcanoes in the East African Rift Valley.
For Soufrière Hills, monitoring is providing a key input to the risk assessments by the UK government’s Scientific Advisory Committee for Montserrat, a British Overseas Territory. “All the surface signs indicate the volcanic activity is decaying away but, from the SO2 emissions, the volcano remains active at depth. We think there’s a huge magma reservoir – tens of cubic kilometres beneath the island, much bigger than the island itself. We know from looking at older ash deposits on the island that this volcano is capable of much larger eruptions than we have seen in recent years, perhaps even as large as the Mount St Helens blast.”
Note: The above story is based on materials provided by the University of Cambridge.
Bringing Creativity in Curriculum
A curriculum gives form to academic disciplines, laying out how each discipline is explored in a defined manner.
Reviewing the formation of a curriculum over a period of time will help understand how the curriculum was perceived, what it is and how it was subjected to changes to give new dimensions to learning.
The curriculum evolution theories have changed over eras and these changes have led to the change of the facet of global education.
Curriculum serves as the reference point for dissemination of knowledge from the basic to higher and advanced levels of education. Ideas, beliefs, value systems, thinking patterns and knowledge of subjects are imparted to learners via curriculum.
Ann Parker said, “Effective teachers don’t cover the curriculum, they uncover it.”
The curriculum landscape is enormous and all encompassing. The principles for the development of curricula have been shared by various pioneers which have acted as the foundation stones on which the present day academic structures are assembled.
Although the development and content of curricula have evolved and changed with time, the core structure has remained the same.
Some of the famous curriculum landscapes are:
⇒ Saber-tooth Curriculum (Benjamin, 1939)
⇒ The Child & the Curriculum (Dewey, 1956)
⇒ Realms of Curriculum Meaning (Phenix,1964)
⇒ Concept & Aims of Education (Peters & Hirst, 1970)
⇒ Pedagogy of the Oppressed (Freire, 1970)
⇒ Classic & Romantic Curriculum Landscape (Jenkins, 1972)
⇒ School & Cultural Development (Skilbeck, 1973)
⇒ Curricula as Socially Organized Knowledge (Young, 1973)
⇒ Ideology & the Curriculum (Inglis, 1974)
⇒ Relevance of Educational Practice & Piaget’s Theory (Kamii & Constance, 1974)
These landscapes provide rich and diverse viewpoints for grasping the curriculum drafting process, which is still in its infancy in the current scenario.
Few things to be considered while drafting the curriculum are:
⇒ The Learnings to be Imparted
⇒ Logical Structure/ Flow of the Curriculum
⇒ Priorities to be addressed in terms of Principles & Interests of Education
⇒ Ensuring the Balancing of the Curriculum Objectives
⇒ Tenure in which the Curriculum Completion is Intended
⇒ Adaptability of the Curriculum to Formative/Summative Assessments
How can Curriculum be made Creative
An effectively creative curriculum is one composed of enriching content, developed through thorough scientific research on the foundations of inquiry, exploration and discovery, and one which uses innovation to enhance academic and learning outcomes manifold.
Creative Curriculum not only ignites interest in students towards classroom-based learning but also leads to the overall development of their skills and personality.
The creative curriculum bridges the gap in clearly defining and understanding the learning objectives, as well as the expected learning outcomes, that lead to the upliftment of academic standards at an institution.
Historically, the need for the development of a creative curriculum can be attributed to a lot of popular theories, such as:
⇒ Socio-Cultural Theory (Vygotsky, 1934)
⇒ Cognitive Development Theory (Piaget,1936)
⇒ Human Needs Theory (Maslow, 1943)
⇒ Psycho-Social Development Theory (Erikson, 1950)
⇒ Emotional Development Theory (Greenspan, 1979)
⇒ Child Development Theory (Brazelton, 1992)
Development of the Creative Curriculum
The guiding principles for the development of a creative curriculum for the learners which also help us understand the need for such an innovation are listed below. These help learners to have:
⇒ Meaningful Interactions & Good Rapport Building with peers and seniors.
⇒ Socio-Emotional Competence
⇒ Purposeful and Constructive Actions
⇒ Caring Attitude towards the Environment
⇒ Effective Collaborations for their overall development and learning.
Several educational institutions have evolved their offered academic programmes by following the creative curriculum principles as conceived by The Fisher Early Learning Center in 2020. These principles are:
⇒ Developing & Maintaining a Trusting Relationship with each child, Implementing Nurturing and Trust Building routines.
⇒ Providing Responsive Care to meet the needs of the children.
⇒ Providing Learning Experiences to help children feel competent by offering them choices and challenges.
⇒ Helping children Express their Emotions aptly and providing them opportunities for pretend play.
Curriculum design needs a paradigm shift, away from rote subject knowledge and towards the development of cognitive and critical thinking skills (Bharucha, 2021).
The set of skills and qualities required in learners to excel in the present era which should be focused on by the educators during the curriculum development are:
⇒ Analytical Ability
⇒ Logical Thinking
⇒ Scientific Evidence Evaluation
⇒ Constructive Feedback Giving
⇒ Avoiding Bias
⇒ Root Cause Analysis
⇒ Questioning Preconceptions (even our own)
⇒ Verifying Credibility of Sources
⇒ Accepting Contrary Views
⇒ Civil Debates
⇒ Quick Thinking
⇒ Solution Focused Thinking
An innovative curriculum should help today’s learners develop all the above abilities.
Qualities of a Creative Curriculum Educator
⇒ Observant and Decisive
⇒ Provides Formative Feedback
⇒ Qualitative in Progress Evaluation
⇒ Promotes Continuous Learning
⇒ Meaningfully Interprets Pedagogical Knowledge
⇒ Uses Real Life and Relatable Examples
⇒ Develops Problem Solving Attitude
⇒ Promotes Interaction over Monologues
⇒ Promotes Critical Thinking & Lifelong Learning
The creative curriculum will be the much-needed disruption in the field of modern-day education, which is evolving greatly and is set to evolve even further.
Various regulatory bodies will now need to promote creativity in curriculum to meet and cater to the diverse and growing developmental needs of today’s learners.
Using creativity in curriculum design, from the primary school level through higher education, will go a long way toward transforming traditional curricula that focus on static knowledge into ones that develop learners holistically, while keeping them abreast of and ready for the challenges of the present day and the future.
Fuel poverty is a type of financial hardship in which households struggle to afford to heat their homes to comfortable, safe temperatures. Fuel poverty results from one or several factors – low income, high fuel costs, and energy inefficient homes. Fuel poor households worry about their energy bills, rack up debts to their suppliers, turn down thermostats and cope with cold living conditions.
Cold rooms aren’t just uncomfortable: they can exacerbate health problems and contribute to premature deaths. The government runs several social schemes to alleviate fuel poverty, including winter fuel payments and the warm home discount scheme.
To understand fuel poverty, and how these schemes can help if you’re experiencing it, read on:
In the past, a household was legally considered to be fuel poor if it had to spend more than 10% of its income on fuel to heat living spaces to an ‘adequate standard.’ An adequate standard is defined as 21°C in the main living room and 18°C in other inhabited rooms, including bedrooms.
This is still the definition that applies in Northern Ireland, Scotland, and Wales.
However, since 2013, England has used a different definition of fuel poverty—the so-called ‘low income, high cost’ model.
Under this definition, a household is categorised as fuel poor if it meets both of the following conditions:
- its required fuel costs are above the national median level; and
- were it to spend that amount, its remaining income would fall below the official poverty line.
This definition has been criticised because it is a relative measure of fuel poverty and may underestimate the true extent. For example, it has failed to account for rising fuel costs and falling incomes over the last decade, which have driven up rates of fuel poverty in the devolved nations. In contrast, under the new definition, rates of official fuel poverty have remained flat in England.
In 2016, the last year for which full data is available, 11.1% of households – 2.55 million – in England were fuel poor. That’s an increase from the 11% that experienced fuel poverty in 2015.
In Scotland, which uses the older definition of fuel poverty, 649,000 households (26.5% of the total) were categorised as fuel poor in 2016.
In Wales, there were 291,000 households – 23% of all households – in fuel poverty. In Northern Ireland, 160,000 experienced fuel poverty, 22% of the total.
According to data from the fuel poverty charity National Energy Action (NEA), 82.1% of households in fuel poverty are considered vulnerable, meaning they have children, elderly members, or someone with a long-term illness or disability. Among household types, single parent families have the highest prevalence of fuel poverty, at 26.4%.
Fuel poverty is more common in rural areas than in urban areas. 14% of populations in rural villages, hamlets, and isolated dwellings are in fuel poverty, due to a combination of inefficient homes and the use of more expensive heating, including electricity.
Additionally, people from ethnic minority backgrounds were more likely to experience fuel poverty than white households, with 17% in fuel poverty compared to 10%.
Fuel poverty is also more prevalent among those who rent in the private rented sector, with 19.4% of rented households experiencing fuel poverty. They make up more than a third (35.4%) of all households in fuel poverty. In contrast, 13.8% of social tenants and 7.7% of owner-occupiers are fuel poor.
Both definitions of fuel poverty allow the government and charities to assess not just the prevalence of fuel poverty but also its depth – the difference between a household’s average gas and electric bill and what it would have to be for the household to no longer be classified as fuel poor. This figure is known as the household’s fuel poverty gap.
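As a rough illustration (not from the source), the two measures described above can be expressed in code. The 10% rule and the gap calculation follow the definitions in the text; the income and bill figures in the example are invented for demonstration.

```python
# Illustrative only; the example figures are invented.

def fuel_poor_10pct(income: float, required_fuel_cost: float) -> bool:
    """Pre-2013 definition (still used in Scotland, Wales, and Northern
    Ireland): fuel poor if adequate heating would take over 10% of income."""
    return required_fuel_cost > 0.10 * income

def fuel_poverty_gap(required_fuel_cost: float, threshold: float) -> float:
    """Depth of fuel poverty: how far the required bill sits above the
    level at which the household would no longer count as fuel poor."""
    return max(0.0, required_fuel_cost - threshold)

income, bill = 18_000, 2_100                  # GBP per year (invented)
print(fuel_poor_10pct(income, bill))          # True: 2,100 > 1,800
print(fuel_poverty_gap(bill, 0.10 * income))  # 300.0
```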
In 2016, the average fuel poverty gap in England was £326 a year. This decreased 4.4% in real terms from 2015. The cumulative fuel poverty gap in England was £832 million.
However, rising energy costs led the Department for Business, Energy, and Industrial Strategy (BEIS) to forecast that the average fuel poverty gap would increase by 9% between 2016 and 2018, to £357.
The fuel poverty gap for those in rural areas was found to be double that in urban areas, at nearly £600 a year.
Fuel poverty is caused by one, or several compounding, factors:
- low income
- high fuel costs
- an energy inefficient home
Cold temperatures can be hazardous for our health and even fatal, especially among the old, the very young, and those in poor health. Cold interior temperatures increase the likelihood, and severity, of colds and influenza, and exacerbate existing health conditions like asthma and other respiratory issues. Cold rooms also have a detrimental impact on mental health. Overall, fuel poverty is estimated to cost the National Health Service £3.6 million every day.
The UK experiences an average of 32,000 excess deaths each year in the winter. Of those, a tenth can directly be attributed to fuel poverty, according to the End Fuel Poverty Coalition.
The government runs a number of schemes to alleviate fuel poverty, delivering both immediate relief on energy bills for vulnerable populations and long-term savings by increasing the energy efficiency of homes. These include the winter fuel payment and the warm home discount scheme mentioned above.
To build a sustainable playground for your school, you need to have the right resources that will allow you to reuse equipment and improve the quality of the area to become more eco-friendly. School playgrounds have changed significantly in the last decade due to the rising desire to see more environmentally friendly playgrounds for schools. An excellent way to learn how to build a sustainable playground is to look at our list of ways to make your school playground more eco-friendly.
Use More Recycling Bins
Do more for your school’s playground by using receptacles that have two openings: one for waste and one for recyclables. Teach your students the right way to recycle by providing recycling bins for different purposes. For example, you can use one recycling bin specifically for bottles and cans. Additionally, you may be able to take a field trip to a nearby recycling plant to dispose of the recyclables you collected on the playground. This will help teach your students to keep their areas clean, both at school and at home.
Use Recycled Material
The material you use to build your playground doesn’t need to be brand new. Secondhand materials are often better because they are gently used or have been donated to help give new life to older equipment. Reusing materials in this way helps improve the quality and lifespan of the playground equipment. When looking for materials, steer away from brightly colored plastics, as they do more harm than good. Instead, opt for something like a wooden playground. They’re making a comeback because they are more environmentally friendly and are a preferred choice over plastic playground equipment.
Use Nature as a Starting Point
Nature is all around us, and if schools want to avoid intruding on a habitat or causing any damage to the surrounding ecosystem, they should work with nature, not against it. For instance, if there are trees, bushes, and shrubs nearby, build within the perimeter of these plants to ensure the surrounding environment can stay as natural as possible. Also, when creating your school’s playground, consider using energy-efficient lightbulbs or solar-powered lights to help save on energy costs and reduce your energy output.
The history of playgrounds has drastically changed over the years. Today, schools continue to improve their playgrounds to reflect higher-quality materials and a more eco-friendly approach to life. By following these ways to make your school playground more eco-friendly, you’re on your way to providing your school with the right tools to become eco-friendly. In the process, you’ll inspire others to do the same.
HOUSTON — The ground beneath us and the trees over our heads have provided birds and mammals a place to call home for millions of years. Ecosystems change over time, sometimes leading to the loss of the animals that depend on them.
"One of the biggest drivers is loss of habitat,” said Lydia Beaudrot, assistant professor in biosciences at Rice University.
She's been working alongside research scientist Evan Fricke who has found that plants are becoming less resilient to a changing climate because of the decrease in animals spreading their seeds.
“In our changing climate, that means that the habitat suitable for a given plant species are basically moving in space," Fricke said. "So a place that has the right combination of temperature and precipitation now will basically be in a different location ten, 20, 30 years in the future. In order for plant species to actually make it to those locations, they need to move. But of course, plants individually can't move, but their seeds can.”
Fewer plants and trees mean less carbon can be sucked out of the atmosphere.
“We know that a lot of the tree species that these large-bodied animals disperse the seeds for have more density in their wood, and so they're able to store more carbon,” Beaudrot said.
Researchers say it’s a vicious cycle that leads to even more loss of habitat. However, all hope is not lost.
“There are a lot of efforts right now for increasing the amount of protected areas," Beaudrot said. "So, if you think about national parks or other kinds of places, reserves that are set aside to maintain biodiversity. Within Houston, we do have Memorial Park, which is a small park in the middle of the city, and there's actually been some camera-trapping work going on there. And it turns out there's some really neat animals that are in the middle of the city."
Footage has been captured by Houston Wilderness showing all the animals that come out at night.
“We need to understand what wildlife we have in the region and how it is faring," said Deborah January-Bevers, president of Houston Wilderness. "And then we need to understand where habitat is being degraded, where those wildlife are struggling to be able to have the habitat they need to thrive and survive.”
She says parks like this are of course everywhere in our country and while they help wildlife, humans have also disrupted the natural flow of wildlife. Houston has a solution that it thinks other cities can learn from.
“There is a major thoroughfare, memorial drive, that runs right down the middle of it," January-Bevers said. "And so it has separated the two pieces of the park for many, many, many years.”
So, the city is building a landscape bridge connecting one side to the other offering a space to restore native plants and animals to the area.
“It’s really only been going on about 18 months," January-Bevers said. "They've been able to pull this land bridge together really quickly.”
It’s constructions like this that Fricke and Beaudrot say can make a difference in cities all over the world.
“Supporting the connectivity of our habitats allows the animals that are there to reach their full potential in terms of seed dispersal,” Fricke said.
It is also called schisto, or snail disease. It is caused by a worm, the schistosome. The worms live in the veins of the intestines and can cause diarrhea, weight loss and belly pain, with the belly sometimes swelling greatly in volume (water belly), as well as problems in various organs of the body.
The eggs of the schistosome come out along with the feces of an infected person. If there is no cesspit or sewage system, the eggs can reach fresh water (lakes, ponds, streams, river banks, etc.). In the water, the eggs give rise to small larvae (immature forms quite different from the adult worms) called miracidia. The larvae penetrate a type of snail called a planorbid. Inside the snail, they reproduce and become another kind of larva, the cercariae, which leave the snail and swim freely in the water.
The cercariae can penetrate the skin of people who use water from lakes, ponds, streams and other places to bathe, wash, work, fish or do other activities.
In addition to treating the patient with medication, a sewer system must be installed to prevent eggs from reaching the water. People also need to have access to good quality water and to be informed about ways of transmitting the disease.
It is also necessary to combat the snail that transmits schistosomiasis, both with chemicals and by raising fish that feed on the snail, such as tilapia, tambaqui and piau. These fish can be eaten by people without risk of contamination.
Objective / Rules - Fill the grid with squares containing Blue(X) and White(O). - A 3-In-A-Row of the same colour/letter is not allowed. - Each row and column has an equal number of Blue(X) and White(O) squares.
Help Read the help/walkthrough page on 3-In-A-Row puzzles for a more detailed explanation and a walkthrough.
Mouse Usage Note: only when supported by your computer/device/browser/etc. Left-click = Blank >> Blue(X) >> White(O) >> Blank. Right-click = Blank >> White(O) >> Blue(X) >> Blank.
Checking If you click 'Check' the system will check for incorrect squares. If 'Show mistakes when checking' is checked they will be marked with a "!". If 'Show counts' is checked then the system will show the current count of Blue(X) and White(O) squares.
Uniqueness Each puzzle has exactly one solution, which can be found using logic alone and no guesses are ever required. If you think you've found another solution, then please double check the rules.
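For readers who want to experiment programmatically, here is a minimal sketch of a checker for the rules above; it is not part of the puzzle page, and the function names are my own. It accepts partially filled grids, enforcing the equal-count rule only on completed rows and columns.

```python
# Grid cells: 'X' (blue), 'O' (white), or None (blank).

def no_three_in_a_row(line):
    """Rule: no three consecutive identical filled cells."""
    return all(not (a == b == c and a is not None)
               for a, b, c in zip(line, line[1:], line[2:]))

def balanced(line):
    """Rule: equal counts of X and O, enforced once a line is complete."""
    if None in line:
        return True
    return line.count('X') == line.count('O')

def valid(grid):
    """Check every row and column against both rules."""
    cols = [list(col) for col in zip(*grid)]
    return all(no_three_in_a_row(line) and balanced(line)
               for line in list(grid) + cols)

solved = [
    ['X', 'O', 'X', 'O'],
    ['O', 'X', 'O', 'X'],
    ['X', 'X', 'O', 'O'],
    ['O', 'O', 'X', 'X'],
]
print(valid(solved))  # True
```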
When asked why feedback is important, I received varied answers. A few of the responses include, “It provides an opportunity to learn from mistakes.”, “Feedback helps people stay on track.” and “Feedback provides clarification.” The goal of most leaders is to develop a feedback-culture in which people are comfortable giving and receiving feedback.
One of the best definitions of feedback is provided by the Merriam-Webster dictionary. Here, feedback is defined as the “evaluative or corrective information about an action, event, or process.” Using this definition, we should all want to give and receive feedback with the goal of improvement. The problem occurs when people use feedback as a way to find fault with others.
Feedback should not be something that is only thought about once a year, during annual reviews, but given and received frequently. The problem is that many do not know how to give or receive feedback.
An effective method of providing feedback is by following the steps below:
- Be Positive – Whenever possible, start with identifying something the person is doing well.
- Don’t Make It Personal – Focus on behavior, not the person. Keep emotions in check when providing feedback on poor performance.
- Choose an Appropriate Time – Feedback should occur away from others and as close to the time of the action as possible.
- Be Specific – Provide examples when providing feedback. Telling someone they did a good job but not providing specifics is not useful feedback. The same is true when trying to provide feedback on poor performance.
- Include the Recipient – Ask the recipient if there are any questions. Engage in dialogue, if needed.
- Follow Up – When feedback addresses poor performance, follow up; when actions are corrected, reinforce the positive behavior.
When you’re on the receiving end of feedback, it can be difficult, as your task is to hear and accept what is said. You should be listening to understand. As the recipient of feedback, you should:
- Listen – Listen to the feedback without interruption. There will be time later to ask questions if clarification is needed.
- Check Your Responses – Make sure you are not using body language to put up emotional barriers.
- Be Open – No one is perfect. If you receive feedback with this attitude, you will be more receptive.
- Understand the Feedback – Make sure you understand what is being said to you.
- Reflect – Thoughtfully reflect on the feedback and decide on what action should be taken.
Everyone needs feedback to grow. Feedback is a gift of oneself to another. It is a gift of time and energy, imparting wisdom or observations to help someone else. Feedback allows you to use your experience to help and empower others. As with any other gift, feedback provides value for both the giver and the receiver. Give the gift of feedback today. |
Asthma is a chronic condition that causes inflammation and narrowing of the bronchial tubes, the passageways that allow air to enter and leave the lungs. When people with asthma are exposed to something that disrupts their regular breathing patterns, the symptoms can become critical.
There are two types of asthma: allergic and non-allergic. Symptoms include coughing, shortness of breath, chest tightness, and wheezing. The airways of a person with asthma are very sensitive and react to many things, and contact with these triggers leads to asthma symptoms. One of the most essential parts of asthma control is to recognize your triggers and then avoid them when possible. The one thing you should not avoid is exercise: taking medicine before exercise can allow you to stay active while avoiding asthma symptoms.
Asthma complications include:
- Signs and symptoms that disturb sleep or work.
- Sick days from work or school due to asthma flare-ups.
- Permanent narrowing of the bronchial tubes affecting your normal breathing pattern.
- Emergency room visits and hospitalizations for serious asthma attacks.
- Side effects from long-term use of some of the medicines used to stabilize severe asthma.
Proper treatment makes a big difference in avoiding both the short-term and long-term complications of asthma.
Complications that affect lifestyle:
Poorly controlled asthma can adversely affect your quality of life. The condition can result in:
- Absence from work
- Stress, anxiety, and depression.
If you think that your asthma is really affecting your quality of life, consult your doctor. Your personal asthma care plan might need to be reviewed to control the condition better.
In some cases, asthma can become a cause of a number of serious respiratory complications, including:
- a collapse of part or all of the lung
- respiratory failure, in which the level of oxygen in the blood becomes critically low or the level of carbon dioxide becomes dangerously high
- status asthmaticus, a severe asthma attack that does not respond to treatment.
All of these complications are life-threatening and require immediate medical treatment.
You and your doctor can make a step-by-step plan for living with asthma:
- Follow your asthma action plan – With your doctor and health care team, draw up a detailed plan for taking asthma medications and preventing an attack, then follow it strictly. Asthma is an ongoing condition that requires regular monitoring and treatment. Keeping your treatment under control will make you feel more in control of your life in general.
- Get vaccinated for influenza and pneumonia – Staying current with vaccinations can prevent the flu and pneumonia from triggering asthma flare-ups.
- Identify and avoid asthma triggers – A number of outdoor allergens and irritants, from pollen and mold to cold air and air pollution, can trigger an asthma attack. Learn what worsens your asthma, and take action to avoid those triggers.
- Monitor your breathing – Learn to recognize the warning signs of an upcoming attack, such as slight coughing, wheezing, or shortness of breath. But since your lung function may decline before you notice any signs or symptoms, regularly measure and track your peak airflow with a home peak flow meter.
- Identify and treat attacks early – If you act quickly, you are less likely to have a severe attack, and you won't need as much medication to control your symptoms. When your peak flow measurements decrease and alert you to an oncoming attack, take your medication as instructed and immediately stop any activity that may have triggered the attack. If your symptoms still don't improve, get medical help as directed in your action plan.
- Take your medication as prescribed – Even if your asthma seems to be improving, don't change anything without consulting your doctor. It's a good idea to bring your medications to each doctor visit, so your doctor can confirm that you're taking them correctly and using the right dose.
- Pay attention to increasing quick-relief inhaler use – If you find yourself relying on your quick-relief inhaler, such as albuterol, your asthma isn't under control. Schedule a visit with your doctor to discuss adjusting your treatment.
Asthma treatment is aimed at controlling airway inflammation and avoiding known triggers, such as pet dander and pollen. The main objectives are to restore normal breathing, prevent asthma attacks, and make daily activities possible again. Regular treatment helps prevent symptoms, and inhalers are the preferred delivery method because the drug reaches the lungs directly, in smaller doses and with fewer side effects. Some asthma medicines are given in pill or injection form, too. Take care of your health by following all the required steps and stay happy! |
What Is a Diaphragm on a Microscope?
A diaphragm on a microscope is the piece that enables the user to adjust the amount of light directed up through the specimen being observed. A diaphragm is typically found on higher-power microscopes rather than on less expensive or toy models.
The diaphragm is located directly under the stage, the platform where the user places the specimen or slide. The diaphragm disc, sometimes called an iris, has holes of varying sizes that let different amounts of light through to the specimen. By opening the diaphragm, an item that at first appears too dark becomes easier to observe. Adjusting the diaphragm can also create contrast for better viewing of transparent specimens. |
DEVELOPMENTAL SKILLS: VISUAL PERCEPTION - The Inspired Treehouse
When most people think about visual skills, they think about how well a child can see, or visual acuity. But there are a whole slew of other skills – visual perceptual skills – that help kids make sense of what they see.
Disabilities | Center for Parent Information and Resources
Children with visual impairments can certainly learn and do learn well, but they lack the easy access to visual learning that sighted children have. The enormous amount of learning that takes place via vision must now be achieved using other senses and methods.
Yes You Can! How to encourage your blind child without pushing too hard | WonderBaby.org
Have you ever thought about all the things your child *can't* do because of their disability? It can be depressing, but one way to get past that is to figure out how to make inaccessible activities accessible. It's also important to remember not to push your child into activities they simply don't find interesting.
Easy Activity to Help Kids with Reading, Word Searches, & Visual Scanning - The OT Toolbox
A visual scanning activity that builds fine motor skills and supports the visual scanning needed in many functional tasks, such as reading, word searches, and puzzles. This visual motor activity also creates a fidget toy to help sensory seekers with fidgeting.
Visual Tube - Your Therapy Source
visual perceptual activity, bilateral coordination activity, eye-hand coordination activity, ocular motor control activity, upper extremity stability activity, proprioceptive activity, motor planning activity, and you can keep on analyzing this activity like an occupational therapist... |
Welcome to The 12 Days of Christmas Counting by French Hens (C) Math Worksheet from the Christmas Math Worksheets Page at Math-Drills.com. This math worksheet was created on 2015-12-07 and has been viewed 2 times this week and 9 times this month. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math.
Teachers can use math worksheets as tests, practice assignments or teaching tools (for example in group work, for scaffolding or in a learning center). Parents can work with their children to give them extra practice, to help them learn a new math skill or to keep their skills fresh over school breaks. Students can use math worksheets to master a math skill through practice, in a study group or for peer tutoring.
Use the buttons below to print, open, or download the PDF version of the 12 Days of Christmas Counting by French Hens (C) math worksheet. The size of the PDF file is 57638 bytes. Preview images of the first and second (if there is one) pages are shown. If there are more versions of this worksheet, the other versions will be available below the preview images. For more like this, use the search bar to look for some or all of these keywords: math, Christmas, number, skip, counting, 12, Days, 3.
The Print button initiates your browser's print dialog. The Open button opens the complete PDF file in a new browser tab. The Download button initiates a download of the PDF math worksheet. Teacher versions include both the question page and the answer key. Student versions, if present, include only the question page. |
Leaf spots, shoot blight, and stem cankers.
- Ash (Fraxinus)
- Crabapple (Malus)
- Flowering pear (Pyrus)
- Mountain laurel (Kalmia)
- Lilac (Syringa)
- Rhododendron (Rhododendron)
This disease is found wherever bare-root trees are grown, although development is favored by cool and misty conditions, such as foggy areas along waterways. Rhododendrons grown in containers and fields are also susceptible.
Rain splash from infested standing water spreads the pathogen. Plant tissue wounded by harvest or pruning is at increased risk of disease.
This pathogen survives as spores in soil and in fallen leaves; spores become active during prolonged periods of standing water. |
The amygdala is a small, almond-shaped structure in the brain that plays a key role in the processing of emotions, particularly fear and anxiety. It is part of the limbic system, which is a group of brain structures that are involved in emotional processing and behavior. The amygdala is involved in the initial emotional response to a stimulus, and it helps to activate the body’s fight-or-flight response in response to perceived threats. It is also involved in the formation of long-term emotional memories. Dysfunction of the amygdala has been linked to a number of mental health disorders, including anxiety disorders and post-traumatic stress disorder (PTSD). |
Tagged: first world problems
Age: Teens +
Target language: Vocabulary related to societal problems; so/such, e.g. There is such high unemployment that …/There is so much bureaucracy that…
Time: 30 minutes+
Preparation: None essential
1: Present and practice vocabulary for societal problems such as high unemployment, bureaucracy, corruption, crime rate.
2: Teach the phrases “there is so much ___ that …” and “there is such high ___ that …”
3: Divide the class into groups and have each group choose a country which they represent. Students should know something about this country, so picking obscure countries is not advised. However, it is a good idea to prevent students from picking their own country. Ideally students will pick first world countries, although this is not strictly necessary.
4: Tell students that although many people think their chosen country is rich, it faces a number of challenges too. Consequently, their country is appealing to the UN for international aid. The UN only has the resources to help one of these nations in need. Each nation must therefore make its case, and the most convincing will get the money.
5: Give students a set time to prepare the case for their country. They should consider societal problems you have presented and use the structure that you presented earlier. Creative groups will be able to play with the language here in coming up with first world problems.
6: Have students present their case and take a vote on the most deserving nation.
7: Provide feedback on language errors and upgrades.
S: I know my country of Switzerland projects the image of a strong, wealthy European country. However, this could not be further from the truth. We have a number of problems, and we desperately need this aid to help us. Firstly, there is such a housing shortage in the cities that people are forced to live in beautiful alpine villages. It’s terrible. Secondly, there is so much bureaucracy that you even have to fill in a form when you buy milk.
1: If students lack world knowledge, you could provide information about countries for them to use.
2: This sequence could also be applied to problems in cities, or with a bit of imagination, global problems with the students presenting to an interplanetary organisation to save Earth!
Follow-up: Students imagine that their country got the funding to fix the problems. Not only were the problems fixed, but this aid has really turned the country around. Students write a letter thanking the delegates and outlining the positive changes that have been made.
Like this activity? Find more here. |
The Convention is a human rights agreement developed by the United Nations Organisation (UN) and adopted by the UN in 2006.
The Convention doesn’t contain any new rights.
The Convention is based on rights and freedoms originally expressed in something called the Universal Declaration of Human Rights which was developed by the UN just after World War 2.
The main point of the Convention is to explain the rights of persons affected by disability in a way which helps governments to organise society so that persons with disabilities are included equally in all aspects of life and have equal opportunities to gain an education, go to work, take part in political and social life, and participate in sporting and cultural activities.
The Convention does not define disability as such and is careful to explain that “disability is an evolving concept”. The Convention rejects charity and medical models of disability and instead explains that disability results from a person’s impairments interacting with physical and environmental barriers.
The Convention is all about removing those barriers.
The Convention is one of the most successful human rights agreements ever developed. More than 185 nations have signed to say they agree with the principles of the Convention. Almost 99% of the population of the world live in countries whose governments have signed the Convention.
Most nations that have signed the Convention have also “ratified” the Convention. “Ratified” means that a nation has made a legal promise to keep to the principles of the Convention and agrees to report to the UN every four years on the progress that has been made.
Ratification places additional impetus on governments to realise (achieve) the Convention. It is important because it gives persons with disabilities confidence that their governments are committed to realising and respecting their rights.
Because Guernsey is not a member nation of the UN, it cannot sign or ratify the Convention in its own right. Instead, Guernsey can request that the UK’s ratification be extended to the island. In 2013, the States of Guernsey resolved to seek extension of the UK’s ratification “at the earliest appropriate opportunity”.
The Convention is based on the following basic principles:
- Respect for inherent dignity and individual autonomy, including the freedom to make one’s own choices, and for the independence of persons
- Full and effective participation and inclusion in society
- Respect for difference and acceptance of persons with disabilities as part of human diversity and humanity
- Equality of opportunity
- Equality between men and women
- Respect for the evolving capacities of children with disabilities and respect for the right of children with disabilities to preserve their identities.
Most Articles (clauses) of the Convention can be realised progressively (over time) but there are a few things that governments must take immediate steps to achieve: for example, eliminating discrimination and raising awareness about disability and about the rights of all those affected by disability. |
Many people think of spring as the season for tornadoes, and generally they are right. The strongest tornadoes usually reach their peak of occurrence in May. About the third week of May, Oklahoma and the Texas Panhandle are, on average, the place and time where the world's most violent weather phenomenon is at its fiercest and most common. Back in February, we saw — tragically — that Florida and the Gulf Coast have an early tornado season, though usually not as early as in this year of a strong El Niño. Likewise, June and July bring the threat of tornadoes farther north — to eastern Colorado, for instance, where smaller tornadoes are numerous in early June, and even to the Northeast U.S., where a second peak of frequency for the year comes in July and early August.
Most of us have heard that tornadoes in the Northeast are usually far less powerful than their cousins in the Plains. But what's interesting is that scientists and storm-chasers have learned so much about severe weather in the past few decades that they can now give separate terms to a number of different varieties of wind vortex, including some of the weaker tornadoes. This isn't just a matter of classifying tornadoes from F0 (weakest) to F5 (strongest) on the famous Fujita scale, which relies on the type of damage caused. Different tornadoes and other whirlwinds have different means of formation.
Two kinds of vortex which are visibly different from tornadoes have been known and named for a long time: waterspouts and dust devils. You might think that a waterspout is just a tornado over water. Actually, the true waterspout forms over the water and is typically much weaker than a tornado. In cases where the vortex forms from a severe thunderstorm over land and moves onward to pass over water, it can have a ferocity far exceeding the true waterspout. As for dust devils, these whirls occasionally can be big and strong enough to be dangerous, but their genesis is from localized heating, so their source of energy is limited. The source of most strong tornadoes is the energy of a major weather system concentrated by a supercell thunderstorm and a special rotating region of these storms called a mesocyclone. The mesocyclone might typically be five miles in diameter and produce a tornado up to a mile wide.
Waterspouts and dust devils look very different and occur in very different environments from tornadoes. But some of the new subclasses of tornado would not immediately be differentiated by the layperson. Some of the terms for them are storm-chaser slang, but they make good sense. A landspout is a small, weak tornado which typically doesn't arise from a mesocyclone or a supercell but instead from less severe thunderstorms and other convective clouds. Most of the many tornadoes in eastern Colorado in June are landspouts. A gustnado is a weak tornado formed from the gust front, the line of winds which races out ahead of a thunderstorm.
A cold-air funnel is another generally weak tornado or funnel cloud, which has yet another means of production — relatively cool, comparatively stable conditions. The only major vortices I've ever seen were a group of cold-air funnels in North Dakota. A meteorologist friend and I rode bicycles to within about a mile of the nearest of these funnels before it dissipated. We weren't taking much of a risk, though in rare cases such a vortex might generate winds in roughly the 70 to 100 mph range.
By the way, a funnel cloud is a condensation funnel which is not in contact with the ground. If its end ever touches the ground, it is then classified as a tornado. Not every tornado has a visible funnel, however; in some cases, an observer may see only a whirl of debris down on the surface, but the phenomenon can still be classified as a tornado. It can still be part of a violent storm, and may or may not be something fun to chase after on your bicycle. |
Expressions known as polynomials are used widely in algebra. These expressions are essential to the work of many professionals, including economists, engineers, and scientists. In this chapter, we will find out what polynomials are and how to manipulate them through basic mathematical operations. A few worked examples after the section outline below illustrate the key operations.
- 10.1: Add and Subtract Polynomials
- In this section, we will work with polynomials that have only one variable in each term. The degree of a polynomial and the degree of its terms are determined by the exponents of the variable. Working with polynomials is easier when you list the terms in descending order of degrees. When a polynomial is written this way, it is said to be in standard form. Adding and subtracting polynomials can be thought of as just adding and subtracting like terms.
- 10.2: Use Multiplication Properties of Exponents (Part 1)
- In this section, we will begin working with variable expressions containing exponents. Remember that an exponent indicates repeated multiplication of the same quantity. You have seen that when you combine like terms by adding and subtracting, you need to have the same base with the same exponent. But when you multiply and divide, the exponents may be different, and sometimes the bases may be different, too. We’ll derive the properties of exponents by looking for patterns in several examples.
- 10.3: Use Multiplication Properties of Exponents (Part 2)
- All the exponent properties hold true for any real numbers, but right now we will only use whole number exponents. The product property of exponents allows us to multiply expressions with like bases by adding their exponents together. The power property of exponents states that to raise a power to a power, multiply the exponents. Finally, the product to a power property of exponents describes how raising a product to a power is accomplished by raising each factor to that power.
- 10.4: Multiply Polynomials (Part 1)
- In this section, we will begin multiplying polynomials with degree one, two, and/or three. Just like there are different ways to represent multiplication of numbers, there are several methods that can be used to multiply a polynomial by another polynomial. The Distributive Property is the first method that you have already encountered and used to find the product of any two polynomials.
- 10.5: Multiply Polynomials (Part 2)
- The FOIL method is usually the quickest method for multiplying two binomials, but it works only for binomials. When you multiply a binomial by a binomial you get four terms. Sometimes you can combine like terms to get a trinomial, but sometimes there are no like terms to combine. Another method that works for all polynomials is the Vertical Method. It is very much like the method you use to multiply whole numbers.
- 10.6: Divide Monomials (Part 1)
- In this section, we will look at the exponent properties for division. A special case of the Quotient Property is when the exponents of the numerator and denominator are equal. It leads us to the definition of the zero exponent, which states that if a is a non-zero number, then a^0 = 1. Any nonzero number raised to the zero power is 1. The quotient to a power property of exponents states that to raise a fraction to a power, you raise the numerator and denominator to that power.
- 10.7: Divide Monomials (Part 2)
- We have now seen all the properties of exponents. We'll use them to divide monomials. Later, you'll use them to divide polynomials. When we divide monomials with more than one variable, we write one fraction for each variable. Once you become familiar with the process and have practiced it step by step several times, you may be able to simplify a fraction in one step.
- 10.8: Integer Exponents and Scientific Notation (Part 1)
- The negative exponent tells us to re-write the expression by taking the reciprocal of the base and then changing the sign of the exponent. Any expression that has negative exponents is not considered to be in simplest form. We will use the definition of a negative exponent and other properties of exponents to write an expression with only positive exponents.
- 10.9: Integer Exponents and Scientific Notation (Part 2)
- When a number is written as a product of two numbers, where the first factor is a number greater than or equal to one but less than 10, and the second factor is a power of 10 written in exponential form, it is said to be in scientific notation. It is customary to use × as the multiplication sign, even though we avoid using this sign elsewhere in algebra. Scientific notation is a useful way of writing very large or very small numbers. It is used often in the sciences to make calculations easier.
- 10.10: Introduction to Factoring Polynomials
- Earlier we multiplied factors together to get a product. Now, we will be reversing this process; we will start with a product and then break it down into its factors. Splitting a product into factors is called factoring. In The Language of Algebra we factored numbers to find the least common multiple (LCM) of two or more numbers. Now we will factor expressions and find the greatest common factor of two or more expressions. The method we use is similar to what we used to find the LCM.
Figure 10.1 - The paths of rockets are calculated using polynomials. (credit: NASA, Public Domain)
Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/[email protected]." |
Two galaxies swing past each other in a cosmic dance choreographed by gravity, 300m light years from Earth in the constellation of Leo.
This image, taken by the Hubble Space Telescope, reveals in unprecedented detail the bright regions of star formation, interstellar gas clouds and prominent dust arms that spiral out from the galaxies' centres.
The larger galaxy on the right is seen nearly face-on, with a giant arm of stars, dust and gas reaching out and around its smaller neighbour, which is viewed edge-on.
The shapes of both galaxies have been distorted by their gravitational interaction with one another.
The pair are known collectively as Arp 87, and are just one celestial coupling among hundreds of interacting and merging galaxies known in the nearby universe.
Arp 87 was first discovered and catalogued by the astronomer Halton Arp in the 1970s, and was described in Arp's Atlas of Peculiar Galaxies.
The Hubble image, a composite of red, blue, green and infra-red exposures, was taken using the telescope's wide field planetary camera 2.
It shows a corkscrewing bridge of material spanning from one galaxy to the other, suggesting stars and gas are being drawn from the larger galaxy into the gravitational pull of the smaller one.
Interacting galaxies are often hosts to the highest levels of star formation found anywhere in the nearby universe. |
Babies just a few days old can already identify a rhythmic pattern, and their brains show surprise when the music skips a beat, according to a new study. Researchers played recordings that used high-hat cymbals, snare drums, and bass drums to make a funky little beat while monitoring the infants' brain activity with non-invasive electroencephalogram brain scanners, and found that newborns respond to a skipped beat in the same way that adults do.
The ability to follow a beat is called beat induction. Neither chimpanzees nor bonobos — our closest primate relatives — are capable of beat induction, which is considered both a uniquely human trait and a cognitive building block of music. Researchers have debated whether this is inborn or learned during the first few months of life, calibrated by the rocking arms and lullabies of parents [Wired News]. While the researchers who conducted the new study say their findings are evidence that beat induction is innate, others argue that the newborns could have already learned to identify rhythmic patterns by listening to their mothers’ heartbeats while in the womb.
In the study, reported in the Proceedings of the National Academy of Sciences, 14 sleeping newborns were exposed to repeated recordings of a rock drum accompaniment pattern and to four variations of that pattern. Babies were usually exposed to patterns with a downbeat. On rare occasions, the downbeat was missing. Of the 306 consecutive drum sequences presented to newborns, one in 10 lacked a downbeat. Each newborn wore scalp electrodes during the study. Drum sequences missing a downbeat elicited a signature, split-second brain response that has been linked in adults to the violation of one’s expectations [Science News].
Lead researcher István Winkler says the findings suggest that a rhythmic sensibility is very important for infants’ brain development, and says it may help them respond to the rhythmic and repetitive baby talk that lays the foundation of all future language learning. Therefore, evolution may have favored brains wired to rock for learning purposes, said Winkler, and “music went along for the ride” [LiveScience].
Image: flickr / One*mandarino |
How To Help Your Children Eat Healthy
Healthy eating begins during infancy and continues into adulthood. Teaching your child how to eat a healthy diet now builds a solid foundation for them as an adult.
Tips To Promote Healthy Eating
- For the first four to six months of life, feed your baby only breast milk or formula.
- Babies are ready for solid food when their birth weight has doubled, they can control their head and neck, they can sit up with support, they show interest in the food you eat, and they can show you that they’re full by turning their head away or refusing to open their mouth.
- Don’t serve honey to children under 1 year of age. Infant botulism is a very serious nerve and muscle illness that can be caused by eating honey.
- Variety is the key. If you offer a large variety of healthy foods, your child will grow to enjoy many of them.
- For toddlers, offer foods with a variety of colors and textures. Cut food into interesting shapes and arrange them attractively on the plate.
- Warm food is more appealing to toddlers than hot food.
- Avoid forcing children to eat unwanted food.
- Place small amounts of food on their plate. They can ask for more.
- Serve snacks at the same time every day and space them so they won’t interfere with meals.
- Healthy snack foods include: fruits and fruit juices, raw vegetables served with fat-free dressing, cereal, yogurt, cheese and soup.
- If you don’t want your child to eat junk food, keep it out of the house.
- Choose cookies and other desserts such as fig bars and oatmeal raisin cookies that are low in fat and sugar and contain nutrients.
- Respect your child’s ability to decide how much to eat.
- Be a good role model for nutritious eating.
- Avoid using food as a reward. This may give your child the message that they can reward themselves with food, and it may lead to a lifetime of using food as consolation.
- Encourage your children to eat only when they are hungry. |
Black bears can be found from northern Alaska east across Canada to Labrador and Newfoundland, and south through much of Alaska, virtually all of Canada, and most of the U.S. into central Mexico (Nayarit and Tamaulipas states). (Lariviere, 2001)
Throughout their range, prime black bear habitat is characterized by relatively inaccessible terrain, thick understory vegetation, and abundant sources of food in the form of shrub- or tree-borne soft or hard mast. In the southwest, prime black bear habitat is restricted to vegetated, mountainous areas ranging from 900 to 3,000 m in elevation. Habitats consist mostly of chaparral and pinyon-juniper woodland sites. Bears occasionally move out of the chaparral into more open sites and feed on prickly pear cactus. There are at least two distinct prime habitat types in the southeast. Black bears in the southern Appalachian Mountains survive in a predominantly oak-hickory and mixed mesophytic forest. In the coastal areas of the southeast, bears inhabit a mixture of flatwoods, bays, and swampy hardwood sites. In the northeast, prime habitat consists of a forest canopy of hardwoods such as beech, maple, and birch, and coniferous species. Swampy habitat areas are mainly white cedar. Corn crops and oak-hickory mast are also common sources of food in some sections of the northeast; small, thick swampy areas provide excellent refuge cover. Along the Pacific coast, redwood, sitka spruce, and hemlocks predominate as overstory cover. Within these forest types are early successional areas important for black bears, such as brushfields, wet and dry meadows, high tidelands, riparian areas, and a variety of mast-producing hardwood species. The spruce-fir forest dominates much of the range of the black bear in the Rockies. Important nonforested areas are wet meadows, riparian areas, avalanche chutes, roadsides, burns, sidehill parks, and subalpine ridgetops. (Lariviere, 2001)
Black bears are usually black in color, particularly in eastern North America. They usually have a pale muzzle which contrasts with their darker fur and may sometimes have a white chest spot. Western populations are usually lighter in color, being more often brown, cinnamon, or blonde. Some populations in coastal British Columbia and Alaska are creamy white or bluish gray. Total body length in males ranges from 1400 to 2000 mm, and from 1200 to 1600 mm in females. Tail length ranges from 80 to 140 mm. Males weigh between 47 and 409 kg, females weigh between 39 and 236 kg. The distance between the canine teeth is about 4.5 to 5 cm.
Black bears are distinguished from grizzly or brown bears (Ursus arctos) by their longer, less heavily furred ears, smaller shoulder humps, and a convex, rather than concave, profile. (Lariviere, 2001)
Males and females meet temporarily for mating when females are in estrus. Male home ranges overlap with those of several females. (Lariviere, 2001)
The sexes coexist briefly during the mating season, which generally peaks from June to mid-July. Females remain in estrus throughout the season until they mate. They usually give birth every other year, but sometimes wait 3 or 4 years. Pregnancy generally lasts about 220 days, but this includes a delayed implantation. The fertilized eggs are not implanted in the uterus until the autumn, and embryonic development occurs only in the last 10 weeks of pregnancy. Births occur mainly in January and February, commonly while the female is hibernating. The number of young per litter ranges from one to five and is usually two or three. At birth the young weigh 200 to 450 grams each, the smallest young relative to adult size of any placental mammal. They are born naked and blind. Black bear cubs remain in the den with their torpid mother and nurse throughout the winter. When the family emerges in the spring the cubs weigh between 2 and 5 kg. They are usually weaned at around 6 to 8 months of age, but remain with the mother and den with her during their second winter of life, until they are about 17 months old. At this time the female is coming into estrus and forces the young out of her territory. They may weigh between 7 and 49 kg at this point, depending on food supplies.
Females reach sexual maturity at from 2 to 9 years old, and have cubs every other year after maturing. Males reach sexual maturity at 3 to 4 years old but continue to grow until they are 10 to 12 years old, at which point they are large enough to dominate younger bears without fighting. (Lariviere, 2001)
Black bear cubs remain in the den with their sleeping mother and nurse throughout the winter. When the family emerges in the spring the cubs weigh between 2 and 5 kg. They are usually weaned at around 6 to 8 months of age, but remain with the mother and den with her during their second winter of life, until they are about 17 months old. At this time the mother forces the young out of her territory. They may weigh between 7 and 49 kg at this point, depending on food supplies. Black bear mothers care for their young and teach them necessary life skills throughout the time that their cubs are with them.
Male black bears do not contribute directly to their offspring but do indirectly by preventing new males from moving into the area. This makes it less likely for the young or mother to encounter an aggressive male or have to compete with new bears for food. (Lariviere, 2001)
Black bears can live to 30 years in the wild but most often live for only about 10, mostly because of encounters with humans. More than 90% of black bear deaths after the age of 18 months are the result of gunshots, trapping, motor vehicle accidents, or other interactions with humans. (Lariviere, 2001)
Black bears are generally crepuscular, although breeding and feeding activities may alter this pattern seasonally. Where human food or garbage is available, individuals may become distinctly diurnal (on roadsides) or nocturnal (in campgrounds). Nuisance activities are usually associated with sources of artificial food and the very opportunistic feeding behaviors of black bears. During periods of inactivity, black bears utilize bed sites in forest habitat; these sites generally consist of a simple shallow depression in the forest leaf litter. Black bears are normally solitary animals except for family groups (an adult female and her cubs), breeding pairs in summer, and congregations at feeding sites. In areas where food sources are aggregated, large numbers of bears congregate and form social hierarchies, including non-related animals of the same sex that travel and play together.
The highly evolved family behavioral relationships probably are the result of the slow maturation of cubs and the high degree of learning associated with obtaining food and navigating through large territories. Black bears possess a high level of intelligence and exhibit a high degree of curiosity and exploratory behaviors. Although black bears are generally characterized as shy and secretive animals toward humans, they exhibit a much wider array of intraspecific and interspecific behaviors than originally thought. Black bears have extraordinary navigational abilities which are poorly understood. (Lariviere, 2001)
Territories are established by adult females during the summer. Temporal spacing is exhibited by individuals at other times of the year and is likely maintained through a dominance hierarchy system. Males establish territories that are large enough to obtain food and overlap with the ranges of several females. (Lariviere, 2001)
Black bears communicate with body and facial expressions, sounds, touch, and through scent marking. Scent marks advertise territory boundaries to other bears. Black bears have a keen sense of smell. (Lariviere, 2001)
Throughout their range in North America, black bears consume primarily grasses and forbs in spring, soft mast in the form of shrub and tree-borne fruits in summer, and a mixture of hard and soft mast in fall. However, the availability of different food types varies regionally. Only a small portion of the diet of bears consists of animal matter, and then primarily in the form of colonial insects and beetles. Most vertebrates are consumed in the form of carrion. Black bears are not active predators and feed on vertebrates only if the opportunity exists.
The diet of black bears is high in carbohydrates and low in proteins and fats. Consequently, they generally prefer foods with high protein or fat content, thus their propensity for the food and garbage of people. Bears feeding on a protein-rich food source show significant weight gains and enhanced fecundity. Spring, after black bears emerge from winter dens, is a period of relative food scarcity. Bears tend to lose weight during this period and continue to subsist partly off of body fat stored during the preceding fall. They take advantage of any succulent and protein- rich foods available; however, these are not typically in sufficient quantity to maintain body weight. As summer approaches, a variety of berry crops become available. Summer is generally a period of abundant and diverse foods for black bears, enabling them to recover from the energy deficits of winter and spring. Black bears accumulate large fat reserves during the fall, primarily from fruits, nuts, and acorns. (Lariviere, 2001)
Black bear cubs may be at risk of being killed by large predators, such as wolves and mountain lions. However, most black bears that are killed, both young and adults, are killed by humans. (Lariviere, 2001)
Black bears are important in ecosystems because of their effects on populations of insects and fruits. They help to disperse the seeds of the plants they eat and consume large numbers of colonial insects and moth larvae. They sometimes take small and large mammals as prey, such as rabbits and deer. (Lariviere, 2001)
People have intensively hunted black bears, for trophy value and for various products, including hides for clothes or rugs and meat and fat for food. In most of the states and provinces occupied by black bears, they are treated as game animals, subject to regulated hunting. An estimated 30,000 individuals are killed annually in North America. Relatively few skins go to market now, as regulations sometimes forbid commerce and there is no great demand.
Medical research on the metabolic pathways that black bears use to survive long period of torpor is yielding new insight into treatments for kidney failure, gallstones, severe burns, and other illnesses. (Lariviere, 2001)
Black bears have been known to occasionally raid livestock, though losses to bears are negligible. Bears sometimes damage cornfields, and berry and honey production. Some bears have become troublesome around camps and cabins if food is left in their reach. Black bears have severely injured and sometimes even killed campers or travelers who feed them. However, the danger associated with black bears is sometimes overstated; fewer than 36 human deaths resulted from black bear encounters in the 20th century. Black bears are generally very timid and, unlike grizzly bear females, black bear mothers with cubs are unlikely to attack people. When black bear mothers confront humans, they typically send their cubs up a tree and retreat or bluff. People who live in or visit areas with black bears should be aware of the appropriate precautions for avoiding black bear encounters. (Lariviere, 2001; Northwest Territories: Resources, Wildlife, and Economic Development Division, August 27, 2001)
Black bears once lived throughout most of North America, but hunting and agriculture drove them into heavily forested areas. Residual populations survive over much of the range in sparsely populated wooded regions and under protection in national parks. They are numerous and thriving, but continue to face threats regionally due to habitat destruction and hunting. Black bears appear in CITES appendix II. (Lariviere, 2001)
Tanya Dewey (author), Animal Diversity Web.
Christine Kronk (author), University of Michigan-Ann Arbor.
living in the Nearctic biogeographic province, the northern part of the New World. This includes Greenland, the Canadian Arctic islands, and all of the North American as far south as the highlands of central Mexico.
uses sound to communicate
young are born in a relatively underdeveloped state; they are unable to feed or care for themselves or locomote independently for a period of time after birth/hatching. In birds, naked and helpless after hatching.
having body symmetry such that the animal can be divided in one plane into two mirror-image halves. Animals with bilateral symmetry have dorsal and ventral sides, as well as anterior and posterior ends. Synapomorphy of the Bilateria.
flesh of dead animals.
uses smells or other chemicals to communicate
active at dawn and dusk
having markings, coloration, shapes, or other features that cause an animal to be camouflaged in its natural environment; being difficult to see or otherwise detect.
in mammals, a condition in which a fertilized egg reaches the uterus but delays its implantation in the uterine lining, sometimes for several months.
animals that use metabolically generated heat to regulate body temperature independently of ambient temperature. Endothermy is a synapomorphy of the Mammalia, although it may have arisen in a (now extinct) synapsid ancestor; the fossil record does not distinguish these possibilities. Convergent in birds.
union of egg and spermatozoan
forest biomes are dominated by trees, otherwise forest biomes can vary widely in amount of precipitation and seasonality.
having a body temperature that fluctuates with that of the immediate environment; having no mechanism or a poorly developed mechanism for regulating internal body temperature.
the state that some animals enter during winter in which normal physiological processes are significantly reduced, thus lowering the animal's energy requirements. The act or condition of passing winter in a torpid or resting state, typically involving the abandonment of homoiothermy in mammals.
offspring are produced in more than one group (litters, clutches, etc.) and across multiple seasons (or other periods hospitable to reproduction). Iteroparous animals must, by definition, survive over multiple seasons (or periodic condition changes).
having the capacity to move from one place to another.
the area in which the animal is naturally found, the region in which it is endemic.
generally wanders from place to place, usually within a well-defined range.
an animal that mainly eats all kinds of things, including plants and animals
having more than one female as a mate at one time
Referring to something living or located adjacent to a waterbody (usually, but not always, a river or stream).
scrub forests develop in areas that experience dry seasons.
breeding is confined to a particular season
remains in the same area
reproduction that includes combining the genetic contribution of two individuals, a male and a female
uses touch to communicate
that region of the Earth between 23.5 degrees North and 60 degrees North (between the Tropic of Cancer and the Arctic Circle) and between 23.5 degrees South and 60 degrees South (between the Tropic of Capricorn and the Antarctic Circle).
Living on the ground.
defends an area within the home range, occupied by a single animals or group of animals of the same species and held through overt defense, display, or advertisement
uses sight to communicate
reproduction in which fertilization and development take place within the female body and the developing embryo derives nourishment from the female.
Academic American Encyclopedia. 1994. Grolier Incorporated, Danbury, CT.
Collier's Encyclopedia. 1993. Collier Incorporated, New York, NY.
Encyclopedia Americana. 1994. Grolier Incorporated, Danbury, CT.
Ewer, R.F. 1973. The Carnivores. Cornell University Press, Ithaca, NY.
Nowak, R.M., and J.L. Paradiso. 1983. Walker's Mammals of the World, 4th Ed. Johns Hopkins University Press, Baltimore, MD.
Chapman, J.A., and G.A. Feldhamer. 1982. Wild Mammals of North America. Johns Hopkins University Press, Baltimore, MD.
World Book Encyclopedia. 1994. World Book Incorporated, Chicago, IL.
Lariviere, S. 2001. Ursus americanus. Mammalian Species, 647: 1-11. Accessed September 02, 2006 at http://www.science.smith.edu/departments/Biology/VHAYSSEN/msi/default.html.
Northwest Territories: Resources, Wildlife, and Economic Development Division, August 27, 2001. "Encountering Bears" (On-line). Accessed August 28, 2002 at http://www.nwtwildlife.rwed.gov.nt.ca/Publications/safetyinbearcountry/encounters.htm. |
On the 2nd of May, Generation Science staff led P6/7M in a “Power from the People” interactive workshop. During the workshop, they explored what electricity is and how it is produced.
P6/7M became electric scientists to discover how they could produce electricity from the movement of their bodies. Pupils explored the principles of electricity using circuits, flowing electrons, magnets and capacitors. As a final challenge, the children worked in small groups to assemble their own generator and produce electricity from their own movement. This was a great way to continue our learning about energy and electricity.
Can your children explain what is happening in each photo? |
Radioactive material is quite scary. Even someone who does not really know what it is finds the word radioactive frightening, because most people understand its connotations.
Radioactive matter consists of unstable atoms that release energy as they decay. It causes extreme effects when living beings are exposed to it: it can kill plants, animals, and people in an instant, or at minimum cause serious health issues and diseases. It is serious stuff, which is why radioactive pollution prevention is a serious issue.
Radioactive Pollution and Its Causes
Radioactive pollution is caused by anything that releases radioactive matter. This includes nuclear power plants, whose operation produces radioactive waste. The power itself is not radioactive, which is a common misconception; it is the waste that is radioactive and harmful. Nuclear power is actually very clean and harmless.
Pollution can also come from the transportation of radioactive materials and from uranium mining, which releases radioactive matter during the mining process itself.
Three Types of Radioactive Pollution
The three types of radioactive pollution range from mild to severe in their effects. While no form of radioactive exposure is safe, these levels help you see the hazards of radioactive pollution and what can happen from exposure.
Alpha matter is the lowest level. It is easily blocked; if it is the form of radioactive pollution present, extra protection is not needed, as it can be stopped by the skin alone.
Beta matter is the next level of radioactive pollution. It can penetrate the skin, so protection is needed; some glass and metal can block beta radiation.
The highest and most dangerous level is gamma matter. Gamma radiation is extremely difficult to block: it may be stopped only by a very thick layer of dense material such as concrete or lead, and any exposure can lead to serious health effects and possibly death.
Prevention of Radioactive Pollution
Radioactive pollution prevention is an obviously important task. It simply cannot fall by the wayside, because protection against this harmful substance is not optional but mandatory.
Exposure to radioactive materials is severe and carries great consequences, so prevention is the only way to go. Once exposure occurs, there is no going back.
The most effective form of radioactive pollution prevention is legislation. The government carefully monitors any activity involving radioactive materials, and special licenses are needed to work with them. A tight watch is kept on the whole industry to make sure there is no accidental release of radioactive pollution.
It is only through prevention that safety can be assured. Regulations and other standards are put in place to make sure there is no way for radioactive matter to be released. |