A telecommunications system is a collection of nodes and links that enables communication over a distance, using electrical signals or electromagnetic waves. Examples of telecommunications systems include the telephone network, the radio broadcasting system, computer networks and the Internet. The nodes in the system are the devices we use to communicate with, such as a telephone or a computer. In its most fundamental form, a telecommunications system includes a transmitter to take information and convert it to a signal, a transmission medium to carry the signal, and a receiver to take the signal and convert it back into usable information. This applies to any communication system, whether or not it uses computers.

Most modern telecommunications systems are best described in terms of a network. This includes the basic elements listed above but also the infrastructure and controls needed to support the system. There are six basic components of a telecommunications network:
1. Input and output devices, also referred to as 'terminals'. These provide the starting and stopping points of all communication. A telephone is an example of a terminal. In computer networks, these devices are commonly referred to as 'nodes' and consist of computers and peripheral devices.
2. Telecommunication channels, which transmit and receive data. These include various types of cables and wireless radio frequencies.
3. Telecommunication processors, which provide a number of control and support functions. For example, in many systems data needs to be converted between analog and digital form.
4. Control software, which is responsible for controlling the functionality and activities of the network.
5. Messages, the actual data being transmitted. In the case of a telephone network, the messages would consist of audio as well as data.
6. Protocols, which specify how each type of telecommunications system handles messages. For example, GSM and 3G are protocols for mobile phone communications, and TCP/IP is a protocol for communications over the Internet.

While early telecommunication systems were built without computers, almost all systems we use today are computerized in some way.

A discussion of modern communications systems is not complete without mentioning 5G. In the initial rollout, 5G speeds have ranged from roughly 50 Mbit/s to over 2 Gbit/s, and speeds are expected to grow to as much as 100 Gbit/s, about 100 times faster than 4G. Latency will also be greatly reduced. There are different kinds of latency: "air latency" in equipment shipping in 2019 is 8–12 milliseconds, and the latency to the server must be added to it. Verizon reports that the latency on its early 5G deployment is 30 ms. The main 5G carriers in the U.S. are Verizon, T-Mobile, and Sprint (the latter two of which the Justice Department has approved to merge). Sub-6 GHz base station cells and antennas are prominent across the U.S. Deployments have also taken place in South Korea, including 38,000 base stations. Actual 5G speeds across different modulations are discussed at https://www.digitaltrends.com/mobile/5g-vs-4g/.

A computer network is a system of computers and peripheral devices that are connected electronically. These connected computers can communicate with each other, which means that they can share information. Each computer has its own network address, so it can be uniquely identified among all the computers in a network.
Computer networks are able to carry different types of data and support different applications. Computers are connected using a number of different types of communication channels, both wired and wireless. Wired connections consist of an actual physical cable, such as copper wire or fiber optics. Wireless connections do not use a physical cable but transfer data using waves in a particular part of the electromagnetic spectrum.

Why do we need a computer network? Transferring files between individual computers can be accomplished using physical media, such as DVDs or external hard drives, but a computer network makes it possible to transfer data between computers without having to use physical media. Some of the advantages of computer networks include:
- File sharing
- Internet connection sharing
- Sharing of peripheral devices
- Improved cost efficiency
- Increased storage capacity

The network itself can also carry out tasks that are difficult for any single computer to do. These network services have become increasingly important as many different types of devices are connected to each other.
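To make the idea of nodes, network addresses and protocols concrete, here is a minimal sketch in Python (not part of the original text): two endpoints on one machine exchange a message over TCP/IP. The host, port, and message are arbitrary example values.

```python
# Minimal sketch: one node acts as a receiver, another as a transmitter,
# exchanging a message over TCP/IP on the local machine.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and an arbitrary example port
ready = threading.Event()

def run_server():
    # Receiver node: bind to an address, accept a connection, echo the message back.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                 # signal that the server is listening
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

server = threading.Thread(target=run_server)
server.start()
ready.wait()

# Transmitter node: connect to the server's address and send a message.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over TCP/IP")
    print(cli.recv(1024).decode())  # -> echo: hello over TCP/IP

server.join()
```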
Since young children often show creative mathematical thinking when solving problems, we are inviting teachers of 3 to 5 year olds to share children's responses to NRICH activities, as teachers of older children currently do. Young children may reveal their thinking in different ways when solving problems, by what they say, do or record. So examples may be in the form of written observations and annotations of photos, children's recording or even videos (but please avoid gdocs format).

Written observations of children's comments and actions

What young children say can be very revealing: for instance, in the article The Value of Two, Ruth Trundley tells the story of taking three year old Alice to the swimming pool: We establish that there are four toilets in the changing room. When we get into the changing room Alice looks at the door for one toilet and asks, "Where are the other three?" This question shows a lot more about Alice's understanding of the 'three plus one-ness' of four than any recording she might have done at that age.

Photos and Video

Sometimes photos capture children's problem solving more effectively: the boys' satisfaction in their solution to the problem of parking 10 cars in two car parks is evident here. Of course you need to check you have permission to send us photos. The activity Maths story time presents the problem of pirate panda who refuses to share his gold coins with the other pirates. The problem gets interesting when another character comes along and there is a remainder. Three to five year olds will readily give advice! Often we need to observe children's actions when solving problems to understand their thinking; for instance, when sharing, do they 'deal' items one by one or give everyone a specific amount straightaway? A nursery child's creative solution to the problem is shown in this video: https://www.youtube.com/watch?v=1zguAec3AaE You need to see the solution to appreciate it!

Video can be the best way of capturing the problem solving processes children go through. For instance, children can be seen refining strategies by using trial and improvement. Videos can also form a useful basis for conversations with children: replaying and discussing a video can help them to reflect on their thinking and learning. Of course, parents or carers must give permission before videos can be shared.

Older children's creative solutions may be shown by their recording, especially when accompanied by explanations. One reception child invented the solution of the pirates taking turns to have the remainder: 'Pirate Sally gets it first, that's why she has 5'. She also offers an alternative solution: 'If there were two more coins then everyone could have the same'. This is good evidence for the EYFS Characteristics of Effective Learning: Creating and thinking critically – choosing ways to do things and finding new ways (Standards & Testing Agency, 2014). It also shows an effective combination of verbal explanation and pictures to support a number sentence. Not many children of this age will write standard equations.

Young children often show creativity in the way that they combine conventions of writing, drawing and using symbols to express their thinking. In the example below the child uses speech bubbles to show pictorially the amount each pirate receives, with the remainder given to a non-pirate friend: 'He's saying 4 coins each, that's why I've given them speech bubbles. The pirate without a hat isn't a pirate. They've given the other coin to their friend.
The number 13 is for all the coins'. In this inventive recording the solution is graphically displayed and only the total is expressed in numerals. The teacher's notes are crucial in providing the child's explanation, and together these records give insights into the child's thinking. More examples of young children's creative mathematical expression are in Janine Davenall's article, Young Children's Mathematical Recording, which shows children using arrows and hands to record subtraction problems, as well as a variety of emergent number sentences. Younger children can also record, given coloured pens and large sheets of paper! Number rhymes provide a ready source of simple problems: see the NRICH task Number Rhymes for more ideas. Below are some examples of nursery children showing their solutions to the problem of how many of the five little speckled frogs might be on the log or in the pool. These also show creative use of a combination of drawing, symbols and mark making to express the children's thinking. Please send your examples of children's thinking to [email protected]

EYFS maths pedagogy work group, Bucks, Berks & Oxon Hub 2017; Fiona O'Shea, Milton CoE Primary School; Helen Thouless; Rachel Fleming, Headington Prep School.

Standards and Testing Agency (2014) Early Years Foundation Stage Handbook
Instead of flash drives, the latest generation of smartphones uses materials that change physical states, or phases, to store and retrieve data faster, in less space and with more energy efficiency. When hit with a pulse of electricity or optical light, these materials switch between glassy and crystalline states that represent the 0s and 1s of the binary code used to store information. Now scientists have discovered how those phase changes occur on an atomic level. Researchers from European XFEL and the University of Duisburg-Essen in Germany, working in collaboration with researchers at the Department of Energy's SLAC National Accelerator Laboratory, led X-ray laser experiments at SLAC that collected more than 10,000 snapshots of phase-change materials transforming from a glassy to a crystalline state in real time.

(Image: The research team after performing experiments at SLAC's Linac Coherent Light Source X-ray laser. Credit: Klaus Sokolowski-Tinten/University of Duisburg-Essen)

Please also read the article published on the EUXFEL website: Rigid bonds enable new data storage technology.
The key to successful learning lies in a learner's knowledge of various strategies, how they can be used, and when and why to employ them. Success also depends on the self-regulatory skills of planning, monitoring, and evaluating learning. Metacognitive awareness encompasses these two components: 1) knowledge of cognition and 2) regulation of cognition.

Knowledge of cognition refers to what learners know and understand about the way they learn. It includes declarative knowledge about the factors that influence performance (e.g. knowing one's capacity limitations). It includes procedural knowledge about how to execute different procedures (e.g. how to chunk and categorize new information). And finally, it includes conditional knowledge about when and why to apply various cognitive strategies (e.g. knowing when and why to create or use a mnemonic device) (Schraw & Moshman, 1995).

Regulation of cognition refers to how well learners can regulate, and therefore adjust or correct, their learning. These sets of activities include planning (e.g. allocating appropriate amounts of time and resources to learning). It also includes monitoring one's comprehension and task performance (e.g. engaging in self-testing). And finally, it includes evaluating (e.g. appraising products and outcomes of one's learning) (Schraw & Moshman, 1995).

Metacognition is a trait that distinguishes expert from novice learners (Bransford, Brown, and Cocking, 2000). Students who are metacognitive are able to consciously focus attention on important information, accurately judge how well they understand something, use intellectual strengths to compensate for weaknesses, and employ fix-up strategies to correct errors. Most importantly, they are their own best self-assessors; this is what is referred to as assessment as learning. Unfortunately, not every student possesses these traits. Many do not create plans for approaching learning tasks, struggle when they get confused, and are unaware of the purposes strategies serve in learning. Therefore, there is a need to help students know themselves as learners, be able to recognize when they have or have not acquired sufficient understanding, identify what needs improving and how to improve it, and reflect on the efficiency of the processes and strategies used. Teachers can help students develop self-regulation as a "powerful mechanism for improving learning" (Hattie, 2012, p. 161).

When my friend and colleague Chris Hickman introduced metacognition to his grade 5 class, one student exclaimed, "You're blowing my mind!" Thinking about thinking is a novel concept to some learners. We can help students become more metacognitive – learning how to learn can be taught. How do we promote metacognition in the classroom? Below are three suggestions that teachers can put into practice to encourage metacognitive habits of mind.

#1. Explicitly teach students study skills. One of the best things we can do for students is to help them uncover strategies that lead to successful learning. In his synthesis of the factors relating to achievement, Hattie (2009) classified study skills as cognitive, metacognitive, and affective. Examples of cognitive interventions include note taking and summarization – where the focus is on task-related skills. Metacognitive interventions include self-regulatory skills of planning, monitoring, and evaluating (e.g. setting goals, estimating and budgeting use of time, tracking performance, setting standards and using them to self-assess).
Affective interventions focus on motivation and self-concept. Hattie (2012) noted how important it is "to understand a student's strategies for thinking, so that he or she can be helped to advance his or her thinking" (p. 38) and suggested that teachers can become more aware of the levels at which students process information by listening to students as they think aloud. These metacognitive study skills (self-verbalization and self-questioning) have an effect size of 0.64 (Hattie, 2009). Hattie (2012) also noted that "it is not feasible to teach self-regulation outside the content domains" (p. 102). In order to have an effect on deeper levels of understanding, it is necessary to combine study skills with the content.

#2. Structure opportunities for students to evaluate the effectiveness of strategies. One important step in explicitly teaching students study skills is ensuring that they understand the utility and significance of using various strategies. Studies show that better learning results when students are provided with explanations of the reasons why various strategies aid in understanding (Duffy et al., 1987). Paris, Newman, and McVey (1982) found that when students receive explicit instruction regarding the usefulness of a strategy and feedback on their use of it, they behave differently – maintaining higher levels of effective strategy use and decreasing ineffective learning behaviours. Schraw (1998) suggested a Strategy Evaluation Matrix (see Figure 1) to aid in promoting explicit declarative, procedural, and conditional knowledge about different strategies. Schraw (1998) also suggested that students complete, share, discuss, and revise their SEMs as a way to promote strategy use and metacognitive awareness.

#3. Target feedback at students' appropriate instructional level. Hattie and Timperley (2007) designed a structure for giving feedback that identifies properties and circumstances that make feedback effective. The four major levels (self, task, process, and self-regulation) are described below.

Self – When feedback is directed to the 'self' (e.g. "You did a great job!"), it is unrelated to the student's performance on the task.

Task – When feedback is about the task or product, such as whether work is correct or incorrect, it may include directions to acquire more, different, or correct information (e.g. "You need to include more about the Battle of Vimy Ridge").

Process – The third level of feedback, the process level, is more directly aimed at the processing of information or the learning processes required for understanding (e.g. "You need to edit this piece of writing by attending to the topic sentences and paragraphs so that the piece has a better flow").

Self-Regulation – Finally, feedback to students can be focused at the self-regulation level, including greater skill in self-evaluation (e.g. "I think you should refer to the last time you completed a plot diagram. What did you need to change then?").

Hattie and Timperley (2007) note that feedback at the self-regulation level can have major influences on self-efficacy, self-regulatory proficiencies, and students' beliefs about themselves as learners, such that students are encouraged or informed how to better and more effortlessly continue on the task.
Hattie and Timperley (2007) emphasized that the effectiveness of feedback is directly influenced by the level at which it is directed, noting that:
- if a student is a novice at something, his/her feedback should be at the task level;
- if a student has some degree of proficiency, his/her feedback should be at the process level;
- if a student has a high degree of proficiency, his/her feedback should be at the self-regulation level.

To promote regulation of cognition, teachers can mindfully target feedback to students' appropriate instructional level. This further underscores how important it is for teachers to listen to students' thinking aloud, as it provides a way to determine which level to target feedback at in order to ensure its effectiveness.

The underlying idea in promoting metacognition is to develop independent, self-regulated learners. Hattie (2012) noted that "the 'learning' aim of any set of lessons is to get students to learn the skills of teaching themselves the content and understanding – that is, to self-regulate their learning. This requires helping students to develop multiple strategies of learning, and to realize why they need to invest in deliberate practice and concentration on the learning" (p. 96).

As you consider embedding metacognitive activities in the daily process of teaching, keep the following in mind:
#1. Do not treat metacognitive activities as separate or isolated activities. They are best taught in context across all disciplines.
#2. Appreciate what is required to achieve effective strategy instruction. It takes time.

Bransford, J., Brown, A., & Cocking, R. (2000). How People Learn: Brain, Mind, Experience, and School – Expanded Edition. National Academy Press, Washington, D.C.
Duffy, G., Roehler, L., Meloth, M., Vavrus, L., Book, C., Putnam, J., & Wesselman, R. (1986). The relationship between explicit verbal explanation during reading skill instruction and student awareness and achievement: A study of reading teacher effects. Reading Research Quarterly, 21(3), 237-252.
Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Routledge, Abingdon, OX.
Hattie, J. (2012). Visible Learning for Teachers: Maximizing Impact on Learning. Corwin, Thousand Oaks, CA.
Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Paris, S., Newman, R., & McVey, K. (1982). Learning the functional significance of mnemonic actions: A microgenetic study of strategy acquisition. Journal of Experimental Child Psychology, 34, 490-509.
Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26(1-2), 113-125.
Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351-371.
Lecture VII - Number Theory

Number theory encompasses anything relating to properties of integers. In it we typically encounter problems involving divisibility and factorization. In this lecture we will let p1, p2, . . . represent the prime numbers in ascending order, so that pn is the nth prime number. We let gcd(p, q) represent the greatest common divisor and lcm(p, q) the least common multiple of integers p and q.

1 Divisibility and Factoring

The Fundamental Theorem of Arithmetic says that any positive integer n can be written in exactly one way as a product of prime numbers, so that the factorizations of p and q are identical if and only if p = q. The number f divides n if and only if none of the powers of the primes in the factorization of f are greater than those in the factorization of n. Specifically, f divides n k times if and only if there is no prime p in the factorization of f that appears more than 1/k times as often as it appears in the factorization of n. On a related note, if some integer f divides integers p and q, then f divides mp + nq, where m and n are any integers.

Quick question: How many times does 3 divide 28!? We reason that the answer is the sum of how many times 3 divides each of 1, 2, . . . , 28. Of the numbers 1 through 28, exactly $\lfloor 28/3 \rfloor = 9$ are multiples of 3, $\lfloor 28/9 \rfloor = 3$ are multiples of $3^2$, and $\lfloor 28/27 \rfloor = 1$ is a multiple of $3^3$ (where $\lfloor x \rfloor$ is the floor function and represents the greatest integer less than or equal to x). To count the total number of 3's appearing in their factorizations, we compute 9 + 3 + 1 + 0 + 0 + 0 + · · · = 13. The generalized result:

Theorem: A prime number p divides n! exactly $\sum_{i=1}^{\infty} \lfloor n/p^i \rfloor$ times.

This fact enables us to determine how many 0's appear at the end of n!. Since there are more 2's than 5's in the factorization of n!, the number of 0's at the end of n! is the number of 5's in its factorization.

Quick question: How many factors does 120 have? We factor 120 and find that $120 = 2^3 \cdot 3 \cdot 5$. Therefore, any $m = 2^{m_1} 3^{m_2} 5^{m_3}$ that divides 120 must satisfy $0 \le m_1 \le 3$, $0 \le m_2 \le 1$, $0 \le m_3 \le 1$. There are 4 possible m1, 2 possible m2, and 2 possible m3, meaning that there are 4 · 2 · 2 = 16 positive integers that divide 120. Moreover:

Theorem: A positive integer with prime factorization $p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}$ has $(n_1 + 1)(n_2 + 1) \cdots (n_k + 1)$ factors.

The greatest common divisor of m and n is defined to be the largest integer that divides both m and n. Two numbers whose largest common divisor is 1 are called relatively prime, even though neither m nor n is necessarily prime. There are two notable ways to compute gcd(m, n):

• Factoring - Write $m = \prod_i p_i^{m_i}$ and $n = \prod_i p_i^{n_i}$ with $m_i, n_i \ge 0$. Then gcd(m, n) is the positive integer whose prime factorization contains pi exactly min(mi, ni) times for all positive integers i. Remark - This is useful if the factorizations of m and n are readily available, but if m and n are large numbers such as 4897, they will be difficult to factor.

• Euclidean Algorithm - Let n > m. If m divides n, then gcd(m, n) = m. Otherwise, $\gcd(m, n) = \gcd(m,\ n - m\lfloor n/m \rfloor)$. Remark - This is useful when factoring fails. For example, finding gcd(4897, 1357): 1357 does not divide 4897, so $\lfloor 4897/1357 \rfloor = 3$, 4897 − 3 · 1357 = 826 and gcd(4897, 1357) = gcd(1357, 826). 826 does not divide 1357, so gcd(1357, 826) = gcd(826, 531). 531 does not divide 826, so gcd(826, 531) = gcd(531, 295). Continuing this process, gcd(531, 295) = gcd(295, 236) = gcd(236, 59) = 59.

The least common multiple of m and n is defined to be the least number that is divisible by both m and n. Other than listing multiples of m and n, we can determine the lcm by the formula: $\mathrm{lcm}(m, n) = \dfrac{mn}{\gcd(m, n)}$. Note that because gcd(m, n) ≥ 1, we have lcm(m, n) ≤ mn.
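The divisibility facts above translate directly into short computations. The following sketch (not part of the original notes) implements the Euclidean algorithm, Legendre's formula for how many times a prime divides n!, and divisor counting, and reproduces the worked examples.

```python
def gcd(m, n):
    """Euclidean algorithm: repeatedly replace the pair by (n mod m, m)."""
    while m != 0:
        m, n = n % m, m
    return n

def times_prime_divides_factorial(p, n):
    """Legendre's formula: sum of floor(n / p^i) over i >= 1."""
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total

def count_divisors(n):
    """Count divisors via the exponents in the prime factorization of n."""
    count, d = 1, 2
    while d * d <= n:
        exponent = 0
        while n % d == 0:
            n //= d
            exponent += 1
        count *= exponent + 1
        d += 1
    if n > 1:              # a leftover prime factor larger than sqrt(n)
        count *= 2
    return count

print(gcd(4897, 1357))                       # 59, as in the worked example
print(times_prime_divides_factorial(3, 28))  # 13
print(count_divisors(120))                   # 16
```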
The Euler phi function, $\phi(n)$, denotes the number of positive integers less than or equal to n that are relatively prime to n. If we let $q_1, q_2, \ldots, q_k$ denote all of the distinct prime numbers that divide n, then:

$\phi(n) = n\left(1 - \frac{1}{q_1}\right)\left(1 - \frac{1}{q_2}\right)\cdots\left(1 - \frac{1}{q_k}\right)$

2 Modulo Trickery

The division algorithm states that when dividing n by p ≠ 0, there is exactly one integer q such that n = pq + r, where $0 \le r < |p|$. We define n modulo p (or simply n mod p) to be r in this equation. We use the notation $r \equiv n \pmod{p}$ when solving equations. There are a number of theorems that apply to modulos, some of which are outlined here:

• $k \cdot n + c \equiv c \pmod{n}$, for any integers k, n, and c. This follows from the definition.
• $(k \cdot n + c)^m \equiv c^m \pmod{n}$, for any integers k, n, and c, and any positive integer m. This is the result of binomial expansion of the left side.
• $a^{p-1} \equiv 1 \pmod{p}$, for relatively prime integers a and p, where p is prime. A result known as Fermat's Little Theorem.
• $a^{\phi(n)} \equiv 1 \pmod{n}$, for any relatively prime integers a and n, where $\phi$ is the Euler phi function. This is Euler's Generalization of Fermat's Little Theorem.
• $(p - 1)! \equiv -1 \pmod{p}$, for any prime p. This is Wilson's Theorem.

Whenever the word remainder appears, you should immediately think modulos. Likewise, determining the last few digits of a number should make you consider modulos. The above theorems are merely supplements to the algebra that can be performed on modular equations, which we outline here. The rules of modular arithmetic can be summarized as follows:
1. The only numbers that can be divided by m in modulo n are those that are multiples of gcd(m, n).
2. When multiplying by m in modulo n, the only numbers that can result are multiples of gcd(m, n).
3. Taking the square root of both sides is "normal" only in prime modulos. (For example, the solutions to $x^2 \equiv 1 \pmod 8$ are not only $x \equiv \pm 1 \pmod 8$ but more completely $x \equiv 1, 3, 5, 7 \pmod 8$.)
4. When solving for integer solutions in modulo n, any integer multiple of n can be added to or subtracted from any number. (This includes adding multiples of n to square roots of negative numbers.)
5. All other operations behave normally according to the standard rules of algebra over the integers.

Consider, for example, solving for all positive n ≤ 100 for which $n^2 + n + 31$ is divisible by 43. Of course we set up $n^2 + n + 31 \equiv 0 \pmod{43}$. We apply the quadratic formula and find that $n \equiv \frac{-1 \pm \sqrt{-123}}{2} \pmod{43}$. Because $-123 \equiv -123 + 43k \pmod{43}$, we replace −123 with −123 + 4 · 43 = 49 and continue: $n \equiv \frac{-1 \pm 7}{2} \pmod{43}$, so $n \equiv 3, -4 \pmod{43}$. Therefore, all such n are 3, 39, 46, 82, and 89.

All of the following problems can be solved with the techniques enumerated above.
1. How many factors does 800 have?
2. How many times does 7 divide 100!?
3. What is the smallest positive integer n for which is non-zero and reducible?
4. In Mathworld, the basic monetary unit is the Jool, and all other units of currency are equivalent to an integral number of Jools. If it is possible to make the Mathworld equivalents of $299 and $943, then what is the maximum possible value of a Jool in terms of dollars?
5. What are the last three digits of
6. Compute the remainder when 2000! is divided by 2003.
7. (ARML 1999) How many ways can one arrange the numbers 21, 31, 41, 51, 61, 71, and 81 such that any four consecutive numbers add up to a multiple of 3?
8. Determine all positive integers n ≤ 100 such that is divisible by 73.
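As a quick check on the modular-arithmetic material above, the sketch below (not from the original notes) brute-forces the worked congruence and verifies Fermat's and Wilson's theorems for p = 43. The polynomial n² + n + 31 is the one reconstructed above from the stated discriminant of −123 and the roots 3 and −4 (mod 43).

```python
# Worked example: positive n <= 100 with n^2 + n + 31 divisible by 43.
solutions = [n for n in range(1, 101) if (n * n + n + 31) % 43 == 0]
print(solutions)  # [3, 39, 46, 82, 89]

# Fermat's Little Theorem: a^(p-1) == 1 (mod p) for prime p and gcd(a, p) == 1.
p = 43
assert all(pow(a, p - 1, p) == 1 for a in range(1, p))

# Wilson's Theorem: (p-1)! == -1 (mod p) for prime p.
from math import factorial
assert factorial(p - 1) % p == p - 1
```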
Teaching young toddlers the ABC can be a challenging and fun activity! Getting this first lesson right is important, as this is going to be the first formal learning experience for the child, and it sets his mindset and attitude for lessons ahead. So it is important for teachers to be well-skilled in making this an interesting learning experience for the young student, who at this age is eager to lap up knowledge in the exciting new world that is opening up to him.

Some smart ways of teaching are those that incorporate the maximum use of sensory skills. So teachers must use visual representation, sound, feel and touch to get the same message across, so that:
- The message is reinforced in different ways, which gives a holistic learning experience to the child.
- Different children find different mediums easy to understand, so using a mix enables each child to learn better through the medium he is naturally best suited to.

Here are some ways the different sensory skills can be used, and the ways in which Treamis, too, approaches teaching its young students.
- Looking at alphabet posters.
- Looking at alphabet books.
- Looking for letters in the newspaper or magazines.
- Listening to the ABC song.
- Singing the ABC song.
- Listening to and repeating nursery rhymes that mention certain letters.

Touch and Movement
- Touching plastic alphabet shapes.
- Tracing letters cut from sandpaper with fingers.
- Making letters out of clay.
Sega Master System / Mark III / Game Gear

A stack is a general data structure which stores data in a last-in, first-out manner. Many processors, including the Z80, use a stack to facilitate function calls (and by extension interrupts) and temporary data storage for user code. Data may be added to the stack, which is known as a push, or removed from it (a pop).

The stack is defined by the stack pointer register sp, which contains the memory address of the current "top" of the stack. When a 16-bit value (two bytes) is pushed onto the stack, sp is decremented by two, and the data is written to the memory location it now points to. When a value is popped, the 16-bit value is retrieved from the currently pointed location and the stack pointer is then incremented by two. Thus, the stack is an area of memory extending from one byte before the initial sp value downwards through the address space.

For user code, only register pairs (af, bc, de, hl, ix, iy) can be pushed onto the stack. Other opcodes and operations implicitly using the stack (call, rst, hardware interrupts, ret, reti, retn) push/pop the program counter (pc).

The most simple use is to save the value in a register for later use; a contrived example is to push hl before an operation that overwrites it, then pop hl afterwards to restore the original value. It is common to do this when you need to access a particular register, but it contains an important unknown value which cannot be preserved in another register. An unoptimised function implementation might push all registers at its start, and pop them all at the end, to make itself reusable, but efficiency might require that this is not always done. For interrupt handlers, it is sometimes necessary.

If an operation can only be performed on certain registers, push/pop can be used to transfer values; however, it is generally slower than a normal register copy.

Push and pop opcodes are relatively slow and can be used to provide delays. For example, push ix followed by pop ix will use 29 cycles in 4 bytes and has no effect on the system.

A call or rst will push the pc onto the stack. Due to the way the Z80 operates, this will have been pre-incremented so it points to the next instruction after the call. A hardware interrupt will have a similar effect, except that it may occur mid-instruction (for instructions like ldir), but it is all handled by the Z80 so it works correctly. When returning from a function/interrupt, pc is popped from the stack and execution carries on as expected. Thus, to use functions and interrupts, it is necessary to define the stack correctly.

It is common practice on many systems to pass function parameters on the stack, by pushing them before the call (or variants of this, according to the calling convention in use). The function is responsible for extracting the parameters without losing the return address, so more advanced stack manipulation (such as dealing directly with the value of sp) is needed. In general, this is not recommended in Z80 code. It takes more space and is far slower than passing parameters in registers. If you need to pass in more parameters than can fit into the available registers, consider using a memory structure instead.

If your code contains deeply recursive function calls, or contains some mis-matched pushes, the stack will grow and grow until it exceeds the space available to it. This will result in it overwriting other memory, and eventually it may run out of RAM altogether. Thus, don't have deeply recursive function calls (if you can help it; all recursions can be made into iterations) or mis-matched pushes (at all).
The effect on the stack of a recursive function can be alleviated by optimising it to require less stack space per recursion.

The stack pointer sp has to be set before any interrupts can happen, because they will attempt to use it. Thus, in any program's startup code, it has to perform a ld sp,nnnn operation as early as possible - usually, immediately after the initial im 1 instructions.

Because the stack grows downwards in RAM, it is common practice to start it at the highest available address, because then it grows into the "unused space" above the "regular" memory area, which is conventionally allocated starting at the lowest address. However, on the SMS, the very highest area of RAM is affected by various common hardware register writes - writes to $fffc-$ffff for paging, $fff8-$fffb for the 3D glasses, and certain other registers from $fff0 for official Sega development hardware - which are mirrored in RAM at $dff0-$dfff. Thus, it is common practice to initialise the stack pointer to $dff0, for example with ld sp, $dff0.
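As a purely conceptual illustration of the push/pop mechanics described above (this is Python, not Z80 assembly, and not from the original page), the following models a descending stack in which a 16-bit push decrements sp by two and stores the value low byte first, mirroring the Z80's behaviour.

```python
# Conceptual model of the descending stack: sp moves down on push, up on pop,
# and 16-bit values are stored little-endian (low byte at the lower address).
class StackModel:
    def __init__(self, ram_size=0x100, sp=0x100):
        self.ram = bytearray(ram_size)
        self.sp = sp  # "top of RAM"; the first push writes just below this address

    def push(self, value):
        self.sp -= 2
        self.ram[self.sp] = value & 0xFF             # low byte
        self.ram[self.sp + 1] = (value >> 8) & 0xFF  # high byte

    def pop(self):
        value = self.ram[self.sp] | (self.ram[self.sp + 1] << 8)
        self.sp += 2
        return value

stack = StackModel()
stack.push(0x1234)       # e.g. saving a register pair such as hl
stack.push(0xC000)       # e.g. a return address pushed by a call
print(hex(stack.pop()))  # 0xc000 -- last in, first out
print(hex(stack.pop()))  # 0x1234
```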
WHAT IS JAPANESE KNOTWEED?

Japanese knotweed (Fallopia japonica) is an herbaceous perennial, native to Japan, where it is thought to have evolved as a first coloniser of post-volcanic soils. In its native environment it is heavily eaten by insects, losing approximately 40% of its leaf cover each year. To combat this predation, it has exceptionally fast vertical growth, capable of growing up to 10cm each day. It is also excellent at fixing nitrogen in the soil (better than any other plant in the UK). In Japan, this accumulation of nitrogen allows other species to establish and out-compete the plant, causing it to go dormant after about 50 years.

In the UK it is a different story altogether. Introduced in 1850 and valued as an ornamental plant in large gardens, its highly vigorous nature soon became apparent. With no predators to eat its leaves, by 1886 it had become established on brownfield sites in Wales. By 1905 gardeners were advised against keeping the plant unless it was kept in check. A hundred years later it has become established throughout the UK, colonising river banks, railway lines, motorway verges and vacant plots.

Every plant in the UK is female, each one a descendant of that first female plant introduced over 150 years ago. As such, it does not produce viable seeds, but instead spreads by an extending network of underground rhizomes. These can extend 7m from the parent plant before sending up new shoots and stems that create new stands. If the rhizome material is disturbed or dug up, it takes only a gram of material for it to regenerate into a new plant.

Wherever the plant becomes established it will impact upon the local environment. Its thickly matted stems, reaching 2-3m, reduce biodiversity by outcompeting native plants; its extending rhizomes exploit weaknesses in built structures, causing lenders to withhold mortgages from properties infested by the plant; and its presence on development land causes costly delays to homebuilders.

It is not all bad news. In the Far East it is used in medicine and eaten, apparently tasting similar to rhubarb. Its stems can be used as a vegetable dye, its extensive roots can stabilise embankments, and its flowers are a valuable late-season source of nectar for pollinators. However, when all is said and done, those Victorians have much to answer for!
Uterine cancer is the most common cancer of a woman's reproductive system. Uterine cancer begins when normal cells in the uterus change and grow uncontrollably, forming a mass called a tumor. A tumor can be benign (noncancerous) or malignant (cancerous, meaning it can spread to other parts of the body). Noncancerous conditions of the uterus include fibroids (benign tumors in the muscle of the uterus), endometriosis (endometrial tissue growing outside the uterus), and endometrial hyperplasia (an increased number of cells in the uterine lining). This type of cancer is sometimes called endometrial cancer.

1. Pelvic Exam: The doctor checks your uterus, vagina, and nearby tissues for any lumps or changes in shape or size.
2. Physical Exam: A thorough medical history and physical examination are done.
3. Ultrasound: An ultrasound device uses sound waves that can't be heard by humans. The sound waves make a pattern of echoes as they bounce off organs inside the pelvis. The echoes create a picture of your uterus and nearby tissues, which can show a uterine tumor. For a better view of the uterus, the device may be inserted into the vagina (transvaginal ultrasound).
4. Biopsy: The removal of tissue to look for cancer cells is a biopsy. A thin tube is inserted through the vagina into your uterus. Your doctor uses gentle scraping and suction to remove samples of tissue. A pathologist examines the tissue under a microscope to check for cancer cells. In most cases, a biopsy is the only sure way to tell whether cancer is present.

There are two major types of uterine cancer:
• Adenocarcinoma: This type makes up more than 95% of uterine cancers. It forms in the tissue lining the uterus (the small, hollow, pear-shaped organ in a woman's pelvis in which a fetus develops). Most endometrial cancers are adenocarcinomas (cancers that begin in cells that make and release mucus and other fluids).
• Sarcoma: This form of uterine cancer develops in the myometrium (the uterine muscle) or in the supporting tissues of the uterine glands.

Chemotherapy: Chemotherapy is the use of drugs to kill cancer cells, usually by stopping the cancer cells' ability to grow and divide. Systemic chemotherapy is delivered through the bloodstream to reach cancer cells throughout the body. Chemotherapy is given by a medical oncologist, a doctor who specializes in treating cancer with medication. A chemotherapy regimen (schedule) usually consists of a specific number of cycles given over a set period of time.

Radiation therapy: Radiation therapy is the use of high-energy x-rays or other particles to kill cancer cells. A doctor who specializes in giving radiation therapy to treat cancer is called a radiation oncologist. A radiation therapy regimen (schedule) usually consists of a specific number of treatments given over a set period of time. The most common type of radiation treatment is called external-beam radiation therapy, which is radiation given from a machine outside the body.

Hormone therapy: Hormone therapy is used to slow the growth of uterine cancer cells. Hormone therapy for uterine cancer involves the sex hormone progesterone, given in pill form, which reduces the amount of the hormone estrogen in a woman's body by stopping tissues and organs other than the ovaries from producing it. Hormone therapy may be used for women who cannot have surgery or radiation therapy, or in combination with other types of treatment.
After three years of pain, difficulty walking, and an inability to work, I have finally entered a new, hopeful and exciting chapter in my life. Because of the absolute impossibility of finding a facility that could help me, I searched on the internet and found my surgeon, Dr Sarup, an exceptional and experienced surgeon, and a solution that has changed my life. The surgery was painless for me. The rehabilitation was strict and very effective, and the nursing and support staff were all trained, kind and infinitely caring. -Ms Njoku Joyce, Nigeria

Please scan and email your medical reports for a Free, No Obligation Opinion from India's leading Surgeons/Specialist Doctors at India's Best Hospitals within 48 hours of receipt.
What is international law? International law deals with relationships that go beyond the borders of any one country. It is used in situations such as: - dealing with crimes in international waters - regulating international travel - regulating international trade - deciding on boundaries between countries - regulating the use of armed force - regulating human rights. International law applies to relationships between states, between states and individuals, and between individuals. New Zealand’s involvement in international law International laws are often made by international organisations. New Zealand’s involvement with international law increased as the country became more active on the world stage. In 1919 New Zealand was a founding member of the League of Nations, which was set up after the First World War. Also in 1919 New Zealand became part of the new International Labour Organization, which sets rules about employment and labour matters. New Zealand also became part of the United Nations (UN), which was founded after the Second World War. The UN has a range of international agencies that deal with and set rules around issues such as health, food, education, science, culture, aviation, trade and refugees. In the second half of the 20th century New Zealand signed treaties and joined organisations to control nuclear weapons. Along with Australia, New Zealand took France to the International Court of Justice to challenge French testing of nuclear weapons in the Pacific. International agreements are used to govern trade between countries. New Zealand is a member of the World Trade Organization and has signed free-trade agreements with several countries. International law on human rights and conflict An important role of international law is to protect human rights during both war and peace. The Geneva Conventions are rules that aim to protect civilians and other non-combatants (including wounded soldiers and prisoners of war) during armed conflicts. After wars, people who did not follow the conventions may be tried for war crimes in special courts or at the International Criminal Court. International law also covers human rights during peacetime, and many international rules and treaties have been incorporated into New Zealand law. There are a number of courts that deal with international disputes: - the International Court of Justice (ICJ) - the International Criminal Court (ICC) - other international tribunals that are set up for specific purposes, such as to deal with war crimes in Rwanda or former Yugoslavia. International law on travel, trade and resources Other areas that international law covers include: - international travel and immigration, including refugees - international trade and disputes over trade - the law of the sea, which includes rules around what happens in international waters, fishing, maritime accidents and the environment.
A chosen-plaintext attack can be particularly effective if there are relatively few possible encrypted messages. For example, if P were a dollar amount less than $1,000,000, this attack would work; the cryptanalyst tries all million possible dollar amounts. (Probabilistic encryption solves the problem; see Section 23.15.) Even if P is not as well-defined, this attack can be very effective. Simply knowing that a ciphertext does not correspond to a particular plaintext can be useful information. Symmetric cryptosystems are not vulnerable to this attack because a cryptanalyst cannot perform trial encryptions with an unknown key.

In most practical implementations public-key cryptography is used to secure and distribute session keys; those session keys are used with symmetric algorithms to secure message traffic. This is sometimes called a hybrid cryptosystem. Using public-key cryptography for key distribution solves a very important key-management problem. With symmetric cryptography, the data encryption key sits around until it is used. If Eve ever gets her hands on it, she can decrypt messages encrypted with it. With the previous protocol, the session key is created when it is needed to encrypt communications and destroyed when it is no longer needed. This drastically reduces the risk of compromising the session key. Of course, the private key is vulnerable to compromise, but it is at less risk because it is only used once per communication to encrypt a session key. This is further discussed in Section 3.1.

Ralph Merkle invented the first construction of public-key cryptography. In 1974 he registered for a course in computer security at the University of California, Berkeley, taught by Lance Hoffman. His term paper topic, submitted early in the term, addressed the problem of "Secure Communication over Insecure Channels." Hoffman could not understand Merkle's proposal and eventually Merkle dropped the course. He continued to work on the problem, despite continuing failure to make his results understood.

Merkle's technique was based on puzzles that were easier to solve for the sender and receiver than for an eavesdropper. Here's how Alice sends an encrypted message to Bob without first having to exchange a key with him. Eve can break this system, but she has to do far more work than either Alice or Bob. To recover the message in step (3), she has to perform a brute-force attack against each of Bob's 2^20 messages in step (1); this attack has a complexity of 2^40. The x values won't help Eve either; they were assigned randomly in step (1). In general, Eve has to expend approximately the square of the effort that Alice expends. This n to n^2 advantage is small by cryptographic standards, but in some circumstances it may be enough. If Alice and Bob can try ten thousand keys per second, it will take them a minute each to perform their steps and another minute to communicate the puzzles from Bob to Alice on a 1.544 Mbps link. If Eve had comparable computing facilities, it would take her about a year to break the system. Other algorithms are even harder to break.
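The enumerated protocol steps referred to above (steps (1)–(3)) are not reproduced in this excerpt. As a rough illustration of the general puzzle idea only, and not of the exact construction in the text, here is a toy sketch with much smaller parameters and a hash-derived stand-in cipher.

```python
# Toy Merkle-puzzle sketch: Bob publishes many weakly encrypted puzzles, Alice
# brute-forces exactly one of them, and the puzzle id she returns tells Bob
# which session key they now share. Parameters are deliberately tiny.
import os
import hashlib

N_PUZZLES = 2 ** 10    # the text uses 2^20; kept small here
KEY_SPACE = 2 ** 12    # brute-force effort needed to open one puzzle
MARKER = b"PUZZLE"

def toy_encrypt(key_int, plaintext):
    # XOR with a keystream derived from the (weak) puzzle key.
    stream = hashlib.sha256(key_int.to_bytes(4, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Bob builds the puzzles: each hides a puzzle id and a session key.
lookup = {}    # Bob's private table: puzzle id -> session key
puzzles = []
for puzzle_id in range(N_PUZZLES):
    session_key = os.urandom(2)
    lookup[puzzle_id] = session_key
    plaintext = MARKER + puzzle_id.to_bytes(2, "big") + session_key
    weak_key = int.from_bytes(os.urandom(2), "big") % KEY_SPACE
    puzzles.append(toy_encrypt(weak_key, plaintext))

# Alice picks one puzzle at random and brute-forces only that one.
chosen = puzzles[int.from_bytes(os.urandom(2), "big") % N_PUZZLES]
for guess in range(KEY_SPACE):
    candidate = toy_decrypt(guess, chosen)
    if candidate.startswith(MARKER):
        chosen_id = int.from_bytes(candidate[6:8], "big")
        alice_key = candidate[8:10]
        break

# Alice sends chosen_id in the clear; Bob looks up the matching session key.
assert lookup[chosen_id] == alice_key
print("shared session key established:", alice_key.hex())
```

An eavesdropper who sees only the puzzles and the returned id must, on average, break about half of the puzzles to find the matching one, which is the quadratic work gap described in the text.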
Handwritten signatures have long been used as proof of authorship of, or at least agreement with, the contents of a document. What is it about a signature that is so compelling? In reality, none of these statements about signatures is completely true. Signatures can be forged, signatures can be lifted from one piece of paper and moved to another, and documents can be altered after signing.

However, we are willing to live with these problems because of the difficulty in cheating and the risk of detection. We would like to do this sort of thing on computers, but there are problems. First, computer files are trivial to copy. Even if a person's signature were difficult to forge (a graphical image of a written signature, for example), it would be easy to cut and paste a valid signature from one document to another document. The mere presence of such a signature means nothing. Second, computer files are easy to modify after they are signed, without leaving any evidence of modification.
Fruit and nut trees will grow well if irrigated regularly. Drought stress will reduce fruit size and stunt growth, especially in young trees. If the water status of the plant is severely deficient, the leaves will wilt, curl, and sunburn. The fruit can be dramatically affected, too, through reduction in size, water loss and shrivel, and sunburn. Good irrigation practices in California include the application of water at sufficient intervals so as never to induce significant plant stress. This will ensure maximum plant growth, fruit size, and yield. In some circumstances, however, a slight water stress induced at specific growth stages can improve fruit flavor, enhance sugar or oil content, and limit vegetative growth. Water quality may be an issue in parts of California with salt or mineral excess problems. Irrigation water should be tested for its mineral content to avoid toxicity to plants if a problem is expected.

Probably more trees are lost to over-irrigation than to any other cause. Over-irrigation combined with poor drainage especially leads to tree death. For the period after leaf drop in the fall and until shoot and leaf growth get underway in the spring, trees normally will not need irrigation.

Irrigation recommendations are often stated in the following ways: "irrigate when needed," "irrigate thoroughly, but not frequently." The meaning of these statements is unclear for many, especially the novice and those with the tendency to irrigate more when plants appear unhealthy. The key to irrigating any plant is "how much" and "how often." To water optimally you must know:
- Daily water use
- Soil type
- Amount of water applied
- The area a plant covers
- Rooting depth
- Efficiency adjustments

Putting It Together

The amount of water a fruit tree uses depends primarily on how big it is and how hot the day is. Several other factors, such as relative humidity and wind, influence water use, but they are less important. Water use by fruit trees is amazingly similar between species. The goal is maximum growth in the early years to fill the allotted space, and maximum production of large fruit. This requires a lot of water in a state where the days are hot and dry. All fruit trees grown for high production have green succulent growth. If the amount of leaves covering an area is the same, then the species or variety of tree does not make much difference. The greatest difference in water use is due to tree size. In the Guide, look at the difference between a tree that occupies 36 ft² (6 ft. × 6 ft.) and one roughly three times that size at 100 ft² (10 ft. × 10 ft.): the water use is about three times as great (5.6 gallons per day compared to 15.6 gallons per day).

Water use for a medium sized semi-dwarf fruit tree is about 16 gallons of water per day on a hot summer day on the coast of California without any fog influence (0.25"/day). That same tree in the Sacramento or San Joaquin Valley would use about 19 gallons per day (0.3"/day). Therefore, a tree with two one-gallon-per-hour drip emitters, one on each side, would have to be irrigated about 8–9 hours every day. The theory and practice of drip irrigation is to provide just what the tree needs every day. Not enough water is applied to leave any in storage in the soil for the next day, so it needs to be watered again the next day. Drip irrigation is a good delivery system because it only wets a small area, so that weed growth is limited, and the system is easily adapted to many landscape situations.
Fortunately only a small fraction (10–20%) of the root area needs to be watered in order to achieve good results. Soil type or depth has very little influence on drip irrigated trees, since the water use rate is determined by weather and tree size. Soil water holding capacity is unimportant due to the daily irrigations. Based on tree response in irrigation studies, it has been determined that for young trees it is beneficial to increase the calculated irrigation by a factor of 2 (double it) until the trees reach 70% full cover. It seems that "over irrigated" young trees grow even better than if they receive only their daily water use allotment based on evapotranspiration. See the examples below.

For drip irrigation, start irrigating in early spring before much soil moisture has been used, because this stored water may be needed later in case the system is accidentally shut down. Soil type or depth is almost inconsequential, and only 10–20% of the rooting area need be wetted for good tree performance.

A 2 year-old semi-dwarf fruit tree occupies a space of 10 ft². It has two 1 gal/hr emitters, and on a warm spring day the water use rate (ET) is about 0.20 in/day. How much: 1.25 gal/day (from the Guide) × 2 (the adjustment for young trees with 10–15% canopy) = 2.5 gal/day. How long: 2.5 gal/day split between two 1 gal/hr emitters = 1.25 hours (75 minutes) per tree every day, or 2.5 hours every other day.

A mature standard sized (large) fruit tree occupies an area of 300 ft² with four 1 gal/hr emitters per tree. On a hot summer day it uses 0.25 in/day (ET). How much: 46.8 gallons per day (from the Guide). How long: 46.8 gal/day divided among 4 emitters = 11.7 hours every day, or 23.4 hours every other day.

Mini-sprinklers are small sprinklers with the water delivered through drip irrigation tubing. Each individual mini-sprinkler usually delivers about 10 gallons per hour, roughly 10 times the average drip emitter. The mini-sprinkler system is typically run two to three times per week, with some water held in storage in the soil. Run times can be calculated (from the Guide) and multiplied by the number of days between irrigations. Care must be taken to investigate the depth the irrigation water is reaching with mini-sprinklers, since some of them throw the water so far that they would have to run continuously for days in order to water down 24 inches.

Most fruit tree roots are located between 6 inches and 24 inches from the top of the soil. This is also the area with all the nutrients (topsoil) and the oxygen. Keep this area moist at all times and really focus on maintaining adequate moisture there. The old adage of forcing the tree roots down deep is just that: forcing the tree and causing stress. Home orchard trees on deep soils can get by with less intensive irrigation management, because the tree roots are deeper and there is a buffering capacity for drought stress. Shallow soils need to be managed much more intensively, with frequent lighter irrigations.
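The drip arithmetic in the examples above can be written as a short calculation. The sketch below (not from the original guide) uses the 27,154 gallons-per-acre-inch and 43,560 ft²-per-acre conversions quoted later in the text.

```python
# Drip irrigation arithmetic: canopy area (ft^2) x daily ET (in/day) -> gal/day,
# then run time for a given number of 1 gal/hr emitters.
GAL_PER_ACRE_INCH = 27154.0
SQFT_PER_ACRE = 43560.0

def daily_use_gallons(canopy_sqft, et_inches_per_day, efficiency_factor=1.0):
    """Tree water use in gallons/day; efficiency_factor=2 doubles it for young trees."""
    acre_inches = canopy_sqft / SQFT_PER_ACRE * et_inches_per_day
    return acre_inches * GAL_PER_ACRE_INCH * efficiency_factor

def run_time_hours(gallons_per_day, emitters, emitter_gal_per_hr=1.0):
    """Hours per day the drip system must run to supply the daily use."""
    return gallons_per_day / (emitters * emitter_gal_per_hr)

# Young semi-dwarf tree: 10 ft^2 canopy, ET 0.20 in/day, doubled, two emitters.
young = daily_use_gallons(10, 0.20, efficiency_factor=2)
print(round(young, 1), "gal/day;", round(run_time_hours(young, 2) * 60), "min/day")
# -> about 2.5 gal/day and about 75 minutes/day, as in the worked example

# Mature standard tree: 300 ft^2 canopy, ET 0.25 in/day, four emitters.
mature = daily_use_gallons(300, 0.25)
print(round(mature, 1), "gal/day;", round(run_time_hours(mature, 4), 1), "hours/day")
# -> about 46.8 gal/day and about 11.7 hours/day
```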
Sprinkler irrigated trees use the same amount of water as drip irrigated trees (which is based on how hot it is) plus an additional 20% for loss to evaporation and non-uniformity of application. The real difference is that with sprinkler irrigated trees, more water is applied at once, it is stored in the soil for 2-3 weeks before the next irrigation, and the entire area is watered. When the whole area under the trees is irrigated, water cannot be saved based on tree size. Weed growth also covers a much greater area.

Another important difference for sprinkler irrigated trees is that soil rooting depth (volume of soil) and soil water holding capacity (soil type, sand or clay) become important, since water is stored in the soil. If trees are over irrigated, water is lost beyond the root zone. Under irrigation is usually caused by not running the sprinklers long enough to wet the entire depth of the root zone, or by miscalculating the amount of water stored in the particular soil type and going too long between irrigations. For sprinkler irrigation, water is not applied daily but on a periodic basis to fill the soil, which acts as a storage reservoir for water available to the plant. Soil type and rooting characteristics are very important. Recent research shows beneficial results from irrigating at or before 50-75% depletion of the (soil-stored) available water, then applying what has been used plus 20% for efficiency loss.

Consider a mature standard size (large) fruit tree occupying an area of 300 ft², with a rooting depth of 3 ft., loam soil, and a daily water use (ET) of 0.25 in/day in July. How much: 3 ft. rooting depth × 2" of available water per foot = 6" of available water; 6" × 75% depletion = 4.5" of water to apply, plus 20% = 5.4". How long: use the catch can test to measure how long it takes to apply 1" of water, then multiply by 5.4 to get the duration of the set. Most sprinklers apply about 0.3" of water per hour, so it would take 18 hours to apply 5.4" of water. How often: 5.4" of water divided by 0.25 in/day = about 21 days. If you want to figure out how many gallons of water the tree would use, you need two other figures: 27,154 gallons in an acre-inch and 43,560 square feet in an acre; 300 ft² × 5.4" of applied water works out to about 1,010 gallons of water per tree.

For a more complete, detailed discussion of this subject, see Micro-Irrigation of Trees and Vines: A Handbook for Water Managers, by Schwankl, Hanson and Prichard, published for the University of California Irrigation Program by the University of California, Davis, 1995.
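The sprinkler worked example above can likewise be expressed as a short calculation; the sketch below (not from the original guide) reproduces the 5.4-inch, 18-hour, and roughly 21-day figures.

```python
# Sprinkler scheduling arithmetic from the worked example: how much to apply,
# how long to run the sprinklers, and how many days between irrigations.
def sprinkler_schedule(root_depth_ft, holding_in_per_ft, depletion,
                       et_in_per_day, application_rate_in_per_hr,
                       efficiency_loss=0.20):
    available = root_depth_ft * holding_in_per_ft             # inches stored in the root zone
    depth_to_apply = available * depletion * (1 + efficiency_loss)
    run_hours = depth_to_apply / application_rate_in_per_hr
    interval_days = depth_to_apply / et_in_per_day             # interval as computed in the text
    return depth_to_apply, run_hours, interval_days

# 3 ft rooting depth, loam holding 2 in/ft, irrigate at 75% depletion,
# ET of 0.25 in/day, sprinklers applying about 0.3 in/hr.
depth, hours, days = sprinkler_schedule(3, 2.0, 0.75, 0.25, 0.3)
print(f"apply {depth:.1f} inches, run {hours:.0f} hours, every {days:.1f} days")
# -> apply 5.4 inches, run 18 hours, every 21.6 days (about 21, as in the text)
```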
Grade: Kindergarten, First

Materials: the book What's Alive? by Kathleen Weidner Zoehfeld, small Ziploc bags with 6 pictures (of living/nonliving things) in each (1 per pair of students), "Living/Nonliving Things t-chart", "Living/Nonliving Things Pictures" worksheet

Objective(s): Students will be able to 1) identify a set of living things from a set of non-living things and 2) list 3-5 things a living thing needs to stay alive.

Anticipatory Set: With students gathered together, state the lesson objectives. Show students two objects, one real and one artificial (e.g. a potted plant and a stuffed animal), and ask what they have in common. Write answers on the board or chart paper for future use. Ask students if one of the objects is alive. How do they know? Tell students that there are ways to determine if something is a living thing or a nonliving thing. Write down new vocabulary as it is discussed to solidify the concepts taught. Ask students to think about how they know if something is alive.

Direct Instruction/Guided Practice: Introduce students to the book What's Alive? by Kathleen Weidner Zoehfeld. As you read, prompt students with questions to guide their thinking about how living things need certain things that nonliving things don't need. After reading, return to the two objects. Ask students if they think one of them is alive. How do they know? Students should recognize which one is living. As a group, discuss what the living plant needs to stay alive. Write answers down to solidify concepts. Next, pair students and give each pair a small bag with 6 pictures (from books, magazines, or Google images) in it. Tell students that they are to look at the pictures and determine whether each is a living or nonliving thing. After a few minutes, go through the pictures as a group. Have a t-chart available that is labeled living and nonliving. Call on students to place pictures in each category. Discuss why each belongs in the category it is in (because living things need air, water, food, sunlight and shelter, etc.).

Independent Practice: Students will now have the opportunity to sort pictures by themselves. Provide the "Living/Nonliving Things t-chart" to each student along with the "Living/Nonliving Things Pictures" worksheet. Instruct students on how to complete the activity. Monitor students as they work. Interview each student by asking them to identify 3-5 things a living thing needs to stay alive.

Closure: Bring students together to discuss the "Big Ideas" from the lesson. Review key vocabulary. Ask students if they are alive. Have them turn to a friend and tell how they know.

Assessment: Students will be assessed according to their ability to participate in the group discussion and lesson activities. They will also be assessed according to their ability to: 1) identify a set of living things from a set of non-living things and 2) list 3-5 things a living thing needs to stay alive.

Accommodations: Students that have difficulty with attention or impulsivity should either be kept in close proximity to the teacher during the group discussion/activity or be allowed to stand where they will not disrupt others, but can move rather than sit. Some students may benefit from having the number of pictures in the independent activity decreased. Allow students to refer to the pictures and vocabulary on the board for assistance.

Extensions: 1) Provide students with a small sheet of paper and have them list things in the classroom that are living and nonliving.
Tell them to include anything they may see as they look out of the classroom window. 2) As a creative writing assignment and an added challenge, provide students an opportunity to form acrostic poems using the words LIVING and/or NONLIVING. 3) Allow students the opportunity to cut out pictures from magazines of living things and the things they need to stay alive.
At the base of the brain, there are right and left mammillary bodies. These also go by their Latin name, corpus mamillare. Each “body” has a round and smooth shape. They are part of the limbic system. Each mammillary body joins the pretectum, thalamus, and other parts to make up the greater diencephalon part of the brain. These bodies are connected directly to the brain, and they relay impulses to the thalamus. The overall route, from the amygdalae to the thalamus, is often referred to as the Papez circuit. Along with the dorsomedial and anterior nuclei of the thalamus, each mammillary body plays an active role in how recognitional memory (like seeing someone’s face and remembering you’ve met before) is processed. Some believe the bodies add the sensory detail of smell to stored memories. Memory loss could result from damage to either mammillary body. Typically, damage results from prolonged thiamine (vitamin B1) shortages in the body. Some symptoms and complications of Wernicke-Korsakoff syndrome may also play a role. Wernicke-Korsakoff syndrome is a spectrum of brain disorders caused by thiamine deficiency. This is usually the result of alcoholism. Wernicke encephalopathy is an earlier stage of Korsakoff’s syndrome. Symptoms include loss of muscle coordination, vision problems, memory loss, and inability to form new memories.
Beckwith-Wiedemann syndrome is a condition that affects many parts of the body. It is classified as an overgrowth syndrome, which means that affected infants are considerably larger than normal (macrosomia) and tend to be taller than their peers during childhood. Growth begins to slow by about age 8, and adults with this condition are not unusually tall. In some children with Beckwith-Wiedemann syndrome, specific parts of the body on one side or the other may grow abnormally large, leading to an asymmetric or uneven appearance. This unusual growth pattern, which is known as hemihyperplasia, usually becomes less apparent over time. The signs and symptoms of Beckwith-Wiedemann syndrome vary among affected individuals. Some children with this condition are born with an opening in the wall of the abdomen (an omphalocele) that allows the abdominal organs to protrude through the belly-button. Other abdominal wall defects, such as a soft out-pouching around the belly-button (an umbilical hernia), are also common. Some infants with Beckwith-Wiedemann syndrome have an abnormally large tongue (macroglossia), which may interfere with breathing, swallowing, and speaking. Other major features of this condition include abnormally large abdominal organs (visceromegaly), creases or pits in the skin near the ears, low blood sugar (hypoglycemia) in infancy, and kidney abnormalities. Children with Beckwith-Wiedemann syndrome are at an increased risk of developing several types of cancerous and noncancerous tumors, particularly a form of kidney cancer called Wilms tumor and a form of liver cancer called hepatoblastoma. Tumors develop in about 10 percent of people with this condition and almost always appear in childhood. Most children and adults with Beckwith-Wiedemann syndrome do not have serious medical problems associated with the condition. Their life expectancy is usually normal. Beckwith-Wiedemann syndrome affects an estimated 1 in 13,700 newborns worldwide. The condition may actually be more common than this estimate because some people with mild symptoms are never diagnosed. The genetic causes of Beckwith-Wiedemann syndrome are complex. The condition usually results from the abnormal regulation of genes in a particular region of chromosome 11. People normally inherit one copy of this chromosome from each parent. For most genes on chromosome 11, both copies of the gene are expressed, or "turned on," in cells. For some genes, however, only the copy inherited from a person's father (the paternally inherited copy) is expressed. For other genes, only the copy inherited from a person's mother (the maternally inherited copy) is expressed. These parent-specific differences in gene expression are caused by a phenomenon called genomic imprinting. Abnormalities involving genes on chromosome 11 that undergo genomic imprinting are responsible for most cases of Beckwith-Wiedemann syndrome. At least half of all cases result from changes in a process called methylation. Methylation is a chemical reaction that attaches small molecules called methyl groups to certain segments of DNA. In genes that undergo genomic imprinting, methylation is one way that a gene's parent of origin is marked during the formation of egg and sperm cells. Beckwith-Wiedemann syndrome is often associated with changes in regions of DNA on chromosome 11 called imprinting centers (ICs). ICs control the methylation of several genes that are involved in normal growth, including the CDKN1C, H19, IGF2, and KCNQ1OT1 genes. 
Abnormal methylation disrupts the regulation of these genes, which leads to overgrowth and the other characteristic features of Beckwith-Wiedemann syndrome. About 20 percent of cases of Beckwith-Wiedemann syndrome are caused by a genetic change known as paternal uniparental disomy (UPD). Paternal UPD causes people to have two active copies of paternally inherited genes rather than one active copy from the father and one inactive copy from the mother. People with paternal UPD are also missing genes that are active only on the maternally inherited copy of the chromosome. In Beckwith-Wiedemann syndrome, paternal UPD usually occurs early in embryonic development and affects only some of the body's cells. This phenomenon is called mosaicism. Mosaic paternal UPD leads to an imbalance in active paternal and maternal genes on chromosome 11, which underlies the signs and symptoms of the disorder. Less commonly, mutations in the CDKN1C gene cause Beckwith-Wiedemann syndrome. This gene provides instructions for making a protein that helps control growth before birth. Mutations in the CDKN1C gene prevent this protein from restraining growth, which leads to the abnormalities characteristic of Beckwith-Wiedemann syndrome. About 1 percent of all people with Beckwith-Wiedemann syndrome have a chromosomal abnormality such as a rearrangement (translocation), abnormal copying (duplication), or loss (deletion) of genetic material from chromosome 11. Like the other genetic changes responsible for Beckwith-Wiedemann syndrome, these abnormalities disrupt the normal regulation of certain genes on this chromosome. In about 85 percent of cases of Beckwith-Wiedemann syndrome, only one person in a family has been diagnosed with the condition. However, parents of one child with Beckwith-Wiedemann syndrome may be at risk of having other children with the disorder. This risk depends on the genetic cause of the condition. Another 10 to 15 percent of people with Beckwith-Wiedemann syndrome are part of families with more than one affected family member. In most of these families, the condition appears to have an autosomal dominant pattern of inheritance. Autosomal dominant inheritance means that one copy of an altered gene in each cell is typically sufficient to cause the disorder. In most of these cases, individuals with Beckwith-Wiedemann syndrome inherit the genetic change from their mothers. Occasionally, a person who inherits the altered gene will not have any of the characteristic signs and symptoms of the condition. Rarely, Beckwith-Wiedemann syndrome results from changes in the structure of chromosome 11. Some of these chromosomal abnormalities are inherited from a parent, while others occur as random events during the formation of reproductive cells (eggs and sperm) or in the earliest stages of development before birth. Beckwith-Wiedemann syndrome is also known as Wiedemann-Beckwith syndrome (WBS).
Division of whole numbers is related to the equal distribution of objects into any number of parts. For smaller whole numbers, we can do it with the repeated subtraction method. In the case of larger numbers, we need to use the method of long division, for which we need to remember the multiplication tables of the whole numbers. There are certain rules which, if remembered, will make the process of dividing whole numbers easier. - Division of a number by 1: when any whole number is divided by 1, the result is always the dividend itself. - Division of 0 by a number: whenever 0 is divided by any nonzero whole number, the result is always 0 (division by 0 itself is not defined). - Remember that whenever any even number is divided by 2 the remainder is always 0, and when any odd number is divided by 2 the remainder is always 1. - To verify a division, we must remember the formula: Dividend = Quotient × Divisor + Remainder. If the LHS = RHS, it means that the calculations done for the division are correct.
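As a quick illustration of these rules, here is a short Python sketch (not part of the original lesson) that uses divmod to get the quotient and remainder in one step and checks the verification formula.

```python
# A short sketch of the rules above. Python's divmod(a, b) returns the
# quotient and remainder of a divided by b in a single call.

def verify_division(dividend: int, divisor: int) -> bool:
    """Check that Quotient x Divisor + Remainder rebuilds the Dividend."""
    quotient, remainder = divmod(dividend, divisor)
    return quotient * divisor + remainder == dividend

print(divmod(9, 1))                  # (9, 0): dividing by 1 gives back the dividend
print(divmod(0, 7))                  # (0, 0): 0 divided by any nonzero number is 0
print(divmod(14, 2), divmod(15, 2))  # (7, 0) and (7, 1): even/odd remainders with 2
print(verify_division(23, 5))        # True, since 4 x 5 + 3 == 23
```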
The Earth moved for many over this Valentine’s Day, but not due to romance – a magnitude 6.0 earthquake struck around 250 km off the west coast of Oregon on February 14, 2012. The Valentine’s Day quake is notable not only for its size (it is one of the largest ever to have occurred in the state or off its coast) but also in terms of the complex tectonic setting in which it occurred. The Oregon Earthquake of February 2012 The Oregon earthquake occurred on an ocean ridge, at a divergent boundary – where new crust is created by upward movement of hot and buoyant rock from the earth’s interior – between two of the large slabs of crust (tectonic plates) which make up the surface of the earth. Preliminary information from the United States Geological Survey (USGS) shows that the quake occurred at a depth of 10 km on the fracture zone associated with the Juan de Fuca Ridge, which marks the western boundary of the Juan de Fuca microplate. The Tectonic Setting: Divergent Boundaries Although plate tectonics is generally described in simplistic terms (large crustal plates and single faults), the local situation is inevitably more complex. At its simplest we might describe a setting in which a divergent boundary in the west drives the Juan de Fuca plate eastwards against the North American plate, forcing it beneath the North American continent. In the case of the Juan de Fuca microplate, at least two other small plates, the Explorer Plate to the north and the Gorda plate to the south, are involved. The relative direction, type of movement (with convergent, divergent and conservative boundaries present within a relatively small area) and speed of these plates generate tensions and, ultimately, earthquakes.
One third of lupus patients have eye symptoms, but these are not as widely publicized as the effects on the skin, nervous system, and kidneys. Medications used to treat lupus can have side effects involving the eye, but the disease itself can cause much more damage to the visual system. Monitoring eye health in lupus is important for two reasons: (1) lupus ocular complications are potentially blinding, and (2) eye symptoms are an indication of less obvious disease activity, such as kidney damage, that tends to occur in cycles of exacerbation and remission. Thus eye health can be useful in adjusting the dosage of medication to optimize the balance between its risks and benefits. Lupus can damage almost any part of the eye. The mechanisms include immune complex deposition, inflammation of the blood vessels, blood clots, and antibody dependent cytotoxicity. External eye disease. Discoid lupus-type rash may develop over the eyelids. The lacrimal (tear) gland can be damaged by antibody dependent mechanisms, causing dry eye from secondary Sjogren's syndrome. In less common cases, the space around the eye (the orbit) can be damaged by orbital masses, fluid accumulation, or inflammation. Anterior segment disease. The cornea can be damaged by recurrent corneal erosions, inflammation, and loss of epithelial tissue. The sclera, or white of the eye, can become inflamed enough to cause severe pain. In less common cases, conjunctivitis (pink eye) may be caused by lupus. Pain is generally a symptom of disease of the front part of the eye (anterior segment), while visual loss generally indicates problems with the back part of the eye (posterior segment). Posterior segment disease. Both the retina and choroid can be damaged, leading to loss of vision. Neuro-ophthalmic disease. Both the optic nerve and the nerves that control eye motion can be affected. Abnormalities of the pupil also indicate lupus disease activity. Ophthalmic disease as a side effect of treatment. Corticosteroids are commonly used to treat lupus. They are highly effective, but may cause cataracts and glaucoma.
Although there is no known drug to cure parvovirus, or parvo, supportive care to combat the virus' symptoms proves to be effective in some cases. Veterinary treatments, including the intravenous infusion of fluids to combat dehydration, diarrhea and vomiting, are used in most cases. In the more extreme cases of parvo, blood plasma transfusions and other intensive care treatments may be used. Parvo is a highly contagious disease most common to canines. This disease attacks rapidly reproducing cells and white blood cells. The most severely attacked cells are those within the intestinal tract. Young animals that contract parvovirus and survive its effects often have lifelong cardiac problems due to damage to the heart muscles. Parvo is transmitted between animals through contact with an infected animal's feces and is capable of living on surfaces for months. For this reason, sanitation of soiled areas is required to rid areas of the virus. Blood-tinged diarrhea, vomiting, lethargy and loss of appetite are among the first symptoms commonly seen in an animal infected with parvo. Because other health problems such as intestinal parasites can display symptoms similar to those of parvo, only veterinarians can diagnose the parvovirus through a test of the infected animal's stool. The good news is that parvo is effectively prevented through routine vaccinations of animals beginning when they are young.
Today’s Wonder of the Day was inspired by Alissya. Alissya Wonders, “How did the Mayans disappear?” Thanks for WONDERing with us, Alissya! When you think of the great cultures of the ancient world, what peoples come to mind? The Greeks and Romans probably spring to mind instantly. You might also think of the ancient Egyptians and Babylonians. The Maya Empire originated and thrived in the tropical area that we now call Guatemala. Over time, their empire expanded northward into the Yucatan Peninsula in modern-day Mexico. As a culture, the Maya were experts at many things: agriculture, language, mathematics, art, architecture, and even astronomy. The Maya reached their height of influence and power during the sixth century A.D. Mysteriously, only a few hundred years later, they were gone. By around 900 A.D., nearly all of their large stone cities had been abandoned. What happened to this great society? The disappearance of the Maya has intrigued scholars for hundreds of years. Recently, new scientific theories have emerged that might explain the demise of the Maya. Some scientists now believe the Maya probably contributed significantly to their own downfall. For hundreds of years, the Maya dominated southern Mexico and Central America. They were so successful as a culture that their population grew to be quite large. At their peak, the density of the Maya population was similar to modern-day Los Angeles, with over 2,000 people per square mile. How did the Maya support such a large and growing population? They were excellent farmers who excelled at growing corn. To grow enough crops, the Maya had to cut down large areas of forest to make room for more fields. They also used trees for building materials and as fuel for limestone kilns that produced the lime plaster they used to build the buildings and temples that remain to this day. Scientists believe the mass-scale deforestation that the Maya were responsible for ultimately led to their demise. The deforestation likely led to climate change in the form of rising temperatures and reduced rainfall. These factors combined to create conditions that led to a severe drought that lasted nearly a century. The effects of the drought, plus unsustainable farming practices, meant that the Maya no longer had the food and water they needed to survive. Large cities were eventually abandoned as people moved away to search for the resources they needed to survive. Today, scientists continue to study the Maya to learn important lessons that can still be applied in modern times. Discovering how the Maya contributed to their own demise can help modern scientists and farmers develop sustainable farming practices that will prevent large-scale disasters in the future.
(i) To be sensitised about the judicious use of energy from fossil fuels. (ii) To think and suggest ways of conserving fossil fuels. Fossil fuels, which are one of the basic sources of energy for all our activities, are exhaustible. For example, coal, kerosene, and LPG are sources of energy for cooking, heating and burning in our households. Petrol and diesel used for transport and in industry are also derived from fossil fuels. A large fraction of electricity is produced by burning coal. Fuel wood, though renewable, is fast depleting due to excessive use. By judicious use of these resources one can conserve fossil fuels and reduce the cost of living. 1. Visit at least 10 houses in your neighbourhood and find out the types of fuels used for cooking, heating and boiling of food and water. 2. Also find out the type and condition of the chulha (cooking stove), burner, oven, etc. used for the purpose. 3. Find out the average consumption per month in terms of money. 4. Find out the sources of leakage or wastage of energy, if any. 5. Record your observations. 6. Discuss with members of the families how the consumption of fuels can be reduced. There is a finite amount of fossil fuel found on Earth. In terms of years of production left: Oil = 45 years, Gas = 72 years, Coal = 252 years. This means our supply of non-renewable fossil fuels is very limited. Follow-up: 1. Suggest steps to reduce the consumption of electricity or other fuels in your school (especially where the mid-day meal is prepared in the school). 2. Encourage people to use solar water heaters and solar cookers. Prepare a report.
This shadow, like many imaged on the Moon's surface, is surrounded by a bright aureole. It is an example of the "Opposition Effect". Lunar soil has an open structure with many areas of deep shadow. But when looking in a direction directly away from the sun, shadows are hidden by the objects casting them. The antisolar point and the adjacent areas therefore appear brighter than elsewhere because they have more sunlit surfaces and less shadow. There are other factors that contribute to the glow: retroreflection by crystalline minerals and a phenomenon called coherent backscattering. The heiligenschein, also seen at the antisolar point, is a separate effect. The opposition effect was so named because it is substantially responsible for the brightness of the Moon and Mars at opposition, i.e. when they are near the antisolar point in our sky. The Moon's brightness at full is greater than can be accounted for by the increase in its illuminated area compared with its partial phases.
As individuals we all have various preferred ways of doing things. We may prefer to stay up late or get up early in the morning. We may prefer to text rather than call. We may prefer to play a game rather than just watch. Whatever our preference may be for a given situation, it is important to understand our own preferences. By understanding our particular preferences, we as learners can use these preferences to understand our learning strengths and limitations. This in turn will allow us to place ourselves in a better position to succeed academically. There are several resources that can help assess an individual’s preferences. Among these are the Myers-Briggs Type Indicator, Kolb’s Learning Style Inventory, Howard Gardner’s multiple intelligences, and the Herrmann Brain Dominance Instrument. People have a preferred learning style stemming from right mode/left mode preferences and general personality preferences. Learning style is an individual’s preferred way of learning. Different people learn in different ways. Each of us has a natural preference for the way in which we prefer to receive, process, and impart information. Some people tend to pick up information better when it is presented verbally, while others learn better when it is presented visually through pictures. The way we communicate with one another and interpret the communication depends on the way our brains process and/or think about the information. The way our brains process information depends on our brain dominance or preferred thinking style. Some individuals may think more creatively, while others think more analytically. Also, some may think in a more linear way, while others think more holistically. These preferred thinking styles also influence the way we learn. Personality indicators, such as the MBTI, can also indicate how you like to learn and interact with others. They can give an individual insight into how they may react in a certain situation, and can help determine if you like to jump into an activity or first watch to see how it is done. First, it is important for the individual to understand how they learn best. After you have an understanding of your preferred learning style, you want to utilize strategies to help enhance that preferred style or mode. The resources below provide tools to help assess and gauge an individual’s personal preferences. VAK (Visual, Auditory, & Kinesthetic) VARK (Visual, Aural, Read/Write, & Kinesthetic) Herrmann Brain Dominance Left Brain vs. Right Brain Theory Whole Brain Theory Myers-Briggs Type Indicator (MBTI) The Herrmann Brain Dominance Instrument is based on the idea that one side of the brain is dominant over the other. The two halves of the brain are then divided into a front and back half, making four sections in the brain. Individuals are dominant in one of these four areas, which is evident in their personality type. A: Left cerebral hemisphere – analytical B: Left limbic system – sequential C: Right limbic system – interpersonal D: Right cerebral hemisphere – imaginative The Myers-Briggs Personality Type Indicator is a self-inventory questionnaire designed to identify a person’s personality type, strengths, and preferences. The questionnaire was developed by Isabel Myers and her mother Katherine Briggs based on the teachings of Carl Jung and his theory of personality types. The MBTI assessment is designed to measure psychological preferences in how people perceive the world and make decisions. 
The Myers-Briggs Type Indicator categorizes results based upon four dimensions: Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving. Howard Gardner proposed his Theory of Multiple Intelligences. His theory suggests that all people have different kinds of “intelligences.” Gardner’s different “intelligences” represent talents, personality traits and abilities. Traditional formal education emphasizes the verbal-linguistic and logical-mathematical intelligences over the other areas of intellect. Gardner believed that there are several areas in which people can excel. These areas of Multiple Intelligences include: verbal-linguistic, logical-mathematical, visual-spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, and naturalist intelligence.
AUTOR is based on scientific research involving adults with a diagnosis of autism. The research studies were conducted at the University of Wolverhampton, UK. You can scroll down for a list of peer-reviewed publications, but before that we explain our main findings underpinning the development of AUTOR. You can also watch a TEDx video about AUTOR here: https://www.youtube.com/watch?v=5jNwceqD06g The main technology we used to study how people with autism read and use the web is called eye tracking. It follows the eyes of a person on the screen, and this way we are able to analyse the places they look at and for how long. Sometimes these “places” could be single words or entire phrases, which gives us valuable information about which parts of the text are difficult to understand. Below you can see an image of how eye tracking works. A person is reading a text about chemical elements. When they read a complex word such as “sulphur” or “chlorine”, their eyes spend a longer time processing the difficult word, which is why the dots on the complex words are bigger than the dots on the easy words. Some of the main findings from our eye tracking studies are listed below. - Linguistic factors which significantly affect the reading comprehension of people with autism include (but are not limited to): the number of words per sentence, the number of metaphors per text, the average number of words occurring before the main verb in a sentence, and the similarity of the syntactic structure in adjacent sentences. If a text contains fewer of these, then readers with autism will be able to comprehend it better. - Participants with autism spent significantly longer looking at images inserted in the text, compared to readers without autism. This means that attention works differently in autism and may have implications for how people with autism read. This is no surprise given all previous studies on this subject, but note that the inclusion of logos, advertisements or any other visual information which is not directly relevant to the meaning of the text will distract readers with autism more than it does other readers. - Images inserted into text have statistically significant effects on the subjective perception of readers with autism of how well they comprehend and memorise the text, but not on their actual comprehension and memorisation. This means that they strongly prefer to read texts with images, even though this may not actually help them comprehend and memorise the texts better. - Both photographs and symbols are suitable for adults with autism. Bear in mind that this may not be the case for children. - Participants with autism take significantly longer to read a text. This means that in the case of videos, you may have to allow longer times for the users to read the text or captions and to process the visual information. - Participants with autism found it significantly more difficult to find information on web pages compared to people without autism. We still do not know why this is the case, but we have found that the two groups search for information in a different way: participants with autism tend to “check” more elements of the web page before they arrive at the one they are looking for. Below you can see the scanpath of a person without autism (in green) and a person with autism (purple). We are currently conducting more experiments to find what helps readers with autism read better. Keep an eye on this page if you want to find out about our future results. 
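Several of the linguistic factors listed above are simple enough to measure automatically. The sketch below is not part of AUTOR; it is a minimal, assumption-laden Python illustration of just one of those factors (average words per sentence), using naive sentence splitting. The other factors (metaphor counts, words before the main verb, syntactic similarity of adjacent sentences) require a syntactic parser and are omitted here.

```python
import re

# A minimal sketch (not AUTOR itself) of one measurable linguistic factor:
# the average number of words per sentence. Sentence splitting here is naive.

def average_words_per_sentence(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

sample = ("Sulphur is a chemical element. It is yellow. "
          "It has a strong smell when it is burned.")
print(f"{average_words_per_sentence(sample):.1f} words per sentence")
```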
The research studies about the development of AUTOR are published in a number of peer-reviewed publications. You can see the list of publications already available below. Some of the publications on AUTOR are still under peer review, and this list will be regularly updated to feature the new content. - Yaneva, V., Temnikova, I. and Mitkov, R. 2016. A Corpus of Text Data and Gaze Fixations from Autistic and Non-autistic Adults. Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC), Portoroz, Slovenia, 25-28 May - Yaneva, V., Temnikova, I. and Mitkov, R. 2016. Evaluating the Readability of Text Simplification Output for Readers with Cognitive Disabilities. Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC), Portoroz, Slovenia, 25-28 May - Yaneva, V., Evans, R. and Temnikova, I. 2016. Predicting Reading Difficulty for Readers with Autism Spectrum Disorder. Proceedings of the Workshop on Improving Social Inclusion using NLP: Tools and Resources (ISI-NLP), held in conjunction with LREC 2016, Portoroz, Slovenia, 23 May - Yaneva, V., Temnikova, I. and Mitkov, R. 2015. Accessible Texts for Autism: An Eye-Tracking Study. ASSETS 2015. The 17th International ACM SIGACCESS Conference on Computers and Accessibility, Lisbon, Portugal, 26-28 October. pp. 49-57 - Yaneva, V. and Evans, R. 2015. Six Good Predictors of Autistic Text Comprehension. In: Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2015), Hissar, Bulgaria, 5-11 September 2015. pp. 697-706 - Yaneva, V. 2015. Easy-read Documents as a Gold Standard for Evaluation of Text Simplification Output. In: Proceedings of the Student Research Workshop at the International Conference on Recent Advances in Natural Language Processing (RANLP 2015), Hissar, Bulgaria, 5-11 September 2015. pp. 30-36 - Niculae, V. and Yaneva, V. 2013. Computational Considerations of Comparisons and Similes. In: Proceedings of the ACL 2013 Student Research Workshop, Sofia, Bulgaria, August 2013. pp. 89-95
How to Make a Good Science Project Here's some straightforward advice on making a good science project: Be fresh and original. You're on the right site for that! Solar and renewable energy are still out of the ordinary and few people know that you can make a lot of renewable energy projects out of simple materials at home. Clearly define your goal. Can you meet it realistically? Is it worthwhile? Make sure it's your own. It's okay to get some help and some advice, but be sure that you could repeat the entire experiment on your own and that you understand everything. Follow the scientific method if your project is an experiment. Here's a brief summary: - Ask a worthwhile question: What happens when you…? Why does…? How does this work? - Make a hypothesis: write down what you expect to happen in your experiment. - Plan your experiment: think of all the variables and ways to get varied results. What will you need? What will be your actions, step by step? What variables can you change? - Carry out and observe. What is the best way to record your observations? What data will you record? How will you record it? Is there a better way to do your experiment? What problems did you encounter? - Results: plot your data, graph it, or summarize it in some other clear form. Compare your observations. See if you can come up with a concise way (e.g. a mathematical formula) to describe them. - Conclusions: what actually happened (compared to what you expected)? Come up with the best explanations. How could you improve your experiment? Was it successful? Prove your point. Even if your experiment doesn't work out the way you hypothesized, be sure that something is proven and that you do have a proper conclusion. Have some fun. You have to care about your project. Does it have personal meaning to you? Do you think it's really interesting? Make sure it's clear and clean. Make everything as concise as possible. Eliminate all unnecessary data. All observations, notes, results, etc. should follow in order and never wander too far from your focus. Keep the concluding project presentation clean and neat. Take some time to anticipate questions from judges, teachers and others. Go back to the Solar4Scholars home page for a complete list of science projects.
Load factors: airplane operating limits. The preceding sections only briefly considered some of the practical points of the principles of flight. To become a pilot, a detailed technical course in the science of aerodynamics is not necessary. However, with responsibilities for the safety of passengers, the competent pilot must have a well-founded concept of the forces which act on the airplane, and the advantageous use of these forces, as well as the operating limitations of the particular airplane. Any force applied to an airplane to deflect its flight from a straight line produces a stress on its structure; the amount of this force is termed “load factor.” A load factor is the ratio of the total airload acting on the airplane to the gross weight of the airplane. For example, a load factor of 3 means that the total load on an airplane’s structure is three times its gross weight. Load factors are usually expressed in terms of “G”—that is, a load factor of 3 may be spoken of as 3 G’s, or a load factor of 4 as 4 G’s. It is interesting to note that in subjecting an airplane to 3 G’s in a pullup from a dive, one will be pressed down into the seat with a force equal to three times the person’s weight. Thus, an idea of the magnitude of the load factor obtained in any maneuver can be determined by considering the degree to which one is pressed down into the seat. Since the operating speed of modern airplanes has increased significantly, this effect has become so pronounced that it is a primary consideration in the design of the structure for all airplanes. With the structural design of airplanes planned to withstand only a certain amount of overload, a knowledge of load factors has become essential for all pilots. Load factors are important to the pilot for two distinct reasons: 1. Because of the obviously dangerous overload that is possible for a pilot to impose on the airplane structures; and 2. Because an increased load factor increases the stalling speed and makes stalls possible at seemingly safe flight speeds. Load factors in airplane design The answer to the question “how strong should an airplane be” is determined largely by the use to which the airplane will be subjected. This is a difficult problem, because the maximum possible loads are much too high for use in efficient design. It is true that any pilot can make a very hard landing or an extremely sharp pullup from a dive, which would result in abnormal loads. However, such extremely abnormal loads must be dismissed somewhat if airplanes are built that will take off quickly, land slowly, and carry a worthwhile payload. The problem of load factors in airplane design then reduces to that of determining the highest load factors that can be expected in normal operation under various operational situations. These load factors are called “limit load factors.” For reasons of safety, it is required that the airplane be designed to withstand these load factors without any structural damage. Although the Code of Federal Regulations requires that the airplane structure be capable of supporting one and one-half times these limit load factors without failure, it is accepted that parts of the airplane may bend or twist under these loads and that some structural damage may occur. This 1.5 value is called the “factor of safety” and provides, to some extent, for loads higher than those expected under normal and reasonable operation. 
However, this strength reserve is not something which pilots should willfully abuse; rather it is there for their protection when they encounter unexpected conditions. The above considerations apply to all loading conditions, whether they be due to gusts, maneuvers, or landings. The gust load factor requirements now in effect are substantially the same as those that have been in existence for years. Hundreds of thousands of operational hours have proven them adequate for safety. Since the pilot has little control over gust load factors (except to reduce the airplane’s speed when rough air is encountered), the gust loading requirements are substantially the same for most general aviation type airplanes regardless of their operational use. Generally, the gust load factors control the design of airplanes which are intended for strictly nonacrobatic usage. An entirely different situation exists in airplane design with maneuvering load factors. It is necessary to discuss this matter separately with respect to: (1) Airplanes which are designed in accordance with the Category System (i.e., Normal, Utility, Acrobatic); and (2) Airplanes of older design which were built to requirements which did not provide for operational categories. Airplanes designed under the Category System are readily identified by a placard in the cockpit, which states the operational category (or categories) in which the airplane is certificated. The maximum safe load factors (limit load factors) specified for airplanes in the various categories are as follows: Normal* category, +3.8 to –1.52; Utility category (mild acrobatics, including spins), +4.4 to –1.76; Acrobatic category, +6.0 to –3.0. (*For airplanes with gross weight of more than 4,000 pounds, the limit load factor is reduced.) To the limit loads given above, a safety factor of 50 percent is added. There is an upward graduation in load factor with the increasing severity of maneuvers. The Category System provides for obtaining the maximum utility of an airplane. If normal operation alone is intended, the required load factor (and consequently the weight of the airplane) is less than if the airplane is to be employed in training or acrobatic maneuvers, as they result in higher maneuvering loads. Airplanes that do not have the category placard are designs that were constructed under earlier engineering requirements in which no operational restrictions were specifically given to the pilots. For airplanes of this type (up to weights of about 4,000 pounds) the required strength is comparable to present-day utility category airplanes, and the same types of operation are permissible. For airplanes of this type over 4,000 pounds, the load factors decrease with weight, so that these airplanes should be regarded as being comparable to the normal category airplanes designed under the Category System, and they should be operated accordingly. Load factors in steep turns In a constant altitude, coordinated turn in any airplane, the load factor is the result of two forces: centrifugal force and gravity. Figure 1: Two forces cause load factor during turns. For any given bank angle, the rate of turn varies with the airspeed; the higher the speed, the slower the rate of turn. This compensates for added centrifugal force, allowing the load factor to remain the same. Figure 2 reveals an important fact about turns—that the load factor increases at a terrific rate after a bank has reached 45° or 50°. The load factor for any airplane in a 60° bank is 2 G’s. The load factor in an 80° bank is 5.76 G’s. 
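The bank-angle figures just quoted follow from the standard relationship for a coordinated, constant-altitude turn, in which the load factor depends only on the bank angle. The following is a minimal illustrative sketch in Python, not part of the original handbook text:

```python
import math

# Load factor in a coordinated, constant-altitude turn: n = 1 / cos(bank angle).
# The sample angles reproduce the figures quoted above (60 deg -> 2 G, 80 deg -> 5.76 G).

def load_factor(bank_deg: float) -> float:
    """Load factor (in G's) for a coordinated level turn at the given bank angle."""
    return 1.0 / math.cos(math.radians(bank_deg))

for bank in (30, 45, 60, 70, 80):
    print(f"{bank:2d} deg bank -> {load_factor(bank):.2f} G")

# The value grows without bound as the bank approaches 90 deg, which is why a
# 90-degree banked, constant-altitude coordinated turn is not possible.
```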
The wing must produce lift equal to these load factors if altitude is to be maintained. Figure 2: Angle of bank changes load factor. It should be noted how rapidly the line denoting load factor rises as it approaches the 90° bank line, which it reaches only at infinity. The 90° banked, constant altitude turn mathematically is not possible. True, an airplane may be banked to 90°, but not in a coordinated turn; an airplane which can be held in a 90° banked slipping turn is capable of straight knife-edged flight. At slightly more than 80°, the load factor exceeds the limit of 6 G’s, the limit load factor of an acrobatic airplane. For a coordinated, constant altitude turn, the approximate maximum bank for the average general aviation airplane is 60°. This bank and its resultant necessary power setting reach the limit of this type of airplane. An additional 10° bank will increase the load factor by approximately 1 G, bringing it close to the yield point established for these airplanes. Load factors and stalling speeds Any airplane, within the limits of its structure, may be stalled at any airspeed. When a sufficiently high angle of attack is imposed, the smooth flow of air over an airfoil breaks up and separates, producing an abrupt change of flight characteristics and a sudden loss of lift, which results in a stall. A study of this effect has revealed that the airplane’s stalling speed increases in proportion to the square root of the load factor. This means that an airplane with a normal unaccelerated stalling speed of 50 knots can be stalled at 100 knots by inducing a load factor of 4 G’s. If it were possible for this airplane to withstand a load factor of 9, it could be stalled at a speed of 150 knots. Therefore, a competent pilot should be aware of the following: • The danger of inadvertently stalling the airplane by increasing the load factor, as in a steep turn or spiral; and • That in intentionally stalling an airplane above its design maneuvering speed, a tremendous load factor is imposed. Reference to the charts in figures 2 and 3 will show that banking the airplane just beyond 72° in a steep turn produces a load factor of 3, and the stalling speed is increased significantly. If this turn is made in an airplane with a normal unaccelerated stalling speed of 45 knots, the airspeed must be kept above 75 knots to prevent inducing a stall. A similar effect is experienced in a quick pullup, or any maneuver producing load factors above 1 G. This has been the cause of accidents resulting from a sudden, unexpected loss of control, particularly in a steep turn or abrupt application of the back elevator control near the ground. Figure 3: Load factor changes stall speed. Since the load factor increases as the square of the increase in stalling speed, it may be realized that tremendous loads may be imposed on structures by stalling an airplane at relatively high airspeeds. The maximum speed at which an airplane may be stalled safely is now determined for all new designs. This speed is called the “design maneuvering speed” (VA) and is required to be entered in the FAA-approved Airplane Flight Manual or Pilot’s Operating Handbook (AFM/POH) of all recently designed airplanes. For older general aviation airplanes, this speed will be approximately 1.7 times the normal stalling speed. Thus, an older airplane which normally stalls at 60 knots must never be stalled at above 102 knots (60 knots x 1.7 = 102 knots). 
An airplane with a normal stalling speed of 60 knots will undergo, when stalled at 102 knots, a load factor equal to the square of the increase in speed or 2.89 G’s (1.7 x 1.7 = 2.89 G’s). (The above figures are an approximation to be considered as a guide and are not the exact answers to any set of problems. The design maneuvering speed should be determined from the particular airplane’s operating limitations when provided by the manufacturer.) Since the leverage in the control system varies with different airplanes and some types employ “balanced” control surfaces while others do not, the pressure exerted by the pilot on the controls cannot be accepted as an index of the load factors produced in different airplanes. In most cases, load factors can be judged by the experienced pilot from the feel of seat pressure. They can also be measured by an instrument called an “accelerometer,” but since this instrument is not common in general aviation training airplanes, the development of the ability to judge load factors from the feel of their effect on the body is important. A knowledge of the principles outlined above is essential to the development of this ability to estimate load factors. A thorough knowledge of load factors induced by varying degrees of bank, and the significance of design maneuvering speed (VA) will aid in the prevention of two of the most serious types of accidents: 1. Stalls from steep turns or excessive maneuvering near the ground; and 2. Structural failures during acrobatics or other violent maneuvers resulting from loss of control. Load factors and flight maneuvers Critical load factors apply to all flight maneuvers except unaccelerated straight flight where a load factor of 1 G is always present. Certain maneuvers considered in this section are known to involve relatively high load factors. TURNS—Increased load factors are a characteristic of all banked turns. As noted in the section on load factors in steep turns and particularly figures 2 and 3, load factors become significant both to flight performance and to the load on wing structure as the bank increases beyond approximately 45°. The yield factor of the average light plane is reached at a bank of approximately 70° to 75°, and the stalling speed is increased by approximately one-half at a bank of approximately 63°. STALLS—The normal stall entered from straight level flight, or an unaccelerated straight climb, will not produce added load factors beyond the 1 G of straight-and-level flight. As the stall occurs, however, this load factor may be reduced toward zero, the factor at which nothing seems to have weight; and the pilot has the feeling of “floating free in space.” In the event recovery is effected by snapping the elevator control forward, negative load factors, those which impose a down load on the wings and raise the pilot from the seat, may be produced. During the pullup following stall recovery, significant load factors sometimes are induced. Inadvertently these may be further increased during excessive diving (and consequently high airspeed) and abrupt pullups to level flight. One usually leads to the other, thus increasing the load factor. Abrupt pullups at high diving speeds may impose critical loads on airplane structures and may produce recurrent or secondary stalls by increasing the angle of attack to that of stalling. 
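To tie together the stall-speed figures quoted above, here is a small illustrative sketch (not from the handbook) of the two rules of thumb: the accelerated stall speed grows with the square root of the load factor, and for older designs the design maneuvering speed is roughly 1.7 times the unaccelerated stall speed (since 1.7 squared is about 2.89 G). The 50-knot and 60-knot stall speeds are simply the examples used in the text, not data for any particular airplane.

```python
import math

# Accelerated stall speed: Vs_accel = Vs * sqrt(n); equivalently, stalling at
# speed V implies a load factor n = (V / Vs)^2. Example speeds are from the text.

def accelerated_stall_speed(vs_knots: float, load_factor_g: float) -> float:
    return vs_knots * math.sqrt(load_factor_g)

def load_factor_at_stall(vs_knots: float, stall_speed_knots: float) -> float:
    return (stall_speed_knots / vs_knots) ** 2

print(accelerated_stall_speed(50, 4))   # 100.0 kt: a 50-kt airplane stalls at 100 kt under 4 G
print(accelerated_stall_speed(50, 9))   # 150.0 kt under 9 G
va_rule_of_thumb = 1.7 * 60             # about 102 kt for a 60-kt stall speed
print(round(load_factor_at_stall(60, va_rule_of_thumb), 2))  # roughly 2.89 G at that speed
```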
As a generalization, a recovery from a stall made by diving only to cruising or design maneuvering airspeed, with a gradual pullup as soon as the airspeed is safely above stalling, can be effected with a load factor not to exceed 2 or 2.5 G’s. A higher load factor should never be necessary unless recovery has been effected with the airplane’s nose near or beyond the vertical attitude, or at extremely low altitudes to avoid diving into the ground. SPINS—Since a stabilized spin is not essentially different from a stall in any element other than rotation, the same load factor considerations apply as those that apply to stall recovery. Since spin recoveries usually are effected with the nose much lower than is common in stall recoveries, higher airspeeds and consequently higher load factors are to be expected. The load factor in a proper spin recovery will usually be found to be about 2.5 G’s. The load factor during a spin will vary with the spin characteristics of each airplane but is usually found to be slightly above the 1 G of level flight. There are two reasons this is true: 1. The airspeed in a spin is very low, usually within 2 knots of the unaccelerated stalling speeds; and 2. The airplane pivots, rather than turns, while it is in a spin. HIGH-SPEED STALLS—The average light plane is not built to withstand the repeated application of load factors common to high-speed stalls. The load factor necessary for these maneuvers produces a stress on the wings and tail structure, which does not leave a reasonable margin of safety in most light airplanes. The only way this stall can be induced at an airspeed above normal stalling involves the imposition of an added load factor, which may be accomplished by a severe pull on the elevator control. A speed of 1.7 times stalling speed (about 102 knots in a light airplane with a stalling speed of 60 knots) will produce a load factor of 3 G’s. Further, only a very narrow margin for error can be allowed for acrobatics in light airplanes. To illustrate how rapidly the load factor increases with airspeed, a high-speed stall at 112 knots in the same airplane would produce a load factor of 4 G’s. CHANDELLES AND LAZY EIGHTS—It would be difficult to make a definite statement concerning load factors in these maneuvers as both involve smooth, shallow dives and pullups. The load factors incurred depend directly on the speed of the dives and the abruptness of the pullups. Generally, the better the maneuver is performed, the less extreme will be the load factor induced. A chandelle or lazy eight, in which the pullup produces a load factor greater than 2 G’s will not result in as great a gain in altitude, and in low-powered airplanes it may result in a net loss of altitude. The smoothest pullup possible, with a moderate load factor, will deliver the greatest gain in altitude in a chandelle and will result in a better overall performance in both chandelles and lazy eights. Further, it will be noted that recommended entry speed for these maneuvers is generally near the manufacturer’s design maneuvering speed, thereby allowing maximum development of load factors without exceeding the load limits. ROUGH AIR—All certificated airplanes are designed to withstand loads imposed by gusts of considerable intensity. Gust load factors increase with increasing airspeed and the strength used for design purposes usually corresponds to the highest level flight speed. 
In extremely rough air, as in thunderstorms or frontal conditions, it is wise to reduce the speed to the design maneuvering speed. Regardless of the speed held, there may be gusts that can produce loads which exceed the load limits. Most airplane flight manuals now include turbulent air penetration information. Operators of modern airplanes, capable of a wide range of speeds and altitudes, are benefited by this added feature both in comfort and safety. In this connection, it is to be noted that the maximum “never-exceed” placard dive speeds are determined for smooth air only. High-speed dives or acrobatics involving speed above the known maneuvering speed should never be practiced in rough or turbulent air. In summary, it must be remembered that load factors induced by intentional acrobatics, abrupt pullups from dives, high-speed stalls, and gusts at high airspeeds all place added stress on the entire structure of an airplane. Stress on the structure involves forces on any part of the airplane. There is a tendency for the uninformed to think of load factors only in terms of their effect on spars and struts. Most structural failures due to excess load factors involve rib structure within the leading and trailing edges of wings and tail group. The critical area of fabric-covered airplanes is the covering about one-third of the chord aft on the top surface of the wing. The cumulative effect of such loads over a long period of time may tend to loosen and weaken vital parts so that actual failure may occur later when the airplane is being operated in a normal manner. The flight operating strength of an airplane is presented on a graph whose horizontal scale is airspeed and whose vertical scale is load factor. Figure 4: Typical Vg diagram. The diagram is called a Vg diagram—velocity versus “g” loads or load factor. Each airplane has its own Vg diagram which is valid at a certain weight and altitude. The lines of maximum lift capability (curved lines) are the first items of importance on the Vg diagram. The subject airplane in the illustration is capable of developing no more than one positive “g” at 62 m.p.h., the wing level stall speed of the airplane. Since the maximum load factor varies with the square of the airspeed, the maximum positive lift capability of this airplane is 2 “g” at 92 m.p.h., 3 “g” at 112 m.p.h., 4.4 “g” at 137 m.p.h., and so forth. Any load factor above this line is unavailable aerodynamically; i.e., the subject airplane cannot fly above the line of maximum lift capability (it will stall). Essentially the same situation exists for negative lift flight with the exception that the speed necessary to produce a given negative load factor is higher than that to produce the same positive load factor. If the subject airplane is flown at a positive load factor greater than the positive limit load factor of 4.4, structural damage will be possible. When the airplane is operated in this region, objectionable permanent deformation of the primary structure may take place and a high rate of fatigue damage is incurred. Operation above the limit load factor must be avoided in normal operation. There are two other points of importance on the Vg diagram. First is the intersection of the positive limit load factor and the line of maximum positive lift capability. The airspeed at this point is the minimum airspeed at which the limit load can be developed aerodynamically. 
Any airspeed greater than this provides a positive lift capability sufficient to damage the airplane; any airspeed less does not provide positive lift capability sufficient to cause damage from excessive flight loads. The usual term given to this speed is “maneuvering speed,” since consideration of subsonic aerodynamics would predict minimum usable turn radius to occur at this condition. The maneuver speed is a valuable reference point, since an airplane operating below this point cannot produce a damaging positive flight load. Any combination of maneuver and gust cannot create damage due to excess airload when the airplane is below the maneuver speed. Next is the intersection of the negative limit load factor and the line of maximum negative lift capability. Any airspeed greater than this provides a negative lift capability sufficient to damage the airplane; any airspeed less does not provide negative lift capability sufficient to damage the airplane from excessive flight loads. The limit airspeed (or redline speed) is a design reference point for the airplane—the subject airplane is limited to 225 m.p.h. If flight is attempted beyond the limit airspeed, structural damage or structural failure may result from a variety of phenomena. Thus, the airplane in flight is limited to a regime of airspeeds and g’s which do not exceed the limit (or redline) speed, do not exceed the limit load factor, and cannot exceed the maximum lift capability. The airplane must be operated within this “envelope” to prevent structural damage and ensure that the anticipated service life of the airplane is obtained. The pilot must appreciate the Vg diagram as describing the allowable combination of airspeeds and load factors for safe operation. Any maneuver, gust, or gust plus maneuver outside the structural envelope can cause structural damage and effectively shorten the service life of the airplane. This concludes the Load Factors page. You can now go on to the Weight and Balance page or try the FAA Principles of Flight Test.
Audition: the sense or act of hearing.
Frequency: the number of complete wavelengths that pass a point in a given time.
Pitch: a tone's experienced highness or lowness; depends on frequency.
Middle ear: the chamber between the eardrum and cochlea containing three tiny bones (hammer, anvil, and stirrup) that concentrate the vibrations of the eardrum on the cochlea's oval window.
Cochlea: a coiled, bony, fluid-filled tube in the inner ear through which sound waves trigger nerve impulses.
Inner ear: the innermost part of the ear, containing the cochlea, semicircular canals, and vestibular sacs.
Place theory: in hearing, the theory that links the pitch we hear with the place where the cochlea's membrane is stimulated.
Frequency theory: in hearing, the theory that the rate of nerve impulses traveling up the auditory nerve matches the frequency of a tone, thus enabling us to sense its pitch.
Conduction hearing loss: hearing loss caused by damage to the mechanical system that conducts sound waves to the cochlea.
Sensorineural hearing loss: hearing loss caused by damage to the cochlea's receptor cells or to the auditory nerves; also called nerve deafness.
Cochlear implant: a device for converting sounds into electrical signals and stimulating the auditory nerve through electrodes threaded into the cochlea.
Kinesthesis: the system for sensing the position and movement of individual body parts.
Vestibular sense: the sense of body movement and position, including the sense of balance.
Gate-control theory: the spinal cord has a neurological "gate" that blocks pain signals or allows them to go to the brain. The "gate" is opened by the activity of pain signals traveling up small nerve fibers and is closed by activity in larger fibers or by information coming from the brain.
Sensory interaction: the principle that one sense may influence another, as when the smell of food influences its taste.
Gestalt: an organized whole. Gestalt psychologists emphasized our tendency to integrate pieces of information into meaningful wholes.
Figure-ground: the organization of the visual field into objects (the figures) that stand out from their surroundings (the ground).
Grouping: the perceptual tendency to organize stimuli into coherent groups.
Depth perception: the ability to see objects in three dimensions although the images that strike the retina are two-dimensional; allows us to judge distance.
Visual cliff: a lab device for testing depth perception in infants and young animals.
Binocular cues: depth cues, such as retinal disparity, that depend on the use of two eyes.
Retinal disparity: a binocular cue for perceiving depth. By comparing images from the retinas in the two eyes, the brain computes distance: the greater the disparity (difference) between the two images, the closer the object.
Monocular cues: depth cues, such as interposition and linear perspective, available to either eye alone.
Phi phenomenon: an illusion of movement created when two or more adjacent lights blink on and off in quick succession.
Perceptual constancy: perceiving objects as unchanging (having consistent shapes, size, lightness, and color) even as illumination and retinal images change.
Color constancy: perceiving familiar objects as having consistent color, even if changing illumination alters the wavelengths reflected by the object.
Perceptual adaptation: in vision, the ability to adjust to an artificially displaced or even inverted visual field.
Perceptual set: a mental predisposition to perceive one thing and not another.
Extrasensory perception (ESP): the controversial claim that perception can occur apart from sensory input; includes telepathy, clairvoyance, and precognition.
Parapsychology: the study of paranormal phenomena, including ESP and psychokinesis.
Scots Pine - Pinus sylvestris - sometimes known as Scotch pine

This is one of only three truly native conifers in the British Isles (the others being yew and juniper). It is also found from Spain east to Siberia, and was introduced into southern Canada and the northeastern USA. The preferred habitat is light sandy soil, full sun and not too wet conditions, away from salt-laden winds. The tree grows on average 70 feet high, although it has been known to reach 120 feet. The trunk is long and straight, with grey/reddish-brown bark which forms in deeply etched fissures and scaly plates. It has pairs of blue-green needles, 1.5-3.5 inches long and circular in cross-section, borne on stout dark yellow twigs. The cones are 2-3 inches in length, either growing singly or in pairs, but not many are produced until the tree is at least 60 years old. It flowers in May and June, but the seeds are not fully ripe until the October of the next year, and are dispersed from December to March.

The wood of the Scots Pine is strong and lightweight and is used for railway sleepers, telegraph poles and general building and fencing work. Pitch, turpentine, resin and medicinal oil are all extracted from this tree. The pine cones can be used to make a honey-yellow dye for wool, by boiling for several hours with salt.

The ancient Druids used to burn a log of Scots Pine on the winter solstice to celebrate the passing of the seasons. Also at this time, glades of these pines were decorated with shiny objects and stars to represent the Divine Light. This is thought to be the origin of the Yule log and the decoration of Christmas trees. The Gaelic name for the Scots Pine was Guibhas (pronounced goo-ass), and place names such as Dalguise, Kingussie and Goose Island may be derived from the presence of the tree rather than from the word 'goose' as is often thought. There is a superstition that pine trees should not be felled during the waning of the moon if they were to be used for ship-building, because the wood would rot. It is now recognized that the moon does influence the flow of sap in many plants and trees, and it may be that the resin content varies accordingly, thus altering the durability of the wood. Legend has it that the Scots Pine was used in the Highlands as a marker, both of property boundaries and of the graves of warriors and chiefs. In the south it marked drove roads and meadows where drovers could rest with their herds for the night. Throughout mythology the pine, as an evergreen, has been used to symbolise immortality.
Why and how to use a microfluidic bubble trap?

Gas bubbles present in a liquid sample are a common problem encountered in numerous microfluidic experiments, and their removal from the sample of interest is quite often a major challenge for microfluidicists. Indeed, gas bubbles circulating through a microfluidic system can damage equipment or the biological sample of interest and cause experimental errors:
- Bubbles can cause errors in sensors and chromatography columns, mainly by letting components run dry.
- Bubbles inside a biological reactor often increase shear stress and induce cytotoxicity, as cell membranes stretch under the force of the liquid-air interface.
- Bubbles present inside a sample can also lead to pipetting and sampling errors.

Therefore, if these are situations you would like to avoid, you should consider using a microfluidic bubble trap, also known as a debubbler. The bubble trap we invite you to discover uses a micro-porous PTFE membrane. When a fluid containing gas bubbles flows through the trap, the bubbles are expelled through the hydrophobic membrane, which allows no aqueous liquid to leak through. It is possible to get rid of bubbles in a fluid even with only a small pressure available, but the liquid sample has to be pushed towards the bubble trap inlet rather than aspirated from the trap outlet (aspiration would generate new bubbles). This trap can be used in line to remove bubbles and to secure any application where the presence of bubbles would negatively impact results or damage the sample.

This bubble trap can be used in two modes: a passive mode, and an active mode in which a vacuum line is added. In the active mode, the vacuum outlet of a pressure generator, such as the OB1 Mk3, can be connected to maximize the bubble trap's efficiency. The trap is typically used in the range of 0.5-2.0 ml/min, but flow rates of up to 60 ml/min can be achieved when a vacuum line is used. The following video shows in-line bubble removal performed with the bubble trap in passive mode.
Eastern Tent Caterpillars by Don Janssen, Extension Educator

The caterpillar colony uses the nest for protection against weather and predators. They venture out of the nest several times a day to feed on tree leaves. As the caterpillars increase in size, they add additional layers of silk to the nest. Small trees hosting large numbers of caterpillars can easily be defoliated, in which case the caterpillars may move to another tree, make a new nest and continue to feed. Full-grown eastern tent caterpillars may reach 2 or more inches in length. They hatch as leaves are opening in spring from eggs laid the previous summer on tree stems and branches. By mid-June or so, they are ready to spin cocoons for their transformation into adult moths. Adults emerge around the end of June, mate, lay eggs and die. Eggs are laid in a dark, shiny mass on small twigs of the tree the caterpillars fed on or similar trees nearby. Those eggs overwinter and hatch the next spring to start the cycle again. The mature eastern tent caterpillar is a large, hairy, dark-colored caterpillar with a yellow stripe down the center of its back. It's often misidentified as a gypsy moth caterpillar, especially when it's left the nest and is looking for a place to make its cocoon. They're easy to tell apart, however. The eastern tent caterpillar has a stripe on its back; the gypsy moth caterpillar, though it's also large, hairy and dark-colored, has spots on its back -- five pairs of blue spots and six pairs of red ones. Another similar species is the forest tent caterpillar, which has a series of yellowish keyhole-shaped spots on its back. Defoliation by eastern tent caterpillar is rarely a concern unless it occurs in valuable fruit or landscape trees, especially if it occurs year after year. Trees that lose more than 60 percent of their leaves will produce a new set. Repeated use of stored energy in this way may weaken a tree and leave it more vulnerable to other pests, diseases and environmental stresses that ordinarily wouldn't be a problem for a healthy tree. Trees that lose less than 40 percent of their foliage to leaf-eating pests usually suffer no ill effects. Homeowners who don't want to take a chance with valuable fruit trees or ornamentals can remove the tents by hand as soon as they see them and/or spray the foliage with Bacillus thuringiensis, a bacterial disease of caterpillars formulated as a pesticide under a number of trade names. Removing and destroying the shiny, dark-colored, foam-like egg masses from trees anytime after midsummer and before early spring is another effective approach.
MLA Format Paper

An MLA format paper follows one of the two major citation styles. It is normally used for topics in the humanities, arts, and social sciences, while the APA format is used primarily for science-based topics, although the two are sometimes interchanged. So what is an MLA format paper?

When it comes to in-text citation, you only need to capture the part of the document that you wish to use as a reference. In MLA format, you enclose that part in quotation marks and then give the author's last name and the page number of the work in parentheses, like this: (Lougen 8). Writing a research paper will always involve citation, to make sure the reference materials are acknowledged. You should keep this in mind every time you write a research paper. For an MLA format paper, the instructions for in-text citation are pretty simple.

Now, you should also write the Works Cited page. Whether you are writing an MLA term paper or an MLA research paper, you always need to apply the steps of citation and bibliography writing. What does the Works Cited page contain? Each entry includes the author's name, the title of the work, the publisher, and the year of publication, plus page numbers when you are citing part of a larger work.
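For illustration only (the first name, title, publisher, and year below are hypothetical, not taken from this text), a book entry on the Works Cited page in current MLA style would look like this:

    Lougen, Colleen. The Example Book of Citation. Example Press, 2020.

The matching in-text citation is the one shown above, (Lougen 8), where 8 is the page being quoted.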
To reduce the impact of collisions on network performance, Ethernet uses an algorithm called CSMA with Collision Detection (CSMA/CD).

CSMA/CD is a protocol in which the station senses the carrier or channel before transmitting a frame, just as in persistent and non-persistent CSMA. If the channel is busy, the station waits. While transmitting, the station also listens to the communication medium to make sure its frame has not collided with a frame sent by another station. If a collision is detected, the sender immediately aborts the transmission. This limits the duration of collisions: the station does not waste time sending a complete packet once a collision has been detected. After a collision, the transmitter waits for the channel to become idle again and then defers for a random time before retrying; after each successive collision, the window from which this random delay is drawn is roughly doubled. This is called exponential back-off. In fact, the collision window is simply doubled after each collision (unless it has already reached a maximum); once a packet is transmitted successfully, the window returns to its original size. Again, this is what we do naturally in a meeting room: if several people start speaking at exactly the same time, they realize it immediately (since they listen while they speak) and interrupt themselves without completing their sentences. After a while, one of them speaks again. If a new collision occurs, the two are interrupted again and tend to wait a little longer before speaking again.

Frame format of CSMA/CD

The frame format specified by the IEEE 802.3 standard contains the following fields.
1. Preamble: Seven bytes (56 bits) that provide bit synchronization. It consists of alternating 0s and 1s. The purpose is to provide an alert and a timing pulse.
2. Start Frame Delimiter (SFD): A one-byte field with the unique pattern 10101011. It marks the beginning of the frame.
3. Destination Address (DA): A six-byte field that contains the physical address of the packet's destination.
4. Source Address (SA): Also a six-byte field; it contains the physical address of the source or of the last device to forward the packet (the most recent router on the path to the receiver).
5. Length: This two-byte field specifies the length, or number of bytes, of the data field.
6. Data: This field can be 46 to 1500 bytes, depending upon the type of frame and the length of the information field.
7. Frame Check Sequence (FCS): This four-byte field contains a CRC for error detection.

The flow chart for the CSMA/CD protocol can be summarized as follows.
• The station that has a ready frame sets the back-off parameter to zero.
• Then it senses the line using one of the persistent strategies.
• It then sends the frame. If there is no collision for a period corresponding to one complete frame, the transmission is successful.
• Otherwise, the station sends a jam signal to inform the other stations of the collision.
• The station then increments the back-off counter, waits for a random back-off time, and sends the frame again.
• If the back-off has reached its limit, the station aborts the transmission.
• CSMA/CD is used for traditional Ethernet.
• CSMA/CD is an important protocol. IEEE 802.3 (Ethernet) is an example of CSMA/CD. It is an international standard.
• The MAC sublayer protocol does not guarantee reliable delivery. Even in the absence of a collision, the receiver may not have copied the frame correctly.
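As a rough illustration of the exponential back-off rule described above, here is a minimal Python sketch (not part of the original text). The slot time and the attempt limits follow the classic 10 Mb/s Ethernet parameters; the channel object and its methods are hypothetical placeholders used only to show the shape of the transmit loop, not a real API.

    import random

    SLOT_TIME = 51.2e-6      # seconds; slot time of classic 10 Mb/s Ethernet
    BACKOFF_CEILING = 10     # the collision window stops doubling after 10 collisions
    MAX_ATTEMPTS = 16        # the station gives up after 16 failed attempts

    def backoff_delay(collisions):
        # Truncated binary exponential back-off: after the n-th collision,
        # wait a random whole number of slot times in 0 .. 2**min(n, 10) - 1.
        k = min(collisions, BACKOFF_CEILING)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME

    def send_with_csma_cd(channel, frame):
        # Sketch of the transmit loop: sense the carrier, transmit while
        # listening, and back off after each detected collision.
        # `channel` and its methods are hypothetical, for illustration only.
        for collisions in range(MAX_ATTEMPTS):
            channel.wait_until_idle()              # carrier sense
            if channel.transmit(frame):            # True if no collision was detected
                return True                        # success; a new frame starts with a fresh window
            channel.send_jam_signal()              # tell the other stations about the collision
            channel.sleep(backoff_delay(collisions + 1))
        return False                               # back-off limit reached; abort

Because each new frame starts a fresh call with the collision counter at zero, the window implicitly returns to its original size after a successful transmission, matching the behaviour described above.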
Overview of Coma A coma is a deep state of unconsciousness, during which an individual is not able to react to his or her environment. Someone in a coma cannot consciously respond to stimulation. Coma can be caused by an underlying illness, or it can result from head trauma. A comatose person is still very much alive, but he or she is not simply asleep. The brain wave activity in a comatose person is very different from that of a sleeping person; you can wake up a sleeping person, but you can't wake a person in a coma. A coma usually does not last for more than a few weeks. Many people recover their full physical and mental functioning when they emerge from a coma. Others require various forms of therapy to recover as much functioning as possible. Some patients never recover anything but very basic body functions. Sometimes, following a coma, a person may enter what is known as a persistent vegetative state; patients in persistent vegetative state have lost all cognitive neurological function but are still able to breathe and may exhibit various spontaneous movements. They may even be awake and appear to be normal but, because the cognitive part of their brain no longer functions, they are not able to respond to their environment. A vegetative state can last for years. There are other terms, in addition to coma and vegetative state, that are used to describe varying levels of unconsciousness and a person's ability to respond to stimuli. These include stupor, in which a person is unconscious but will eventually respond to repeated, vigorous stimulation; and obtundation and lethargy, which are used to describe a person who is not entirely unconscious but does not respond to stimuli. Usually, coma and other altered states of unconsciousness are considered neurological emergencies, and actions need to be taken quickly to avoid permanent damage.
Perhaps it's no surprise that sperm often swim in circles. They must rely on a tail smaller than the width of a human hair to conquer an obstacle course with 200-million-to-one odds. Until now, scientists blamed sperms' circular locomotion on their erratic, asymmetric tail propulsion. But where they swim might be more important than how they swim, according to a study published online tomorrow in the Journal of the Royal Society Interface. Researchers simulated sperm cell migration and found that if the surrounding fluids (such as semen or vaginal secretions) are viscous enough, the sperm tails will buckle in the current, trapping the cells into a circular loop they can't escape even if the cells swim perfectly. Knowing how to trap sperm in the right places could improve the effectiveness of in vitro fertilization and other reproductive technologies, experts say. See more ScienceShots.
Heat waves. Drought. Flooding. Cold spells. Wildfires. The climate system is changing before our very eyes, and there is no more glaring proof than the record-shattering loss of Arctic sea ice this summer. The National Snow and Ice Data Center announced Wednesday that the sea ice covering the Arctic Ocean has smashed the previous record minimum extent set in 2007 by a staggering 18 percent. The impacts of rising temperatures and melting ice extend beyond the far north to us in the United States, as we are poised to feel the weather-related backlash.

The ice cover, now only half of what it was a few decades ago, is a stunning visual demonstration of the effects that increasing greenhouse gases, and the resulting warming of the Earth, are having on the climate system. Fossil fuels – such as oil, coal, and natural gas – are the main source of these added greenhouse gases, as they're burned to provide the energy that heats our homes, lights our streets, and runs our vehicles. It now appears, however, that a gradual warming may not be the primary concern, as the gases may also fuel extreme weather around the world.

Since the fossil-fuel revolution after World War II, Arctic temperatures have increased at twice the global rate, illustrating a phenomenon called Arctic amplification. Thus, sea ice has melted at an unprecedented rate and is now caught in a vicious cycle known as the ice-albedo feedback: as sea ice retreats, sunshine that would have been reflected into space by the bright white ice is instead absorbed by the ocean, causing waters to warm and melt even more ice. As temperatures over the Arctic Ocean fall with the approach of winter, the extra energy that was absorbed during summer must be released back into the atmosphere before the water can cool to freezing temperatures. Essentially, this loads the atmosphere with a new source of energy—one that affects weather patterns, both locally and on a larger scale. In spring, a similar phenomenon also occurs, but it involves snow cover on northern land areas. Snow has been melting progressively earlier each year; this past June and July it disappeared earlier than ever before. The underlying soil is then exposed to strong spring sun, which allows it to dry and warm earlier – contributing to Arctic amplification in summer months.

The difference in temperature between the Arctic and areas to the south is what drives the jet stream, a fast-moving river of air that encircles the northern hemisphere. As the Arctic warms faster, this temperature difference weakens, as does the west-to-east wind of the jet stream. Just as a river of water tends to meander when it reaches the gentle slopes of coastal plains, a weaker jet stream tends to have steeper north-south waves. Arctic amplification also stretches the northern tips of the waves farther northward, which favors further meandering. Meteorologists know that steeper waves are slower to progress from west to east. The weather we experience at mid-latitudes is largely dictated by these waves in the jet stream. The slower the waves move, the longer the weather associated with them will persist. Essentially, "hot," "dry," "cold," and "rainy" are all terms to describe very normal weather conditions. It's only when those conditions persist in one area for too long that they are dubbed with the names of their extreme alter egos: heat waves, drought, cold spells, and floods. And these kinds of extreme events are precisely what we've seen more of in recent years.
Global warming now has a face and a fingerprint that directly touch each of our lives. Rather than just a gradual increase in temperature, we can recognize its influence in a shift toward more extreme weather events. A warmer atmosphere also means a moister atmosphere, so any given storm will have more moisture and energy to work with, increasing the chances of flooding or heavy snows. Arctic amplification adds another mechanism to the mix, making extreme weather more likely. The loss of ice and snow in the far north may load the dice for “stuck” weather patterns, compounding potential risks for our economy, our health, and our security. Even though sea ice shattered its previous record minimum, we cannot yet predict what sort of weather this winter will bring to a particular region of the United States or northern hemisphere. We cannot pinpoint which part of the world will see frigid temperatures, heavy snowfall, or perhaps abnormally mild conditions next season. It is clear, however, that more accurate advanced warning is needed to help vulnerable communities prepare for the extreme conditions in a warming world. We must continue to invest—both financially and intellectually—in research that expands observations, improves computer prediction, and delivers relevant information to decision-makers. Because at this point, I can only say that I think it’s going to be a very interesting winter. Jennifer Francis is a research professor at the Institute of Marine and Coastal Sciences, Rutgers University
If your job is in manufacturing, medicine, mining, automotive repair, underwater or space exploration, maybe even elder care, some of your coworkers are probably semi-autonomous programmable mechanical machines—in a word, robots. But humans and robots don't understand each other well, and they work very differently: a robot does exactly what it's told, over and over, taking the same amount of time every time, while a human acts with intention, deliberation, and variation. Their strengths and weaknesses can be complementary, but only if they each have good models of how the other works. In a breakthrough experiment, the Interactive Robotics Group at MIT discovered that cross-training, which is swapping jobs with someone else on your team to help everyone understand the work better, works even when your coworker doesn't have a mind. In short, when humans and robots model doing each others' job they end up working together more smoothly. In order for this to work, researchers first had to program robots to learn by watching humans instead of just through feedback. Humans were paired with a robotic arm, named Abbie, to practice placing the screws and screwing them in—in a virtual environment. There were two basic rhythms to the task: either have Abbie fasten the screw right after it was placed (1/2, 1/2, 1/2), or place all three screws and then have Abbie screw in the batch of three (1-2-3, 1-2-3). After the humans modeled their actions, and the robots practiced placing the screws, the team moved to a real environment where humans placed screws and Abbie screwed them in. The outcome was fascinating. In the control group, the humans and robots move like awkward dance partners. The human isn't sure where the robot will go next, and doesn't want a screw driven through her hand, so she spends more time waiting around while Abbie is moving. The team that had cross-trained understood each others' preferences much better. They spent 71 percent more time moving at the same moment, a sign of better coordination—like a well-oiled machine, you might say. The humans spent 41 percent less time waiting around for the robot's next action. The humans rated the robot's performance higher, and the robots had a lower "entropy level," meaning they spent less time in uncertainty about what the humans would do next. "What we suspect, and are planning to follow up on, is that the real benefit is coming from adaptation on the human side," said MIT professor Julie Shah, who leads the Interactive Robotics Group. "The person is doing actions in a more repeatable way, developing a better understanding of what the robot can do." Cross-training is a technique the military uses to improve teamwork. How could it work for you? [Robot and Human Image: Daniel Schweinert via Shutterstock]
Problem: What molecule is at the center of respiration?
Oxygen.

Problem: What are the three main components of the respiratory pathway?
Glycolysis, the citric acid cycle, and oxidative phosphorylation.

Problem: What general role does oxygen play in respiration?
Oxygen helps in respiration by easing the breakdown of food molecules through oxidation.

Problem: Fill in the blanks. Respiration that involves oxygen is called __________ respiration, while respiration that does not involve oxygen is called __________ respiration.
Aerobic; anaerobic.
This is the VOA Special English Environment Report. Every year in the United States people watch for dangerous windstorms called tornadoes. A tornado is a violently turning pipe of air suspended from a dense cloud. It forms when winds blowing in separate directions meet in the clouds and begin to turn in circles. Warm air rising from below causes the wind pipe to reach toward the ground. It is not officially a tornado unless it has touched the ground. A tornado can destroy anything in its path. Tornadoes come in many sizes. They can be thin pipes with openings on the ground just a few meters across. Or they can be huge pipes that stretch as far as one-and-a-half kilometers. A tornado's size is not linked to its strength. Large tornadoes can be very weak, and some of the smallest can be the most damaging. No matter how big or small, however, the strongest winds on Earth are in tornadoes. Tornadoes are most common in the central part of the United States called "Tornado Alley." This area stretches south from western Iowa down to Texas. Weather experts have done a lot of research in Tornado Alley. They have discovered that unlike severe ocean storms, tornadoes can strike without warning. Usually weather experts can report days before a severe ocean storm hits. However, tornadoes can form within minutes. There is almost no time for public warnings before they strike. The force of a tornado is judged not by its size, but by the total damage caused to human-made structures. The Fujita Scale is the device used to measure tornadoes. It is named after Ted Fujita. He was a University of Chicago weather expert who developed the measure in the nineteen-seventies. There are six levels on the measure. Tornadoes that cause only light damage are an F-zero. The ones with the highest winds that destroy well-built homes and throw vehicles more than one-hundred meters are an F-five. In the nineteen-sixties, about six-hundred-fifty tornadoes were reported each year in the United States. Now, more than one-thousand tornadoes are seen yearly. Weather experts do not think the increase is caused by climate changes. Instead, they say Americans are moving away from cities into more open farming areas. This means that they see and report tornadoes more often. This VOA Special English Environment Report was written by Jill Moss.
AS Chemistry - Redox Reactions and Group 2 Elements

A redox reaction is a reaction that involves both oxidation (the loss of electrons) and reduction (the gain of electrons). In order to identify whether a reaction is redox or not, you can write separate half equations that show how electrons are lost or gained. For example, take the equation for the reaction of calcium and oxygen:

2Ca (s) + O2 (g) -> 2CaO (s)

The two half equations for this reaction are:

Ca -> Ca2+ + 2e- (this half equation shows oxidation)
O2 + 4e- -> 2O2- (this half equation shows reduction)

Therefore, you can conclude that this reaction is a redox reaction because it involves both reduction and oxidation.

There is another way of identifying a redox reaction (I personally find this method easier), in which you apply oxidation numbers to the equation to work out what has been oxidised and what has been reduced.

Reaction of Calcium and Oxygen

Basically there are 10 rules that show which elements and their oxidation numbers take priority in a reaction. So, in order of importance, the 10 rules are as follows:
1) Group 1 elements (all have an oxidation number of +1)
2) Group 2 elements (all have an oxidation number of +2)
3) Group 3 elements (all have an oxidation number of +3)
4) Fluorine (with an oxidation number of -1)
5) Hydrogen (with an oxidation number of +1)
6) Oxygen (with an oxidation number of -2)
7) Chlorine (with an oxidation number of -1)
8) Group 7 elements (all have an oxidation number of -1), Group 6 elements (-2) and Group 5 elements (-3)
9) All the other elements, whose oxidation numbers depend on the oxidation numbers of the other elements in the equation
10) When an element is by itself in a reaction and not in a compound, its oxidation number is 0

Now, apply this to the example reaction used earlier between calcium and oxygen: 2Ca (s) + O2 (g) -> 2CaO (s). Ca is by itself in this reaction, so its oxidation number is 0. O2 is by itself, so its oxidation number is also 0. Ca in the product CaO has an oxidation number of +2. O in the product CaO has an oxidation number of -2. From this you can see that calcium has lost 2 electrons (it has gone from 0 to +2) and oxygen has gained 2 electrons (it has gone from 0 to -2). Therefore oxygen has been reduced and calcium has been oxidised, making this reaction a redox reaction.

[Table: first ionisation energy / kJ mol-1 for each of the Group 2 elements - values not reproduced here]

- Also known as the alkaline earth metals, group 2 consists of the elements Beryllium, Magnesium, Calcium, Strontium and Barium.
- They all have reasonably high melting and boiling points, low densities and they all form colourless compounds.
- Together with group 1 (the alkali metals), they form the s block of the periodic table because their highest energy electrons are all in s sub-shells (a spherical orbital capable of holding 2 electrons). This means that the alkaline earth metals have 2 electrons in their outer shells.

Reactivity increases down group 2. This is due to 3 things:
1) The electron shielding increases as you go down the group.
2) The atomic radius also increases.
3) Nuclear charge increases (because of the increasing number of protons); however, this effect is outweighed by the increasing shielding and atomic radius.
Basically, the more electron shielding an atom has, the less attracted its outermost electrons are to the positive nucleus, and thus the electrons are lost more easily. From this we can deduce that the first ionisation energy decreases as we go down the group. The table above shows the ionisation energies of the group 2 elements.
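To make the bookkeeping behind these rules concrete, here is a minimal Python sketch (added for illustration, not part of the original notes). It uses the fact that the oxidation numbers in a compound, weighted by the number of atoms, must add up to the overall charge, and it deliberately ignores the exceptions (peroxides, metal hydrides, and so on) for simplicity. The small table of known values and the example formulas are illustrative assumptions.

    # Oxidation numbers fixed by the priority rules (simplified; ignores exceptions).
    KNOWN = {"K": +1, "Na": +1, "Ca": +2, "Mg": +2, "Al": +3,
             "F": -1, "H": +1, "O": -2, "Cl": -1}

    def unknown_oxidation_number(formula, target, overall_charge=0):
        # formula: dict of element -> atom count, e.g. {"Ca": 1, "O": 1} for CaO.
        # Returns the oxidation number of `target`, assuming every other element
        # in the formula has a known value and the numbers sum to the overall charge.
        known_sum = sum(KNOWN[el] * n for el, n in formula.items() if el != target)
        return (overall_charge - known_sum) / formula[target]

    print(unknown_oxidation_number({"S": 1, "O": 2}, "S"))           # 4.0  (sulphur is +4 in SO2)
    print(unknown_oxidation_number({"K": 1, "Mn": 1, "O": 4}, "Mn")) # 7.0  (manganese is +7 in KMnO4)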
Oxygen, Water and Group 2 Elements

All of the elements in group 2 react vigorously with oxygen, the product of which is an ionic oxide. The general formula for this reaction is MO (where M is the group 2 element). For example, magnesium reacts with oxygen to form magnesium oxide, the equation for which is:

2Mg (s) + O2 (g) -> 2MgO (s)

This is a redox reaction.

All of the group 2 elements form hydroxides when reacted with water. The general formula for these reactions is M(OH)2 (where M is the group 2 element). Hydrogen is given off during these reactions. For example, magnesium reacts with water to form magnesium hydroxide and hydrogen gas, as in the following equation:

Mg (s) + 2H2O (l) -> Mg(OH)2 (aq) + H2 (g)

This is also a redox reaction.
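One way to check that both of these are redox reactions is to write out the half equations, as was done for calcium above (this worked example is added for illustration):

Mg -> Mg2+ + 2e- (oxidation: magnesium goes from 0 to +2)
O2 + 4e- -> 2O2- (reduction: oxygen goes from 0 to -2)

and, for the reaction with water:

Mg -> Mg2+ + 2e- (oxidation)
2H2O + 2e- -> 2OH- + H2 (reduction: hydrogen goes from +1 in H2O to 0 in H2)

In each case the metal loses electrons and another element gains them, which is what makes the reaction a redox reaction.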
Santa Cruz de Tenerife, Spain
Number of students:

Work topic: Nutrition education; healthy eating behaviours; basic information about food vocabulary and reading labels adequately, in the main languages of the EU.

General description of the project: Reorienting education towards lifelong healthy habits; nutrition knowledge, including the benefits of healthy eating and the principles of healthy weight management, taught through vocabulary related to food and health.

Main objectives: To acquire basic vocabulary about food and healthy meals and, at the same time, to learn about human nutrition and the principles of healthy weight management.

Activities that enable students to:
- understand and use food labels
- critically evaluate nutrition information, misinformation and commercial food advertising
- exchange recipes from different countries
- analyze nutritional standards for school lunch
The fundamental pattern of early Christian worship continued to develop through the fourth and fifth centuries. However, “families” of liturgical practice began to emerge, and styles of worship varied from one Christian region to the other. By this time, one can begin to speak of “Eastern” and “Western” characteristics of Christian liturgy. Though the New Testament does not give any detailed information on the structure of the first Christian services, it leaves little room for doubt concerning the basic elements of primitive worship: prayer, praise, confession of sin, confession of faith, Scripture reading and preaching, the Lord’s Supper, and the collection. Early descriptions of Christian worship, such as that in Justin’s Apology, reveal a close similarity to the practice of the synagogue. Even without the synagogue model, however, the fundamental elements would surely have found a place, and distinctive Christian features would have their own origin.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills. This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Managing Cyanobacteria (Blue-Green Algae) at Milavec Reservoir

The Town of Frederick Parks and Open Space Department is monitoring and managing cyanobacteria at Milavec Reservoir through aeration, informational signs for recreational users, and work with upstream stakeholders and partners. Cyanobacteria blooms will happen, but through our efforts we can manage the frequency and severity of annual blooms and inform recreational users of the potential impacts. When in doubt, it's best to keep out!

Water contact can cause illness:
- Keep kids out
- No pets in water
- Do not drink water
- Avoid contact with algae
- If exposed, shower immediately
Fishing permitted: rinse fish with clean water and properly dispose of guts.
Boating permitted: avoid algae.

Cyanobacteria FAQs for Milavec Reservoir

What are cyanobacteria?
Cyanobacteria, also called blue-green algae, are microscopic organisms found naturally in all types of water. These single-celled organisms live in fresh, brackish (combined salt and fresh water), and marine water. These organisms use sunlight to make their own food. In warm, nutrient-rich (high in phosphorus and nitrogen) environments, cyanobacteria can multiply quickly, creating blooms that spread across the water's surface. The blooms might become visible.

How are cyanobacteria blooms formed?
Cyanobacteria blooms form when cyanobacteria, which are normally found in the water, start to multiply very quickly. Blooms can form in warm, slow-moving waters that are high in nutrients from sources such as agricultural and turf fertilizer runoff, municipal wastewater effluent and septic system discharges. Cyanobacteria blooms need nutrients to survive. The blooms can form at any time, but most often form in summer or early fall.

What does a cyanobacteria bloom look like?
You might or might not be able to see cyanobacteria blooms. They sometimes stay below the water's surface, and they sometimes float to the surface. Blooms sometimes look like paint floating on the water's surface. Some cyanobacteria blooms can look like foam, scum, or mats, particularly when the wind blows them toward a shoreline. The blooms can be blue, bright green, brown, or red. As cyanobacteria in a bloom die, the water may smell bad, similar to rotting plants.

Why are some cyanobacteria blooms harmful?
Cyanobacteria blooms that harm people, animals, or the environment are called cyanobacteria harmful algal blooms. Harmful cyanobacteria blooms may affect people, animals, or the environment by:
- Blocking the sunlight that other organisms need to live. Cyanobacteria blooms can steal the oxygen and nutrients other organisms need to live.
- Making toxins, called cyanotoxins, which can make people, their pets, and other animals sick.
- You cannot tell if a bloom has toxins by looking at it.

How can people and animals come in contact with cyanobacteria and cyanotoxins in the environment?
People and animals can come in contact with cyanobacteria and cyanotoxins that are in the environment by:
- Drinking water that comes from a lake or reservoir that has a cyanobacteria bloom.
- Swimming or doing other recreational activities in or on waters that have cyanobacteria blooms.

How do I protect myself, my family, and my pets from cyanobacteria blooms?
To protect yourself, your family, and your pets from cyanobacteria blooms:
- Don't swim, water ski, or boat in areas where the water is discolored or where you see foam, scum, or mats of algae on the water's surface.
- Do not allow children or pets to play in or drink scummy water.
- If you do swim in water that might contain harmful cyanobacteria, rinse off with fresh water as soon as possible.
- Don't let pets swim in or drink from areas where the water is discolored or where you see foam, scum, or mats of cyanobacteria on the water's surface.
- If pets, especially dogs, swim in scummy water, rinse them off immediately. Do not let them lick the cyanobacteria off their fur.

Why do dogs get sick more often than people from cyanobacteria blooms?
Dogs will get in a body of water even if it looks or smells bad, including when it contains cyanobacteria. Dogs are also more likely to drink contaminated water.

How are people or animals that have been exposed to cyanobacteria toxins treated?
If you or your pet comes in contact with cyanobacteria, wash yourself and your pet thoroughly with fresh water.
- If you or your pet swallows water from where there is a harmful algae bloom, call your doctor, a Poison Center, or a veterinarian.
- Call a veterinarian if your animal shows any of the following symptoms of cyanobacteria poisoning: loss of appetite, loss of energy, vomiting, stumbling and falling, foaming at the mouth, diarrhea, convulsions, excessive drooling, tremors and seizures, or any other unexplained sickness after being in contact with water.

How can you help prevent cyanobacteria blooms from forming?
To help reduce cyanobacteria blooms:
- Use only the recommended amounts of fertilizers on your yard and gardens to reduce the amount that runs off into the environment.
- Properly maintain your household septic system.
- Maintain a buffer of natural vegetation around ponds and lakes to filter incoming water.

Is there testing for cyanobacteria toxins?
Yes, but the testing is specialized and can only be done by a few laboratories. Scientists are working to develop toxin test kits for water resource managers and others.

What is the Town of Frederick doing to manage these blooms?
The Town is working towards a long-term solution for this important water quality issue; however, nutrient loads in Milavec Reservoir are extremely high and will require significant investments of resources and time, both in upstream basin management and within the Reservoir itself, to alleviate frequent blooms. No management actions will permanently stop blooms, as cyanobacteria are natural to all waters and, even with the best management efforts, can still bloom under the right conditions. The Town installed a significant aeration system within the Reservoir to increase water movement. This will not stop cyanobacteria blooms, but it does manage their frequency and severity. The Town is in discussions with upstream basin stakeholders to address nutrient loading issues. The Town does not plan to treat the blooms, as the treatments include the use of pesticides and/or inorganic compounds, such as copper sulfate, both of which can have significant unintended impacts on the aquatic ecosystem and fishery and can affect the raw water irrigation systems around Town that the lake supplies. The Town is in the process of hiring a consultant to create a Master Plan for Frederick Recreation Area, which will also address long-term strategies to improve water quality for recreational use.

For information on the Town of Frederick's management of Milavec Reservoir, contact the Parks and Open Space Department: 720-382-5800
The scientific name of a sponge is Porifera, which means pore-bearing.

What are Sponges?
Sponges are the simplest form of multi-cellular animals. They are very diverse and come in a large variety of colours, shapes and structural complexities. They range in height from 1-200 cm and in diameter from 1-150 cm. They have partially differentiated tissues, rather than true tissues. Sponges don't have internal organs. They don't have muscles, a nervous system, or a circulatory system. Their walls are lined with many small pores called ostia that allow water to flow into the sponge. The structure of a sponge is simple. One end is attached to a solid such as a rock, while the other end, called the osculum, is open to the environment. Sponges are able to take in microorganisms such as algae and bacteria for food through their openings. Some sponges are carnivorous and use their spicules to capture small crustaceans.

What are Sponges Made of?
Sponges are made of four simple and largely independent types of cells. The first are the collar cells, which line the canals in the interior of the sponge. Flagella are attached to the ends of these cells and help pump water through the sponge's body. By pumping water, they help bring oxygen and nutrients to the sponge while also removing waste and carbon dioxide. The second type is the porocytes, the cells that make up the pores of the sponge. Epidermal cells form the skin on the outside of the sponge. Finally, the amoebocytes exist between the epidermal and collar cells in an area called the mesohyl. They carry out functions of the sponge and help transport nutrients. They also form spicules, which are the sponge's skeletal fibers. They work together with the collar cells to digest the food for the sponge and produce gametes for sexual reproduction.

What are Some Types of Sponge?
There are four different types of sponges from different classes: Calcarea, Hexactinellida, Demospongiae, and Sclerospongiae. They are split into the classes based on the type of spicules they have. For example, spicules may be made of calcium carbonate or of spongin fiber.

Where Do Sponges Live?
Sponges live in all types of regions all over the world. They are able to thrive in most environments. 99% of all sponges live in marine water, but some sponges made of spongin fiber live in freshwater. Sponges can be found attached to surfaces as deep as 8 km down on the ocean floor. There are higher numbers of sponge individuals and species in the tropics because the water is warmer. They prefer clear water to murky water stirred up by currents; murky water can clog the pores on a sponge so that it cannot get the nutrition and oxygen it needs to survive.

What is their Importance to the Ecosystem?
Sponges are important in nutrient cycles in coral reef systems. Scientists believe they may be important factors in changes in water quality, whether good or bad. Scientists analyze how fast sponges breathe and the amount of nitrogen they release while doing so. Sponges collect bacteria when they filter the water around them. These bacteria are believed to be able to do many things. First, these bacteria may be able to create forms of nitrogen from the nitrogen gas in the water that may be nutritional for the sponge. They may also be able to turn ammonium from the sponge's breathing into nitrogen gas that is then released into the atmosphere.
This process would lower excess nitrogen levels in coral reefs, also preventing harmful ecosystem changes. Scientists believe that the conversion of nitrogen gas into useful nitrogen is also beneficial to the survival of other organisms in the area. They are hoping to have discovered a pathway for the removal of excess nitrogen from coral reefs.

What are Some Adaptations They Have to their Environment?
Sponges are strong animals with dense skeletons that are well adapted to their environments. As they may live almost everywhere, they adapt to the regions and surfaces they grow in. Certain sponge species are adapted to freshwater environments. Their skeleton types allow them to live in either hard or soft sediments. Their pores allow them to filter the water around them for food. Inside the sponge, there are flagella that create currents so their collar cells may trap the food. Sponges may have developed these feeding habits long ago, when food sources were scarce. Sponges have strong structures that are able to handle the high volume of water that flows through them each day. By constricting certain of their openings, sponges are able to control the amount of water that flows through them. Scientists believe that sponges are colourful because the colours act as a protection from the sun's harmful UV rays. Sponges have been around for a very long time. This is because, although the world is constantly changing, sponges are still able to respond to these changes by adapting to their environment. Sponges are also able to release toxic substances into the environment around them to make sure they have a good place to grow in.

How Do Sponges Reproduce?
Sponges may reproduce sexually and asexually. This helps keep them alive in their habitats. Most sponges are both male and female. In sexual reproduction, they may play either role. The 'male' sponge releases sperm into the water, which travels until it enters a 'female' sponge. After fertilization in the sponge, a larva is released into the water. It floats around for a few days and then sticks to a solid surface to begin its growth into an adult sponge. Sponges are also able to reproduce asexually through budding. This is when a small piece of sponge is broken off but is still able to survive and grow into another sponge. Sponges are also able to repair damage to their bodies. These characteristics of sponges are ideal because even small parts of sponges may survive in the water. Genetic diversity is created when different sponges reproduce with one another.
The Power of Small Liquid Droplets

It has long been known that water has the ability to erode some of the hardest materials on earth given enough time, but the mechanics of how this occurs have only recently been studied. At the University of Minnesota Twin Cities, researchers have discovered why small liquid droplets cause the erosion of solid surfaces. Using a new analysis technique called high-speed stress microscopy, which measures the force and pressure exerted on a surface as a drop contacts it, the researchers have discovered that the force of a droplet spreads out as the drop deforms on impact. This spreading initially happens faster than the speed of sound, unleashing a shock wave along the surface of the solid object: "Each droplet behaves like a small bomb, releasing its impact energy explosively and giving it the force necessary to erode surfaces over time". This discovery is expected to help material developers and engineers create better erosion-resistant coatings and materials that weather the elements better than current systems.

University of Minnesota. "New study solves mystery of how soft liquid droplets erode hard surfaces." ScienceDaily, 31 March 2022.
Christianity Ethics // sexual ethics

The plan
• How do you define Christian ethics?
• Where does ethics 'fit' into the HSC Course?
• Some worthwhile activities
• Developing a good response for Section II and III – what's the difference?
• Examination tips

Your turn:
• In your book, write a definition of Christian ethics OR
• write down some words that you associate with ethics in Christianity

How do you define Christian ethics?
• Ethics are a demonstration of a person's beliefs in action. A system of moral principles by which human actions…may be judged as…right or wrong. (Macquarie Dictionary)
• Christian ethical teachings are founded on an understanding of who the human person is and the belief that every person is capable of discovering and embracing goodness and truth.
• Christian ethical actions are focused on maintaining right relationship with God, one's neighbour and oneself.

Christian ethics
• Christian ethical teachings are founded on FAITH and REASON, PHILOSOPHY and NATURAL LAW.
• Protestant Churches would emphasise the authority of the Bible and conscience. Anglican, Orthodox and Catholic ethical authority is also found through the Church leadership - the Pope, Archbishops, Bishops or Patriarchs. Traditional Christian ethical teaching, whether based on natural law or on the Bible, results in moral absolutes.

Christian ethics
• The model of Christ as giving his life in service is a key guide to action.
• What are the key scriptural sources relevant to a study of Christian ethics (Hint: Preliminary Course)?

Scriptural sources
• All Christians would hold that teachings found in the Bible are the key sources for determining ethical behaviour. This is the revealed law.
• Exodus 20:2-17 The Ten Commandments
• Matthew 5:3-10 The Beatitudes
• Luke 10:25-27 Jesus' Commandment of Love
• What are the key scriptural sources relevant to a study of sexual ethics?

Where does ethics fit in the SOR course?
• SOR Outcomes: A student -
• explains aspects of religion and belief systems
• describes and analyses the influence of religion and belief systems on individuals and society
• describes and analyses how aspects of religious traditions are expressed by their adherents
• evaluates the influence of religious traditions on the life of adherents (p10)

Where does ethics fit in the SOR course?
• Preliminary (learn to)
• explain why Jesus is the model for Christian life
• outline the principal ethical teachings in the Ten Commandments
• the Beatitudes and
• Jesus' commandment of love
• describe the importance of ethical teaching in the life of adherents (p21)

Where does ethics fit in the SOR course?
• The purpose of this section is to develop a comprehensive view of religious traditions as living religious systems that link directly with the life of adherents…. In a Religious Tradition Depth Study, the particular focus is on the ways in which a religious tradition, as an integrated belief system, provides a distinctive answer to the enduring questions of human existence. The study of a particular religious tradition enables students to demonstrate an appreciation of the diversity of expression within, and the underlying unity of, the whole religious tradition. (p3)

Where does ethics fit into the SOR course?
• HSC (learn to)
• Describe and explain Christian ethical teachings on bioethics OR environmental ethics OR sexual ethics

Planning your study of Christian ethics
• Definitions
• Practice questions
• Research beyond your textbook

Definitions – visual aid
Definition of ethics
Definition of sexual ethics
Key sources of Christian teaching
Approach of Christianity to the issue
Significance of the ethical issue to adherents

Essay outline
• Ethical issue for Christians
• Key Biblical sources
• Relevance to the tradition and core beliefs
• Examples of the sexual issue and the response of Christians in those examples
• Differences demonstrated by variants - use additional sources
Our entire food system needs some serious TLC. In order to keep up with sustainability, health and biodiversity, it is crucial to rehab some of our current agricultural and ecological practices — and regenerative agriculture might be just the solution. What is Regenerative Agriculture? If you are wondering what the regenerative agriculture definition is you have come to the right place. To put it simply, regenerative agriculture is a way of nurturing, or re-nurturing, soil so that soil organic matter can be rebuilt, and degraded soil biodiversity can be restored. The goal is to create, or recreate, soil that acts more like a sponge for the healthiest nutrients and minerals. Regenerative farming helps with growing nutrient dense foods. Regenerative agriculture practices also pull damaging CO2 from the atmosphere which helps combat climate change and increases clean water run-off. What are the Principles of Regenerative Agriculture? Over the years, our food has become significantly less nutrient-dense, more chemical-heavy and essentially less effective at providing all the vitamins and nutrients humans need on a daily basis. This is because healthy soil is key to healthy food. We know the urgency of the climate crisis. Regenerative agriculture can actually help in the fight against global warming and climate change by removing and absorbing carbon dioxide, also known as soil carbon sequestration. In fact, studies show that “it is clear that regenerative agriculture, as a diverse portfolio of practices that can be adapted to specific regions and crop types, can and should play a major role in tackling climate change, with the potential to remove 100-200 GtCO2 by the end of the century.” In short, regenerative agriculture has the ability, or potential, to fight or even reverse climate change while also addressing food insecurity, by increasing the quality and quantity of food accessible worldwide. With these potential benefits, we need positive changes immediately. There’s no time (or food) to waste. What are the Benefits of Regenerative Agriculture? Regeneration International, a non-profit organization dedicated to regenerative agriculture, believes that the process has the potential to help reverse global warming by drawing out the “tons of excess carbon already released into the atmosphere and sequester it in the soil.” Its far-reaching benefits could potentially include restoring farmers’ independence by “ending corporate control over the global food system”, regenerating ecological health, helping to revitalize local economies, and enhancing human health and well-being. This is in addition to restoring soil health. Regenerative Agriculture Defined Components of regenerative agriculture include avoiding chemical pesticides and advocating for more farm-friendly practices like the rotation of livestock and crops. It also involves no-till farming (growing crops without disturbing the soil with tillage — which results in a minimal disturbance to fields and organisms within them) and the improvement of composting practices. Currently, our world farming systems need great improvement — and fast. MegaFood and Regenerative Agriculture Regenerative agriculture is at the heart of our company. At MegaFood, many of our products contain foods from our most trusted farm partners who are as committed to sustainability and regenerative agriculture as we are. We’re out to change the world, starting with food and soil health using regenerative agriculture. 
In fact, we've partnered with The Carbon Underground and Green America to develop a new, outcomes-based regenerative agriculture global verification standard for food grown in a regenerative manner, in order to help farmers restore the carbon cycle, build up soil health, increase crop resilience and boost nutrient density. We've also advocated for nutritional policy reform to improve food insecurity worldwide, and we are actively lobbying against harmful chemicals being sprayed not only on the foods we consume but in our local parks and playgrounds as well.

What Can You Do To Help?
Regeneration International issues this warning and motivation to act: "According to soil scientists, at current rates of soil destruction (i.e. decarbonization, erosion, desertification, chemical pollution), within 50 years we will not only suffer serious damage to public health due to a qualitatively degraded food supply characterized by diminished nutrition and loss of important trace minerals, but we will literally no longer have enough arable topsoil to feed ourselves."

To help further this mission and the tangible benefits of regenerative agriculture, you can seek out brands and organizations that are dedicated to using sustainable and regenerative agriculture practices. You can also begin or volunteer to help with a community garden, seek opportunities at local farms and educate your friends and family on the importance of regenerative agriculture and why it matters.
3rd - 5th Grade

Teachers build on basic skills students learned in earlier grades to help students apply higher-level thinking to all subjects. Improved reading comprehension opens doors to a deeper understanding of ideas, topics and events. Socially, children begin to internalize values like respect and cooperation, and learn more personal responsibility. Most importantly, students build the skills they need to succeed in middle school.

English Language Arts
During the upper-elementary years, there's a comprehension switch that happens in your child's brain. They move from learning to read to reading to learn. As fluency improves, students learn to apply a variety of comprehension skills to texts, including making inferences and drawing conclusions. They write more complex stories, essays, and poetry, while learning important grammar skills that will contribute to their writing style. Our teachers use tools in line with state standards and ELA best practices to support instruction.

In third through fifth grade, students build on their understanding of addition, subtraction, and place value to extend their thinking to multiplication, division, and fraction concepts. Our curriculum actively engages students in learning experiences that foster critical thinking and a deep understanding of mathematics through the use of visual models and real-world problem solving.

A child's love of science and engineering can really bloom between third and fifth grades as they have an opportunity to dig deeper into life science, physical science, and earth and space science. Children will have the ability to apply their learning to the real-life applications of technology and engineering, while continuing to develop key practices such as communication and collaboration that will support them throughout their education.

A child's awareness and interest in the world around them begins to expand in the years before middle school. They study the values and principles of American democracy, and their responsibilities as citizens. They learn about the structures and functions of government, economics, human systems and the birth of our nation by exploring their state's history from settlement to the modern world and American history from indigenous communities to the Revolutionary War. They also explore the world's countries and continents by examining the past and making connections to our present, learning how we are both similar and different to those in our own communities and around the world.

Students also study art, music, physical education and technology.

We're More than Academics
Strong hearts and minds impact the world in amazing ways. Virtues such as respect, perseverance, compassion, and courage are essential to what kids need to succeed and an essential part of what we teach. We make them part of every school day. This helps students learn the importance of making good decisions and doing the right thing in life. Moral Focus is so integrated into everything we do:
- We created a curriculum specifically around Moral Focus.
- We celebrate a "virtue of the month" every month.
- Our students create a class contract where students agree how to treat their teacher and each other.
- Students are recognized and celebrated for how they live out Moral Focus every day.

I love the emphasis on moral focus each month. Not only is my child receiving an education, but you are also teaching her qualities to be a good person.
- Use good judgment to make decisions
- Treat others the way you want to be treated
- Appreciate the kindness and generosity of others
- Manage your emotions and behavior to stay calm
- Persist through difficulties until you reach your goal
- Despite fear, do what you believe is right
- Motivate others to have confidence in themselves
- Help and care for others during times of need
- Do what is right, honorable, and good
A Deeper Dive Into Moral Focus
Over the years, National Heritage Academies has seen how our moral focus emphasis in the classroom makes our schools safer, more caring places. We trust these important personal skills spill out into our students’ homes, communities, and futures. Keep reading to learn more about the benefits that come from developing a strong moral backbone and why NHA takes them to heart.
The Giver by Lois Lowry is a teacher’s dream novel. The complex dystopian plotline, dynamic characters, and thought-provoking themes provide so many opportunities for teachers to foster text-to-self and text-to-world connections. Critical thinking activities that allow students to empathize with the characters are a must-have in any novel unit. Below are 8 of my favorite activities for The Giver that do just that. This first activity is always a class favorite. It allows students to empathize with Jonas and his friends as they are assigned careers by the Chief Elder during the Ceremony of 12. Welcome students to the classroom with a colorful poster for The Ceremony of Twelve. Once they are all settled, immediately transform into The Chief Elder. Address the class, explaining that although they have spent the last 11 years learning to fit in and standardize their behavior, this ceremony will celebrate their differences. Then, one by one, present each student with their new job and a designated card that states all of the roles and responsibilities. After each student gets their assignment, have the rest of the class say in unison, “Thank you for your childhood.” Give your students a choice of follow-up assignment: they can either fill out an application for a job switch or write a journal entry discussing their feelings on their new role in the community! In The Giver, Jonas has the capacity to ‘see beyond.’ This means that Jonas, unlike the other members of the community, can draw on the senses carried in memory, which allow him to see color. This fun “seeing beyond” class activity allows students to step into Jonas’ shoes to understand his ability to see beyond. Students enter the classroom to a colorful poster welcoming them to Seeing Beyond. Ask them to circulate the room to different areas that have hidden image optical illusions. Some will be able to see the hidden pictures, while others will not. After the activity, students work with partners to discuss how they felt when they were or were not able to see the hidden image. They will also discuss how it felt to successfully or unsuccessfully help someone else see the image and how this relates to the novel. Through his role as The Receiver, Jonas receives transmitted memories of the past from The Giver. This free memory transmission activity allows students to empathize with both Jonas and The Giver as they both receive and transmit memories. This one has always been a real hit with my students! Put a colorful poster on the door welcoming your class to The Giver’s Annex. Then, transform into The Giver and give each group of students descriptions of new memories that Jonas will receive. Some of the memories are painful, like homelessness, while others are more positive, like Neil Armstrong’s arrival on the moon! Students discuss prompting questions that help them understand the value of keeping the world’s memories safe. After all the memories have been transmitted, they will shift into the role of The Giver. In this role, they will transmit one important historical memory of their choosing to Jonas. The elderly in The Giver are seemingly treated with the utmost respect and care in The House of Old, but the reader soon learns that things are not as positive as they appear. The elders of the community are killed (a.k.a. “released” from society).
This activity allows students to examine how the elderly are treated in different cultures/countries in the world and how this compares to how they are treated in Jonas’ community. Students will enter the classroom to a colorful poster welcoming them to The House Of Old. They participate in small group discussions with information cards that provide details about how the elderly are treated in different cultures. When they are done, they fill in the blank card with how the elderly are treated in the novel and share with the rest of the class! In Jonas’ community, everyone must share any dreams they have with their family members. On the surface, dream sharing seems like a good way to keep open communication about inner feelings. In reality, however, it is another way that the government can keep control of the thoughts of their citizens and squash any independent thinking. This activity allows students to interpret their own dreams and consider what deeper meaning their dreams may have. After reading chapter 6, a poster welcoming them to Dream Sharing greets students at the door. Break the class up into groups of 4 and tell each group to imagine they are family members. Each group receives dream prompt cards with common topics for dreams that have symbolic meanings. Each student shares a dream they remember which connects with one of the topics. If they can’t connect with any topic, they can share any dream they remember. After everyone has shared their dreams, give each group the Dream Interpretation Cards that explain the symbolic significance of each dream topic. Students discuss and reflect on how it felt to reveal a dream and consider whether or not this would be a good practice in their everyday life. In Jonas’ community, members are sheltered from feeling any physical or emotional pain. While this theoretically seems like a peaceful way to live, Jonas soon learns that feeling no pain desensitizes people and doesn’t allow them to appreciate positive emotions. From pain, people are also able to learn from mistakes and avoid making those same mistakes again in the future. This activity brings this idea to the forefront by showing students a real-life example of someone who feels no pain. Students work in groups to read information about people who feel no physical pain. You could have them research Gabby Gingras or Ashlyn Blocker, for example. As a group, students discuss whether or not they would like to live a life without physical pain and what challenges they might face if they chose yes. Then, they work with their group to brainstorm a list of advantages and disadvantages to living a life free of emotional pain. Jonas and his family participate in a nightly ritual called The Telling of Feelings where each person describes an emotion that they experienced during the day and discusses it with the others. Help students understand what this ritual would be like by forming classroom families and simulating the practice. After reading chapter 2, put students into groups. It is preferable that groups consist of two boys and two girls, but it isn’t necessary. Tell them that the group is their new family and they are to assign roles (parents and siblings). Each student gets a “Feelings Card” that they fill out in preparation for the ritual. Students must choose a precise word that describes a feeling they had that day. Each member of the group shares their feelings while the other members listen carefully. 
After the ritual, have students discuss whether or not they could see themselves doing this with their family, if it would make a family closer, and why they think this is a required ritual in Jonas’ community. In The Giver, couples can only have 2 children, as mandated by the government. While this may seem completely removed from the modern day, this activity will teach students about China’s one-child policy and allow them to consider how it relates to the novel. This activity works best with a bit of pre-reading discussion. Students discuss how they would react if the government limited the number of children they could have. Ask them if they think this could or would ever happen. After some discussion, have them read an article or watch a video on China’s one-child policy. I have students record their thoughts as they read using a graphic organizer. The one I use has them consider their thoughts, what they learned, and something that surprised them. Ask students to make a connection between this policy and the events of the novel. Grab a ready-to-use unit plan with everything you need to teach The Giver (340 pages/slides of eye-catching PowerPoints, printable assignments, questions, vocabulary, and interactive class activities) by clicking here. I hope you found this helpful! If you are interested in more tips and resources for developing students’ reading skills in ELA, click here.
The principle behind heat pumps is based on the first law of thermodynamics, whereby “Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter.” 1 This means that heat can be created from a colder source and vice-versa when energy external to the original system is applied. Ultimately heat can be extracted from anything that is above absolute zero (−273.15°C or −459.67°F). In this capacity heat pumps can operate as both heating or cooling devices by either transferring a temperature from one area to another, expanding/contracting a substance, or both. There are three basic types of heat pumps. These are: Air source Heat Pumps An air source heat pump uses the outside air to heat or cool a building. When used to heat a building this is achieved by transferring heat inside from the outside air, and when used to cool a building this is achieved by transferring heat from inside to the outside air. To achieve heat transfer in either direction, air source heat pumps use a system that includes a heat exchanger, a compressor and a means to transfer heat from one area to the other, e.g., pipes filled with a refrigerant. The heating process starts with a cold refrigerant that is moved outside where it becomes heated by a combination of outside air being blown by a fan onto refrigerant coils, and a compressor that further increases the temperature through compressing the refrigerant. The heated refrigerant is then moved indoors where it passes through another set of refrigerant coils (heating coils) where another fan extracts the heat from the coils by blowing air on it. The heated air can then be distributed about the building through air ducts. Lastly the refrigerant is passed through an expansion valve that cools it down to begin the cycle all over again. The cooling process is virtually the reverse of the heating process, whereby a reversing valve near the compressor changes the direction of the refrigerant flow. The efficiency of air source heat pumps is generally higher than traditional boilers and electric heating, which means that over the long term they will cover their investment. Air source heat pumps are driven by electricity, and systems exist that are powered by solar panels, making them both clean and energy efficient. Air source heat pump Absorption Heat Pumps Absorption heat pumps work similar to air source heat pumps but instead of using electricity to compress a refrigerant, they use heated water generated from solar boilers, geothermal resources or natural gas in combination with an absorption pump and a pressure pump. The absorption pump absorbs ammonia or lithium bromide into water. This mix is then pressurized by the pressure pump. The ammonia or lithium bromide is then boiled out of the water by the heat from the heated water creating heat that can be used inside. However, unlike air source heat pumps, absorption heat pumps are not reversible. Absorption heat pump Ground Source Heat Pumps Ground source heat pumps use the constant, well insulated temperature that exists just below the ground or in a body of water, e.g. a pond, to transfer heating or cooling to a building. This is accomplished by transferring heat or cold from below the ground via underground piping that contains a refrigerant. 
There are several variations to this, including:
- Direct exchange
- Closed loop
- Open loop
- Standing column well
The direct exchange system is the simplest, most efficient and also least expensive. It involves a heat pump that circulates a refrigerant through underground copper pipes, where heat is transferred from the ground to the refrigerant through the copper piping. Although this system is limited by the thermal conductivity of the ground, its lack of additional mechanisms, e.g., a water pump and heat exchanger, makes its overall energy efficiency very high. The closed loop system involves two sets of piping. One contains water and anti-freeze and passes below the ground to absorb heat, which it transfers through a heat exchanger to the second loop; that loop contains refrigerant and is connected to the heat pump that distributes the heat throughout the building. This system also requires a water pump to move the water and anti-freeze below the ground. The name “closed loop” comes from the fact that the liquids in both piping systems remain contained, without being refreshed, i.e., they are continuously reused. Closed loop variations include distributing the underground pipes either vertically, horizontally or under water. Determining which method is best depends on factors such as cost, availability of land, underground geology and proximity to water. Horizontal is cheaper than vertical but requires more land, and wet environments are best for transferring heat. An open loop heat pump operates like a closed loop system in that it uses two piping loops with a heat exchanger. The difference is in the underground loop, which, instead of reusing the same liquid, draws water from an underground source or pond. In this case water is continuously renewed throughout the loop. An open loop system is only practical where there is easy access to water. Issues with this type of system include pipe contamination from minerals in the water, and also the possibility that such a system may drain or contaminate natural aquifers or wells. The economics of heat pumps are not straightforward, for a number of reasons. These include:
- Variations in competitive pricing for conventional energy
- Poor design and installation of the system
- Climatic factors
However, if done properly a heat pump system will be one of the most efficient and effective heating systems available, with minimal yearly maintenance costs and a lifespan of between 25 and 200 years. Heat pump systems typically pay for themselves within 1 to 10 years (depending on the type of system), making them an extremely sound investment; a rough efficiency and payback sketch follows below. Because of the complexity of selecting and installing a heat pump system, it is recommended that the entire process be done in conjunction with experienced experts. Various official organizations exist to assist with this, including:
- The International Ground Source Heat Pump Association (IGSHPA)
- Geothermal Exchange Organization (GEO)
- Canadian GeoExchange Coalition
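To make the efficiency and payback claims above concrete, here is a small illustrative sketch in Python. The Carnot formula gives only a theoretical upper bound on the coefficient of performance (COP), and every numeric input below (temperatures, costs, savings) is an assumption for illustration, not data from this article.

# Illustrative only -- all figures below are assumptions, not measured values.

def carnot_heating_cop(indoor_c: float, outdoor_c: float) -> float:
    """Theoretical upper bound on heating COP for the given temperatures (Celsius)."""
    t_hot = indoor_c + 273.15          # convert to kelvin
    t_cold = outdoor_c + 273.15
    return t_hot / (t_hot - t_cold)

def simple_payback_years(extra_install_cost: float, annual_savings: float) -> float:
    """Years to recover the extra up-front cost (ignores interest and maintenance)."""
    return extra_install_cost / annual_savings

if __name__ == "__main__":
    ideal_cop = carnot_heating_cop(indoor_c=21.0, outdoor_c=0.0)
    realistic_cop = 0.3 * ideal_cop    # crude assumption for a real air source unit
    print(f"Ideal heating COP:     {ideal_cop:.1f}")      # ~14.0 at these temperatures
    print(f"Assumed realistic COP: {realistic_cop:.1f}")  # several units of heat per unit of electricity
    # Payback with assumed figures: $12,000 extra install cost, $1,500 saved per year.
    print(f"Simple payback: {simple_payback_years(12_000, 1_500):.1f} years")  # 8.0, inside the 1-10 year range above

A COP above 1 is the whole advantage over electric resistance heating, which by definition delivers exactly one unit of heat per unit of electricity.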
How Many Earths Would it Take to Sustain Humanity? Let’s talk about “Earth Overshoot Day”. It is a very important day but not one to celebrate, unless it happens on December 31 (or not at all). It is the day on which we have used up all of the resources that the Earth can replenish in one year. So every day after Earth Overshoot Day we are borrowing resources against our futures, accumulating ecological debt that we cannot repay. The first Earth Overshoot Day was December 29, 1970. Prior to that we were living within our ecological means. That’s not to say we were not doing any harm to the planet, only that we were using nature’s resources at a rate slow enough that the Earth could replenish them. Since 1970, Earth Overshoot Day has moved steadily earlier, reaching July 29 in 2018 and 2019. And then we found ourselves in a global pandemic and we saw the best Earth Overshoot Day we have seen since 2005. In 2020, Earth Overshoot Day fell on August 22. That’s 3.5 weeks later than in the two previous years. One of the positives to come out of a difficult and unpredictable year has been that we have had to change our habits and adapt. For many of us it has served as a reminder of just how adaptable we are as a species – and the Earth has breathed a little easier as a result. Air quality has been higher, water pollution has been lower, and sensitive ecosystems have been restored (1). Now that we have had this accidental demonstration of the positive environmental impact we can have through collective action, we propose that we keep the momentum going by examining our personal Overshoot Days a little more closely – and then making lifestyle changes to improve them! Our personal Overshoot Day is the day on which we would have used up all of the resources that the Earth can replenish in one year if everyone on the planet lived like us. We can also calculate Earth Overshoot Day for countries – Canada’s is currently March 14, meaning that if everyone in the world lived like an average Canadian we would use up the Earth’s natural resources for the year in less than 3 months! Global Footprint Network has prepared an online quiz to help estimate your personal Overshoot Day based on your lifestyle and habits. Head on over to take the quiz, or read on to learn more about the justification behind the questions. Take the first step with the Global Footprint Network’s calculator Animal-based products: The food we eat is a major contributor to our carbon footprints. Food production from farm (or factory) to table takes a lot of water, energy, and resources and is a big emitter of greenhouse gases. There are ecological impacts from fertilizers and pesticides, irrigation, clearcutting of forests to make space for livestock grazing or agriculture, transportation of food, transportation of fertilizers and pesticides, production and disposal of packaging, and disposal of food scraps and waste. On average, consumption of animal products has a higher carbon footprint than consumption of plant-based products. There are two main reasons for this: 1) When we consume animal products, our ecological footprint includes all of the water, energy, resources, and land that went into producing THAT animal’s food. The food and water consumed by livestock is not all converted directly into the animal products that we eat. The animals use much of the energy from their food for their own metabolism, to allow them to do things like walking, breathing, digesting, and producing body heat. 
It is therefore much more energy- and resource-efficient to eat plant-based foods directly rather than growing food to feed to livestock, or clearcutting forests to allow them to graze. 2) Ruminant animals such as cattle (cows) and sheep produce high levels of methane through their burps and flatulence due to their unique digestion. Methane is shorter lived than carbon dioxide but about 30 times more potent as a greenhouse gas! Beef and lamb are therefore two of the highest emitters of GHGs out of all of the foods we eat. Reducing or eliminating consumption of animal products can have a big impact on our carbon footprints. For the biggest impact, reduce consumption of beef and lamb. Food packaging, processing, and transportation: Part of our food-related carbon footprint comes from the packaging, processing, and transportation of the food. Packaging takes energy and resources (often fossil fuels to create single-use plastics) to make, and takes more energy and resources to recycle or goes straight to landfill. Processing of foods takes energy, plus the resources to build and maintain factories. Transportation of food uses fossil fuels which emit greenhouse gases and harm air quality. We can reduce our carbon footprints by choosing unprocessed, unpackaged, locally grown foods more often. When unpackaged foods are not available, we can reduce packaging by buying the biggest sized package that we will use within the product’s shelf life. Buying the big package to reduce packaging but then throwing most of it away defeats the purpose. Living in Canada we do not have access to a large variety of fresh, local produce year-round. In the winter months, we can try consuming some of our fruits and veggies as preserves made from local produce while it’s in season, whether we purchase from local farms or artisans or try making them ourselves. Want to reduce your carbon footprint without giving up all that delicious fresh produce? Start by switching to locally-grown for highly perishable foods! Choosing foods that have long shelf lives and came from afar means that they likely traveled by ship, rail, or road, as opposed to foods with short shelf lives which must travel by plane to try to beat the clock and get to us before they rot. Air transport has a much higher carbon footprint than sea transport, emitting approximately 100x as much GHG (2). Some of the foods most likely to come to us by air are asparagus, green beans, and berries. Housing: Where we live is another big contributor to our carbon footprints. If we live in a home with electricity and running water, we are responsible for the footprint associated with our share of the infrastructure required to support these amenities. If we live in a cool climate, it is more efficient to heat homes that share walls or ceilings, such as row houses or apartment buildings. Building materials such as wood, brick, and steel take more energy and resources to produce than straw or bamboo. If multiple people live in our home, the footprint associated with the home’s heating and infrastructure is divided among them. A home that is well-insulated and uses passive methods of heating and/or cooling (such as opening and closing blinds or windows at strategic times) is heated/cooled more efficiently. If our homes are powered by renewable energy sources then our carbon footprints are lowered. 
In Canada and other countries with colder climates for at least part of the year, heating our homes is one of the biggest contributors to our GHG emissions, so the method by which we heat our homes can make a big difference to our footprint. (In case you are wondering when you get to these questions, Ontario currently gets 33.4% of its electricity from renewable sources). Waste: When asked about waste, be sure to click the option to add more details to improve accuracy. This question takes into account way more than just the garbage you take down to the curb! Our garbage footprints include the obvious items such as food packaging, but they also include much more. Almost every item that we buy is destined for the landfill someday. We may sell or donate our old stuff, or tuck it away in a basement or attic in case we want it again someday, but that only delays its arrival at the dump. So, our garbage footprints include our clothing, furniture, sporting goods, electronics, toys, and more. Even recyclable items often 1) end up in a landfill due to contamination or lack of demand for the recycled product or 2) get “downcycled” into an item which will then end up in a landfill. Electronics in particular have a high footprint resulting from the mining of the rare metals that allow our devices to function. One of the best and easiest ways to reduce the impact of our waste on our carbon footprints is to consume less: - Make your clothes last longer by buying quality if your budget allows, avoiding impulse purchases, and resisting the urge to keep up with every new trend. Avoiding “fast fashion” stores is a great place to start. If you’ve never heard the term “fast fashion”, think cheap, disposable clothing. For more on fast fashion check out the documentary “The True Cost”). - When you start thinking of updating your furniture or decor, try making your existing furnishings last another month (or 6, or 12…) - Don’t buy into “perceived obsolescence”, a marketing ploy aimed at making us feel like we need to have the latest technology as soon as it’s available. Instead, try using your devices (such as phones and tablets) for as long as they work well. - Say no to freebies. Do you really need another branded, oversized T-shirt? Another pen? A business card holder for your cellphone? When we can’t (or don’t want to) consume less, we can still reduce the carbon footprint associated with our stuff by buying secondhand whenever possible. Thrift stores and consignment stores are a great option, and can be a budget-friendly way to find gently-used, quality items. There are also many online buy-sell groups that may be worth checking out. A common misconception is that buying secondhand (or selling/donating your stuff) saves that item from the landfill. It doesn’t – that item is still headed to the dump. What it does is prevent the manufacture of another item, saving that item from the landfill by preventing its existence. This means that buying secondhand only has a positive impact on your carbon footprint if you are buying something secondhand that you would have otherwise purchased brand new. Transportation: Transportation is another major contributor to our carbon footprints. Travelling by car emits GHGs. Travelling by public transportation does as well, but on a well-used system it works out to much less per person. Walking or biking are great ways to lower your carbon footprint if your ability and situation allow! Bonus: they can also contribute to a healthy lifestyle. 
Air travel has a huge carbon footprint, especially at night and in the winter (yes really – go ahead, Google it!) Reducing the amount of time we spend flying has a big, positive impact on our carbon footprints. Sometimes air travel is difficult to avoid, such as to visit family living far away or when it’s required for work. And sometimes we don’t want to avoid it because we want to see more of the world or get away for a while. If any of these apply to you, you can still reduce your carbon footprint by: - Taking flights during the day and avoiding winter air travel if possible - Going longer between trips if possible - Speaking to your professional contacts about meeting virtually rather than in person - Choosing the closer destination when debating between potential travel destinations Here’s that link again to take the test and find out your personal Overshoot Day. We would love to hear from you about your results, and whether you decide to make any changes to your habits to reduce your carbon footprint. Did you try any of our suggestions? Tell us about it! Did you try something else? Tell us about that too! When we all start making these kinds of changes it will help us Move the Date. Let’s do our part to get Earth Overshoot Day to December 31. And then have a big, carbon-neutral celebration. - Rumea, T. and Didar-Ul Islamb, S.M. 2020. Environmental effects of COVID-19 pandemic and potential strategies of sustainability. Heliyon, 6(9): e04965. - Poore, J. and Nemecek, T. 2018. Reducing food’s environmental impacts through producers and consumers. Science, 360(6392): pp. 987-992.
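The Overshoot Day figures quoted earlier (July 29, August 22, and Canada's March 14) all come down to one simple ratio: the biocapacity available per person divided by the ecological footprint of the lifestyle in question, scaled to a year. The sketch below assumes that method and an approximate global biocapacity of 1.6 global hectares per person; it is an illustration only, not Global Footprint Network's actual calculator or data.

# Estimate an Overshoot Day from an ecological footprint (illustrative figures).
import datetime

WORLD_BIOCAPACITY_GHA_PER_PERSON = 1.6   # assumed global average, in global hectares

def overshoot_day(footprint_gha_per_person: float, year: int = 2021) -> datetime.date:
    """Day of the year on which this lifestyle would exhaust a year's worth of biocapacity."""
    fraction_of_year = WORLD_BIOCAPACITY_GHA_PER_PERSON / footprint_gha_per_person
    day_number = max(1, min(365, round(fraction_of_year * 365)))
    return datetime.date(year, 1, 1) + datetime.timedelta(days=day_number - 1)

if __name__ == "__main__":
    # An assumed footprint of ~8 global hectares per person lands in mid-March,
    # consistent with the Canadian example in the text.
    print(overshoot_day(footprint_gha_per_person=8.0))   # 2021-03-14
    print(overshoot_day(footprint_gha_per_person=1.6))   # 2021-12-31 -- living within the planet's means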
How do you make a line graph in science? Drawing Scientific Graphs
- Give your graph a descriptive title.
- Ensure you have put your graph the right way around.
- Determine the variable range.
- Determine the scale factor of the graph.
- Label the horizontal and vertical axes clearly, with units.
- Identify any outliers (anomalous points) and do not include them in your line of best fit.
- Draw a line of best fit.
What are the main types of line graph? There are 3 main types of line graphs in statistics, namely the simple line graph, the multiple line graph, and the compound line graph. Each of these graph types has different uses depending on the kind of data that is being evaluated.
What is an example of a line graph? A line graph, also known as a line chart, is a type of chart used to visualize the value of something over time. For example, a finance department may plot the change in the amount of cash the company has on hand over time. The line graph consists of a horizontal x-axis and a vertical y-axis.
How do you draw a graph in biology? How to make a graph
- Identify your independent and dependent variables.
- Choose the correct type of graph by determining whether each variable is continuous or not.
- Determine the values that are going to go on the X and Y axes.
- Label the X and Y axes, including units.
- Graph your data.
What is a straight line on a graph called? The formal term to describe a straight line graph is linear, whether or not it goes through the origin, and the relationship between the two variables is called a linear relationship. Similarly, the relationship shown by a curved graph is called non-linear.
How do I make a graph? Create a chart
- Select the data for which you want to create a chart.
- Click INSERT > Recommended Charts.
- On the Recommended Charts tab, scroll through the list of charts that Excel recommends for your data, and click any chart to see how your data will look.
- When you find the chart you like, click it > OK.
How do you label a graph? The proper form for a graph title is “y-axis variable vs. x-axis variable.” For example, if you were comparing the amount of fertilizer to how much a plant grew, the amount of fertilizer would be the independent, or x-axis, variable and the growth would be the dependent, or y-axis, variable.
Do graphs start at 0 in GCSE? While it’s a good idea to follow best practices when displaying data in graphs, “show the zero” is a rule that clearly can be broken. But showing or not showing the zero alone is not sufficient to declare a graph objective or, conversely, “deceptive.”
Is a line graph dot to dot? They are essentially the same thing! Line plots and dot plots show how data values are distributed along a number line. For some reason, the Common Core Math Standards call them line plots in the standards for grades 2 through 5, and dot plots in grade 6 onward.
What do line plots mean? A line graph—also known as a line plot or a line chart—is a graph that uses lines to connect individual data points. A line graph displays quantitative values over a specified time interval.
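Putting several of these answers together, here is the same recipe scripted in Python with the matplotlib library (an assumption on my part; the steps above use Excel). The plant-growth numbers are invented purely for illustration, but the title follows the “y vs. x” form and both axes carry units, as described above.

# Minimal line graph following the conventions described above.
import matplotlib.pyplot as plt

fertilizer_g = [0, 5, 10, 15, 20]          # independent variable -> x-axis
growth_cm = [2.1, 3.4, 4.8, 5.0, 5.1]      # dependent variable  -> y-axis (invented data)

plt.plot(fertilizer_g, growth_cm, marker="o")      # line connecting the data points
plt.title("Plant growth vs. fertilizer amount")    # "y vs. x" title form
plt.xlabel("Fertilizer (g)")                       # x-axis label with units
plt.ylabel("Growth (cm)")                          # y-axis label with units
plt.grid(True)
plt.savefig("growth_vs_fertilizer.png")            # or plt.show() to open a window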
List comprehensions are a powerful tool for building lists. A list comprehension is closely related to a generator expression: a generator expression yields its items one at a time as you iterate over it, while a list comprehension builds and returns a whole list in one go. The first place most of us meet list comprehensions is in an introduction to programming course. What happens here? We write a comprehension that says “for each number in the range, get the squared value of that number”. As we know, if we wanted to do this without a list comprehension, we could use an ordinary loop. A general statement for doing this would be:
for x in range(0, n + 1):
    squares.append(x * x)
Let’s take a closer look at how Python handles these kinds of tasks. For simple cases, a list comprehension can often be replaced by a single function call, or even just a simpler loop. Let’s say we want to find out what the sum of the numbers from 0–9 is; sum(range(10)) is the equivalent code. But let’s say we want to know the sum of the positive numbers only. In this case, we need to filter the values before adding them together. We can do that by attaching a condition to the comprehension (or to a generator expression passed to sum()) so that only the positive items are kept. Care is also needed when a comprehension refers back to earlier items by index: asking for an item before the first one raises an “index out of range” error, and the fix is to start the iteration at a point where every index used actually exists. Generator expressions are a way to produce a stream of values without building a list first; you use them when you want to iterate over something and handle each item in turn. Dictionaries are data structures that map keys to values; in Python they are associative arrays implemented with hash tables. Tuples are immutable sequences of objects; they are created by separating items with commas, usually inside parentheses. Sets are unordered collections of unique elements; they are created by placing elements inside curly braces or passing them to set(). Strings are sequences of characters; they can be joined together with the + operator. Functions are self-contained units of code that perform some task; they are created with a def statement, with the parameters listed inside parentheses.
1. What do we call these? An ordered collection of values is called a sequence, and the most common mutable sequence in Python is the list. There are many ways to create lists in Python. You can start with an empty list via [] (the square brackets) or just type in some comma-separated values between the brackets. We’ll use both methods throughout our code examples.
2. How to create a list. Items such as those listed under the header “shopping cart” can be created with a list comprehension, which builds a list from an expression. Items can also be added to the front of an existing list (with insert()) or appended to the back (with append()), individual items can be read back by indexing—for example, taking the second item in the list—and string items can be concatenated together.
Python sequences help us find patterns in data sets. We can use them to look for trends and patterns in the numbers and data we collect. To start off, let’s look at each number individually, and then at how to identify whether the values follow any pattern.
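Before turning to the pattern check, here is a compact, runnable summary of the comprehension and filtering ideas above; the sample numbers are arbitrary.

# Squares with a list comprehension, and a filtered sum with a generator expression.
numbers = [3, -1, 4, -1, 5, -9, 2, 6]

squares = [n * n for n in numbers]                 # the comprehension builds the whole list at once
positive_sum = sum(n for n in numbers if n > 0)    # the condition filters before summing

# The plain-loop equivalent of the comprehension above:
squares_loop = []
for n in numbers:
    squares_loop.append(n * n)

assert squares == squares_loop
print(squares)        # [9, 1, 16, 1, 25, 81, 4, 36]
print(positive_sum)   # 20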
if any(abs(x - y) <= 10 for x, y in zip(data, data[1:])):
    print("There is some sort of pattern")
As you can see, we first list our set of data inside square brackets, and then loop over neighbouring pairs of values in the sequence. Once we’ve done this, we can add a conditional check to see whether anything matches. In this case, we’re checking whether any value lies within 10 units of the value next to it. If any pair does, we print out that there is a pattern; otherwise, we would print out “no pattern.” You’ll notice the use of the format method, which does exactly what its name implies: you call it on a string containing a placeholder, and pass in the value that should appear where the placeholder sits. So, code that formats each value in turn outputs “54”, then “73”, then “87”, and so forth. A sequence contains elements, each of which is an object of some type. You access individual elements using square brackets, and you call a sequence’s methods using dot notation.
for num in nums:
    print(num)
This prints 1 2 3 out when nums is [1, 2, 3]. Looping over a list does not copy it; the loop variable simply refers to each existing element in turn. Immutable elements such as numbers and strings cannot be changed in place; if you want changed values, you build new ones (for example, by creating a new list).
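Here is the pattern check as a complete, self-contained sketch. The 10-unit threshold comes from the text; the data list itself is an assumption that extends the 54, 73, 87 values mentioned above.

# Flag neighbouring values that lie within 10 units of each other.
data = [54, 73, 87, 91, 40]   # illustrative data only

if any(abs(x - y) <= 10 for x, y in zip(data, data[1:])):   # compare each value with its neighbour
    print("There is some sort of pattern")
else:
    print("No pattern")

# Printing each value with format(), as described above.
for value in data:
    print("{}".format(value))   # 54, then 73, then 87, and so on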
Personality disorders: Types, causes, and treatments Reviewed by Theresa Fry Written bytherapist.com team Last updated: 10/13/2022 Your personality consists of the individual way you think, feel, behave, and interact with others. It’s informed by your genetics, environment, culture, family, and past experiences. A healthy personality allows you to be yourself while still responding to stress and change in helpful ways. A personality disorder is a form of mental illness characterized by a pervasive and enduring pattern of thinking, feeling, and behaving that leads to significant distress or impairment in a person’s life. These traits are exacerbated by stress and change. Not every personality disorder is the same. Psychologists have identified three main types of personality disorders, categorized as clusters. The disorders listed in each cluster share certain characteristics but require separate diagnoses. Cluster A personality disorders are characterized by eccentricity and suspicion. People with cluster A personality disorders may have odd or paranoid thoughts, cold or inappropriate feelings, or hostile or distrustful behaviors. Cluster A personality disorders include: - Paranoid personality disorder: Unfounded or unjustified suspicion of others that interferes with a person’s ability to function and maintain relationships - Schizoid personality disorder: Emotional and social detachment and a preference for isolation - Schizotypal personality disorder: Peculiar ideas and behaviors that make it difficult to relate to others; often characterized by “magical thinking,” such as the belief in telepathy and clairvoyance Cluster B personality disorders are characterized by erratic, unpredictable, or overly emotional ways of thinking and behaving. People with cluster B personality disorders may disregard the emotional needs of others in an attempt to satisfy their own. This can lead to manipulation, lack of empathy, and irresponsible behavior. Cluster B personality disorders include: - Antisocial personality disorder (ASPD): Lack of remorse and disregard for other people’s rights and feelings; may also include a diagnosis of psychopathy or sociopathy - Borderline personality disorder (BPD): Extreme moods, unstable relationships, and risky behaviors that may endanger yourself or others - Histrionic personality disorder: Excessive or exaggerated emotional expression for the purposes of receiving attention - Narcissistic personality disorder: A debilitating level of self-importance, arrogance, and selfishness, along with a lack of empathy for others Cluster C personality disorders are characterized by high levels of anxiety and fearfulness. The specific fears that characterize each cluster C disorder differ widely and may include the fear of rejection, the fear of being alone, and the fear that things won’t be done correctly. 
Cluster C personality disorders include: - Avoidant personality disorder: Excessive shyness, sensitivity to criticism, and low self-esteem - Dependent personality disorder: Lack of self-confidence and extreme dependence on others to the point of excusing or tolerating abuse - Obsessive-compulsive personality disorder (OCPD): Excessive preoccupation with perfectionism, order, rules, and a need for control; different from obsessive-compulsive disorder (OCD), which is characterized by compulsive rituals and obsessions and is not a personality disorder Beyond the three main clusters of personality disorders, there are two other diagnoses you may receive: - Other specified personality disorder: When a person has symptoms of one or more personality disorders but not enough to warrant a specific diagnosis; is followed by a specifier (e.g., “with mixed personality features”) that describes the rationale for this diagnosis - Unspecified personality disorder: Similar to other specified personality disorder, this diagnosis is made when a person has symptoms of one or more personality disorders but not enough to warrant a specific diagnosis, but in this case, the clinician does not know (or chooses not to specify) the reason for a more specific diagnosis Potentially. Although there is no specific, single cause for personality disorders, genetics are thought to play a major part. According to the Merck Manuals, personality disorders have an estimated 50% heritability level, which is similar to other mental health disorders. Although your genetic history may affect your likelihood for developing certain personality disorders, it isn’t the only factor at play. Other factors may increase your risk for personality disorders, such as: - Biochemistry: There’s evidence to suggest that brain structure and the balance of chemicals in the brain may increase a person’s risk for personality disorders. - Abusive or traumatic childhood: Your environment has a significant effect on your mental health, especially in childhood. If you grew up in an abusive, traumatic, neglectful, or unpredictable household, you may be at a greater risk for developing a personality disorder. - Childhood mental illnesses: Being diagnosed with conduct disorder or oppositional defiant disorder (ODD) as a child increases your risk for being diagnosed with certain personality disorders later in life, particularly antisocial personality disorder. Is a Personality Disorder a Mental Illness? Yes, personality disorders are a type of mental illness. “Mental illness” is an umbrella term covering a variety of mental health conditions, such as mood disorders, developmental disorders, trauma-related disorders, anxiety disorders, and psychotic disorders—to name a few. It’s important not to self-diagnose your distress. Learning about personality disorders and other mental health conditions can only take you so far. In order to be certain what is causing your symptoms, you will need to receive a diagnosis from a mental health professional. Click here to find a therapist near you. Personality disorders cannot be cured, but they can be treated. People with personality disorders often don’t recognize that their thoughts, feelings, and behaviors are causing distress or impairment. However, they may seek treatment if their condition significantly impairs their ability to function. For example, people with personality disorders may seek treatment if their behavior jeopardizes their job or their relationships. 
The good news is that there are many effective options for treating various personality disorders, such as: The efficacy of a certain therapy will likely depend on the specific type of personality disorder you’ve been diagnosed with. Click here to find a therapist near you and get a diagnosis so you can get started with treatment. Medicine may be an additional component added to your treatment plan alongside therapy. There is no medication that treats personality disorders specifically; however, certain medications may alleviate some symptoms. Common medications prescribed for people with personality disorders include: - Antidepressants: Alleviate symptoms of depression, low mood, irritability, and anger - Mood stabilizers: Stabilize severe mood swings and reduce impulsive anger and aggression - Antipsychotics: Reduce impulsive aggression and address difficulties associated with losing touch with reality - Anti-anxiety medications: Alleviate symptoms of anxiety and agitation Personality disorders are often difficult to treat because they affect an intrinsic part of who we are: our personalities. It is difficult to change what is fundamentally a core feature of our being. In addition, personality disorders are ego-syntonic, meaning that the person with the disorder is often unaware that they have a problem. Instead, they may view others as the problem, which can limit their motivation for treatment. However, just because personality disorders are difficult to treat, that doesn’t mean that treatment and improvement is impossible. The therapies and medications listed above can be effective for those who seek help. Click here to find a personality disorder therapist near you so you can get started on getting better. Multiple personality disorder, known today as dissociative identity disorder (DID), is real—but not in the way you may think. The Hollywood depiction of DID as multiple people inhabiting one body is false. Instead, the personalities existing within a person with DID are in fact different facets of the same identity. In addition, DID is considered a dissociative disorder, not a personality disorder. Autism spectrum disorder (ASD) is a developmental disorder characterized by social and communication challenges as well as repetitive behaviors. It is not a personality disorder. Depression is a mood disorder, not a personality disorder. However, people with personality disorders may experience symptoms of depression from time to time. About the author The editorial team at therapist.com works with the world’s leading clinical experts to bring you accessible, insightful information about mental health topics and trends.
As the Sun reaches its peak solar activity, it is scary to imagine the destruction a powerful solar storm can do to the Earth. Notably, it can even destroy communication systems by shifting satellites out of their orbit. The first half of 2022 was packed with solar storms that struck the Earth. Some of them even reached the intensity of a G3 class solar storm, which can disrupt GPS systems and cause radio blackouts. But with the Sun moving towards its solar maximum, the peak of its solar cycle, scientists are concerned whether a much stronger storm is in the making. Historically speaking, the Earth has periodically been hit by G5 class solar storms, the strongest known to us. However, it has been some time since such powerful storms came our way. The last recorded G5 solar storm was in 1859 in a shocking incident which is known as the Carrington Event. Now, more than 160 years later, the Earth is overdue for another big solar catastrophe. The Carrington Event is a historic landmark in solar studies because the incoming solar storm did an unprecedented level of damage which was not expected earlier. Telegraph systems, which used to be the primary method for long distance communication, entirely failed, with various parts of the world reporting sparks and damage to the instruments. Power grids also failed resulting in hours and days without electricity. But if a similar storm were to take place today, the damage could be exponentially higher. How satellites increase the risk from a powerful solar storm Compared to 1859, we have advanced by leaps and bounds in technology and we rely heavily on wireless satellite communication in the form of the internet, mobile networks, navigation systems, radar technology and so on. And satellites are our point of vulnerability in case a solar storm hits. A few months back, Elon Musk led SpaceX lost 40 of its Starlink satellites due to a solar storm. That was a minor G2 class solar storm. It is believed that a G5 class solar storm can be so powerful that it can push away even the largest satellites orbiting around the Earth. While scientists do add insulators to the satellites to protect them from major damage from solar storms, being pushed is not something satellites can prevent. Being pushed away may sound like a small issue, but these satellites have been placed in their orbits after careful consideration and the shift can cause the transmission to get distorted or even fail. That means, a powerful solar storm can put a stop to internet connectivity, mobile networks and navigation systems. If that happens, most of our emergency services, transport and communication systems will see disruptions and spell a disaster on Earth. Coupling that with power grid failures can truly take us back to the dark ages. At present, we do not have anything to protect us from such contingency. However, this is why scientists are focusing on building better prediction models for the Sun, as with enough time, the satellites can be moved to the night-side of the Earth to protect it from any severe damage.
While Loop C
While Loop C - Definition and Usage
- In C programming, the while statement is a looping statement.
- In C, the body of a while loop is executed only while its condition remains true.
Sample C Code - Explanation:
- In the first statement we declare and initialize the value of “i” to 1.
- In the next statement we define the while loop and its condition, “i” less than or equal to 5; while the condition holds, the body of the loop is executed.
- Inside the loop we print the value of “i”.
- Finally, the value of “i” is incremented by one on each pass through the loop.
Sample Output - Programming Examples
- In the output, the value of “i” increases by one on each pass of the while loop, and the loop keeps running as long as “i” is less than or equal to 5.
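A minimal C program matching the four steps in the explanation above (declare and initialize i to 1, loop while i is less than or equal to 5, print i, then increment i) would look like this; the exact layout of the original sample may have differed.

#include <stdio.h>

int main(void)
{
    int i = 1;                 /* declare and initialize i to 1 */

    while (i <= 5)             /* loop while i is less than or equal to 5 */
    {
        printf("%d\n", i);     /* print the current value of i */
        i++;                   /* increment i by one on each pass */
    }

    return 0;                  /* output: 1 2 3 4 5, each on its own line */
}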
Being prepared for life as a global citizen is becoming increasingly important for our children. At Brixworth, our modern foreign languages provision aims to start to equip our children with the skills they will need to live and work in an international arena, as well as celebrate the joy that being a global citizen brings with it. Our languages curriculum focuses on developing the skills needed to build capacity for learning languages in general, through the medium of French. Children learn about phonics, grammar and general communication skills, such as intonation, gesture and body language. Our teaching focuses on language learning skills and knowledge about language, teaching our children to be language “detectives” and better learners. We celebrate the languages some of our children already know, whether through home languages, experiences of languages earlier in their education or through extra-curricular activities and help them to see themselves not just as French speakers, but as linguists. We also not only learn about differences between cultures but similarities too and most importantly, that learning languages is great fun! The following resources are to support your child in practising what they have learned in class and in enjoying authentic French materials. Each PowerPoint includes new language content for the lesson and the French can be accessed by clicking on the pictures. What can I do to support my child? - Practise the finger rhymes by copying the actions in the pictures and joining in with the French. - Rehearse new language learned by listening and repeating. - Rehearse new language by clicking on the French and seeing if your child can respond with an action, where appropriate. - As a next step, encourage your child to predict the French that they will hear when you click on the picture. - Sing along to the karaoke versions of the songs together and all join in as a family!
Jinnah, Muhammad ?Ali (1876–1948) JINNAH, MUHAMMAD ˓ALI (1876–1948) Muhammad ˓Ali Jinnah was born on 25 December 1876 in Karachi and became one of the most celebrated leaders of the independence movement. Later he became the founder of Pakistan. He died one year after independence on 11 September 1948. People of Pakistan know him better by his title, Quaid-i Azam, meaning "the great leader." After earning his degree in law from London's famous Lincoln's Inn in 1896 and with a certificate to join the bar of any court in British India, he returned to his homeland. He settled in Bombay where he practiced law and soon rose to fame as the most distinguished attorney in the country. He split his time between the legal profession and politics. As a liberal nationalist trained in British constitutional and democratic tradition, he became a passionate advocate of Hindu-Muslim unity against British rule. For almost two decades, he devoted his energies to bringing the two communities together on one political platform by focusing on the idea of common political interests against British imperialism. By the early 1920s, he began to feel disenchanted by the leaders of the Indian National Congress Party. He did not feel comfortable with their militant, confrontational style with the British. Rather, he advocated the course of moderation and dialogue to win freedom. His real disappointment came on the issue of minority rights, specifically those of the Muslims who comprised nearly 20 percent of the population, with concentration in the eastern and western parts of the British Indian Empire. Given their numbers, they were not a minority in a traditional sense, but a people with a heritage of more than one thousand years of Muslim rule and separate sense of identity. Jinnah favored a tripartite understanding on the constitutional guarantees for the rights of the Muslims once India became independent. Muslim nationalism developed parallel to secular Indian nationalism in the later part of the nineteenth century. Muslims in the Indian subcontinent regarded themselves as a separate community with distinctive culture and civilization. But their political separatism was confined to the issue of minority rights that Muslim leaders like Jinnah strongly advocated in seeking representation in elected councils through separate electorates for Muslims. That ensured that Muslims would get adequate representation according to the size of their population. The dominant Hindu groups, including the Congress Party, were opposed to continuing any such arrangements once the British left. By the late 1930s, Jinnah began to argue for a separate country for the Muslims in the eastern and western fringes of British India. With the passage of the Lahore Resolution in 1940 by a great assembly of Muslim leaders from all over India, Jinnah formally demanded the creation of a Muslim homeland. For the next seven years, he mobilized the Muslim masses on the basis of separate nationhood and convinced the British that that was the only option to prevent a communal war between Hindus and Muslims. Although Jinnah invoked Islamic symbols for political mobilization, he was a liberal, constitutionalist politician with a rational and progressive outlook. See alsoPakistan, Islamic Republic of . Rasul Bakhsh Rais
Acknowledging that numerous bird species have expanded their ranges to higher latitudes and altitudes in response to the warming climate, researchers in Finland studied changes in bird populations and distributions over five decades. Habitat loss and fragmentation can hinder such range adjustments. The research shows that protected areas with high-quality habitats slowed the northern retreat of some species, and provided suitable new habitat for species expanding their ranges northward. Understanding the ecological and biogeographical mechanisms underpinning species range shifts is fundamental for designing effective conservation strategies and adaptations to climate change. The study looked at changes in abundance and distribution of 30 northern and 70 southern bird species inside and outside of conservation areas. Finnish conservation areas are mainly old-growth forest and peat lands, providing excellent habitat for many species. These areas are safe havens for northern birds, accommodating species whose abundance remains high compared to regions outside the conservation areas. Protected habitats also help certain southern species whose range is expanding northward into areas new to them. The findings, published November 4, 2018, in Global Change Biology, note that protected areas serve not only as valuable habitat, but as carbon repositories, another important role in mitigating climate change. Read the entire report here: https://onlinelibrary.wiley.com/doi/full/10.1111/gcb.14461 About the Author BirdWire is the free, twice-monthly e-newsletter from Bird Watcher’s Digest. We compile wild bird and birding-related news releases here on Out There With the Birds as they come in, and share a few of the most interesting and important with BirdWire subscribers on the first Saturday of each month. On the third Saturday of each month, BirdWire offers a bird-related quiz! Click here to subscribe to BirdWire »
In a Montessori class the child’s performance is not evaluated by conducting tests or examinations. Instead the child’s effort and work is respected as it is. The teacher, through extensive observation and record-keeping, plans individual projects to enable each child to learn what he needs in order to improve. Therefore, observation play an important part in the Montessori Method. Since the basis of the Montessori approach is based on the observation that children learn most effectively through direct experience and the process of Investigation and discovery, days are not divided into fixed time periods for each subjects. Instead, the trained adult offers Presentations of the materials either individually or in small groups. The children are then free to work with these materials as long and as often as necessary. The classrooms are filled with hands-on materials. Montessori believed that knowledge proceeds from the hand to the brain. Concrete materials make concepts real, and therefore easily internalized. The student works abstractly (paper and pencil) when he or she has internalized the pattern and no longer needs materials
Deafness can be defined as the inability to comprehend speech and language due to the loss of the sense of hearing. It occurs in different degrees of severity. The first is mild, where a person may find it hard to understand what is said to them, especially if there is a lot of background noise. The second is moderate, where a hearing aid is required to hear clearly. The third is severe, which forces someone to communicate through lip-reading or sign language even with the use of a hearing aid. And lastly is profound, in which individuals can no longer hear a sound even when it is highly amplified. It is an impairment that may be present from birth – this is called congenital deafness. On the other hand, deafness that occurs at a later time in a person's life is termed adventitious deafness. There are many causes of deafness. - Congenital Deafness - Prenatal exposure to disease: babies exposed to certain diseases in the womb may suffer damage to their hearing. These diseases include rubella / German measles, influenza and mumps. Another factor is exposure to drugs such as quinine or to methyl mercury. - Hereditary: deaf parents can pass the condition on to their children. - Genetic disorders: some of the many genetic disorders that can cause deafness are osteogenesis imperfecta, Trisomy 13 S and multiple lentigines syndrome. - Adventitious Deafness - Noise: this is the most common cause of adventitious deafness, accounting for the hearing loss of over one quarter of the people affected by it. Acoustic trauma occurs when the hearing mechanisms within the inner ear are damaged by loud noises such as an explosion, a gun fired near the ear, or prolonged exposure to loud music at a concert or through headphones. - Drugs: some drugs can affect hearing by destroying the nerves. Some of these are antibiotics, ethacrynic acid and drugs used for treating cancer. - Diseases / illnesses: certain diseases have been known to cause deafness – meningitis, mumps, cytomegalovirus, chicken pox, severe cases of jaundice, sickle cell disease, Lyme disease, diabetes, arthritis, hypothyroidism and even syphilis. - Trauma: an eardrum damaged or pierced by any object, a fractured skull or changes in air pressure. Congenital hearing loss – https://en.wikipedia.org/wiki/Congenital_hearing_loss
Here's one reason why space missions are so expensive: For every pound of payload launched into space, you've got to launch another 99 pounds, mostly in the form of fuel. As a result, it can cost $10,000 to put a pound of payload into Earth orbit. That's a big problem for long-term space missions to Mars or beyond, which would require bringing along incredible amounts of fuel and supplies. No surprise, then, that the hot topic among many space exploration experts is to make as much use as possible of the materials available at your destination so explorers don't have to bring everything with them. According to a new study investigating this idea, synthetic microorganisms could help make this a reality. Scientists reasoned that synthetic biology might help missions save costs by using these organisms to recycle waste and harvest useful materials at the destination, reducing the supplies that astronauts have to bring with them. The researchers investigated the potential impact of what they call "space synthetic biology" on a hypothetical six-person, 916-day round trip to Mars, involving 210 days of travel each way and a 496-day stay in a Martian surface habitat. One key product that synthetic organisms could manufacture is fuel. "Fuel will be about two-thirds of the mass on an Earth-to-Mars-to-Earth mission," says study lead author Amor Menezes, a systems engineer at the University of California, Berkeley. Menezes suggests the microbe Methanobacterium thermoautotrophicum could generate high-quality methane and oxygen fuel, reducing the mass of the manufacturing plant needed to fuel the return trip from Mars by 56 percent. The carbon dioxide that astronauts breathe out could also be used to manufacture additional fuel, perhaps for use in jetpacks. Food is another target—for example, crew meals constituted nearly two-thirds of the payload of a recent supply mission to the International Space Station, the researchers say. Menezes and his colleagues suggest that by using nutritionally rich food made from the bacteria known as Spirulina, the amount of food for a Mars mission could be reduced by 38 percent. The researchers do note that astronauts would likely tire of Spirulina food after a while, and suggest synthetic biology could also enhance and diversify the flavors and textures of this food—and maybe even help improve astronaut health. Synthetic organisms could manufacture helpful materials, too. The researchers suggest bringing along the bacteria Cupriavidus necator, which could synthesize the biopolymer polyhydroxybutyrate (PHB). Space explorers could then use this material to 3D-print structures and devices they need rather than bringing them along. Using this approach, the scientists say, space colonists could build a 4,200-cubic-foot structure while bringing along 85 percent less material. Space explorers could even use bacteria to help them make new medicines. That's helpful because pharmaceuticals astronauts bring on missions can go bad in space—radiation can cause 73 percent of solid drugs to expire after 880 days. By using genetically engineered microbes to manufacture the painkiller acetaminophen, astronauts could completely replenish their stocks of that drug within a few days. "Biology gives you the potential for a closed, self-renewing system," says synthetic biologist Jeffrey Way at Harvard University, who did not take part in this study but has done similar research. "All you need is solar energy and everything else could be recycled."
The x-coordinate of vector A is called its x-component and the y-coordinate of vector A is called its y-component. The vector x-component is a vector denoted by A_x, and the vector y-component is a vector denoted by A_y. In the example figures, the x-component of vector b is -3 while the x-component of vector c is +3; the y-component of vector A is equal to the y-component of vector B; vector A does not have any component along the y-axis and vector B does not have any component along the x-axis. The components of a vector can be positive or negative, but the magnitude of a vector cannot be negative, and it cannot be zero unless all of its components are zero. We know that in two dimensions a vector r can be written as r = xi + yj. In three dimensions the vector, being the sum of the vectors xi, yj and zk, is therefore r = xi + yj + zk. This formula, which expresses r in terms of i, j, k, x, y and z, is called the Cartesian representation of the vector in three dimensions. We call x, y and z the components of r along the OX, OY and OZ axes respectively. Any vector directed in two dimensions can be thought of as having an influence in two different directions; that is, it can be thought of as having two parts. Each part of a two-dimensional vector is known as a component. The components of a vector depict the influence of that vector in a given direction. Since a scalene triangle exists, three unequal vectors can add up to zero. The conditions for three vectors to form a triangle are: the sum of the magnitudes of any two of them must be greater than the magnitude of the third, and the magnitude of the sum of two of the vectors must equal the magnitude of the third. The unit vector i has a magnitude of 1 and its direction is along the positive x-axis of the rectangular coordinate system. The unit vector j has a magnitude of 1 and its direction is along the positive y-axis of the rectangular coordinate system. The z-component of a vector is defined in the same way along the z-axis. A component such as v_x is not a vector, since it is only one number. It is important to note that the x-component of a vector specifies the difference between the x-coordinate of the tip of the vector and the x-coordinate of the tail of the vector. Can two vectors of unequal magnitude sum to zero? Two vectors of equal magnitude that point in opposite directions will sum to zero, but two vectors of unequal magnitude can never sum to zero: even if they point along the same line, since their magnitudes are different, the sum will not be zero. In projectile motion, the x-component of the velocity remains constant at its initial value, v_x = v_0x, and the x-component of the acceleration is a_x = 0 m/s². The magnitude of a vector is the length of the vector. The magnitude of the vector a is denoted ∥a∥. See the introduction to vectors for more about the magnitude of a vector. For a two-dimensional vector a = (a_1, a_2), the formula for its magnitude is ∥a∥ = √(a_1² + a_2²). These parts of a force are called the components of the force. The component that pushes right or left is called the x-component, and the part that pushes up or down is called the y-component. Mathematically, the components act like shadows of the force vector on the coordinate axes.
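To make the component and magnitude relationships above concrete, here is a small, self-contained Python sketch. The 5-unit magnitude and 30° direction are invented example values; they do not come from the figures referenced in the text.

```python
# Minimal sketch: resolving a 2-D vector into components and recombining them.
import math

magnitude = 5.0                       # |a|, an invented example value
angle = math.radians(30.0)            # direction measured from the +x axis

# Components: the "shadows" of the vector on the coordinate axes.
a_x = magnitude * math.cos(angle)     # components may be positive or negative
a_y = magnitude * math.sin(angle)

# Recovering the magnitude from the components: |a| = sqrt(a_x**2 + a_y**2).
recovered = math.hypot(a_x, a_y)
print(f"a_x = {a_x:.3f}, a_y = {a_y:.3f}, |a| = {recovered:.3f}")

# Two vectors of equal magnitude pointing in opposite directions sum to zero;
# two vectors of unequal magnitude along the same line never do.
b = (a_x, a_y)
c = (-a_x, -a_y)
print(tuple(round(bi + ci, 10) for bi, ci in zip(b, c)))   # -> (0.0, 0.0)
```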
Origami is the art of folding paper. The name comes from Japanese, adopted into English in the 1960s. Before 1900, there were several isolated folding traditions which seem to have developed entirely separately: In Europe, the first folding traditions were not with paper but with cloth. Napkin folding traditions appeared in the 16th century, and were very well documented, though aside from some simpler forms they were discontinued during the 1800s, due to the time and effort involved in folding cloth. Paper folding emerged as a form of children's recreation and a pedagogical tool, particularly during the 19th century and later. European folding generally used 45° angles, and used mostly squares or rectangles. Chinese folding traditions are for the most part associated with the practice of burning paper objects at a funeral, as a way of providing for the deceased in the afterworld. As a result, there is an emphasis on inanimate objects; gold ingots, or yuenbao, are a common subject. Other uses of folding included auspicious animals such as the frog or turtle. Traditional Japanese origami had its beginnings in the formal gift-wrapping practices of 17th century Japan, and by the mid-18th century it was an established and recognizable tradition. Several origami books were published in Japan during the early 19th century. Contrary to modern practice, it often used cuts, and implemented a variety of starting shapes: squares, rectangles, octagons, hexagons. The traditional Japanese repertoire uses 22.5° angles extensively, and used more sculptural shaping than other traditions. The pedagogical methods of Friedrich Froebel, developed in the 1840s and 50s, included paper folding to teach geometry and encourage imaginative play through interpretation of folded objects. During Japan's industrialization process, parts of the German education system, including Froebel's kindergarten method, were adapted for use in Japanese schools. The folding styles were assimilated into Japanese origami, and laid the ground for future developments. The restrictive use of squares with color on one side is credited to Froebel's influence. During the early 1900s, several Japanese folders started creating and publishing their own origami designs, notably Akira Yoshizawa, Kosho Uchiyama, and Toshie Takahama. Akira Yoshizawa's innovations in particular sparked an era of expanded exploration, and many origami artists credit him as the founder of modern origami. He was the first to create origami as sculptural art, and introduced a number of major technical innovations: backcoating, where two sheets are pasted together to combine the properties of both; wetfolding, where a sheet of thick paper is dampened before folding to loosen the fibers, allowing for better manipulation of the paper; a method of diagramming, which was expanded upon by Samuel Randlett and is now referred to as the Yoshizawa-Randlett diagramming system; partial folds and creases, used for sculptural effect; and in general a greater level of complexity and liveliness. (more to come: origami after World War II in America and Europe, the technical origami movement, modular and tessellation genres, etc.) - Paper Fundamentals - Joan Sallas, "Gefaltete Schönheit"
Who were the First Americans? It’s a question that for decades has divided researchers, who have proposed competing theories as to how humans moved from Eurasia into North America. The question is far from settled, though it is clear that by about 14,500 years ago (and perhaps as far back as 30,000 years ago) humans had moved from Siberia to present-day Alaska and begun to spread throughout the Americas. Now, two new studies published simultaneously in Nature are giving some more insight into who those First Americans might have been, in addition to information about later waves of migration that contributed to the Native Americans and the genetically and culturally distinct Inuit still in the region today. Two teams of researchers, one led by Pavel Flegontov, from the University of Ostrava in the Czech Republic, and the other by University of Copenhagen geneticist Martin Sikora, used information from ancient genomes to piece together some of the broad population movements that brought successive waves of humans to North America. They found the process was more complex than previously thought, involving multiple waves of migration and interactions between different groups at various stages. But the work is revealing some of the nuanced history of present-day native populations in the Americas, information that helps tie their lineages to the grander epic of human migration across the globe. Into the Arctic The more ancient part of the story begins with the Sikora paper, which compiled data from 34 ancient genomes across Siberia and northern China ranging from 31,600 to 600 years ago. Analyzing and comparing these genomes to each other, the researchers identify an initial population of humans called the Ancient North Siberians that moved into north-eastern Siberia about 38,000 years ago. Then, around 20,000 years ago, a group of East Asians moved north and intermixed with these people to create a new lineage that would begin moving east into the then-dry Bering Land Bridge connecting Siberia to the Americas. This is the group of people that would give rise to the First Americans. The authors estimate the First Americans split off from other Siberian populations around 24,000 years ago, which is consistent with some of the earlier predictions for when the Americas were first settled. Not everyone was destined to be a First American, though. Some of those ancient Beringians stuck around near the land bridge, becoming what the authors call the Ancient Paleo-Siberians. A portion of the Ancient Paleo-Siberian population would later move back into Siberia and mix with groups there, while others would stick around and become part of later migrations into North America. Those that moved onwards to North America are called the Paleo-Eskimos. The First Inuit Flegontov and his team pick up the story of those ancient Eskimos around 5,000 years ago, using a different group of 48 genomes from ancient individuals across the American Arctic and Siberia spanning from around 7,000 years ago to just a few hundred. Five thousand years ago, the Paleo-Eskimos moved into North America, interbreeding with people already there who had descended from the First Americans. Traces of this migration show up today in the genomes of Inuit people from the region. But there was another, even more recent migration to the region that’s evident in the genomes of modern Inuit as well. Around 800 years ago, yet another wave of Siberians crossed from Beringia to Alaska and integrated into the Inuit populations there. 
The present-day Inuit, then, are a mosaic of at least three migrations from Siberia to the Americas, each successive wave intermingling with the peoples already there. The main takeaway from these studies, and others looking at the history of human migration, is that the process of human expansion is gradual and far from linear. Humans crossed from Siberia to the Americas on multiple occasions, and each wave of new arrivals was integrated into the people already living there. There’s also evidence that people crossed back from the Americas and Beringia to Siberia — a reverse migration that belies the notion of constant human expansion. Multiple waves of humanity built the native populations that live in the far north today. Their genetic history is rich and varied, but large parts of it are still unknown. There is likely further work to be done to understand the cultural diversity of these ancient people, from the most ancient Siberians through to the more recent Paleo-Eskimo populations. Tying evidence from archaeological digs, including bones and cultural artifacts, to the genetic data will begin to add that depth. Perhaps soon we’ll begin to know just who these peoples were, who braved the Arctic cold as they set out across unknown frontiers.
The North Pole is the Earth's northernmost geographic point, located at the northern end of the Earth's axis. The pole lies in the Arctic Ocean more than 720 km north of Ellesmere Island at a point where the Arctic Ocean is 4087 m deep and usually covered with drifting pack ice. The pole experiences 6 months of complete sunlight and 6 months of night each year; from it, all directions are south. Because the Earth's surface areas near the North and South poles receive the sun's rays at the most slanted angle, they absorb the least heat. Centrifugal force causes the Earth to bulge outwards at the equator; hence, it is slightly flattened at the poles. However, during the International Geophysical Year (1957-58) it was found that the Earth is very slightly pear shaped, with the North Pole at the smaller end. This bulge (about 15 m high) covers millions of square kilometres around the pole. The North Pole did not become a goal of Arctic Exploration until fairly late; the few early expeditions that tried to reach it were looking for a polar route to the East rather than for the pole itself. W.E. Parry left Spitsbergen to try to reach the pole in 1827 and attained 82°45'; further expeditions, American and British, took place in the 1860s and 1870s. It is widely accepted today that the pole was first reached by the American explorer Robert E. Peary, who started from Ellesmere Island on 1 March 1909. With Peary on his final dash were his dog driver Matthew Henson and 4 Inuit. It is claimed that they arrived at the pole on April 6 and remained there 30 hours. A competing claim was made by F.A. Cook, a former traveller with Peary, who said he had reached the pole on 21 April 1908 and had remained there 2 days. The controversy still continues, but Peary's claim seems the more valid and has been accepted by the US Congress and geographical institutions in many countries. In 1926 Richard E. Byrd and Floyd Bennett made the first airplane flight over the pole; in the same year, it was reached by dirigible by the international team of Roald Amundsen, Lincoln Ellsworth and Umberto Nobile. The pole was visited by the US nuclear submarine Nautilus in 1958. Since 1907 various Canadians have invoked what is known as the "sector principle" as a possible legal basis to a claim for sovereignty in the polar region. By this claim Canada would have jurisdiction over a wedge-shaped segment between the line of longitude 60° west of Greenwich (north from a point on the meridian that is near Ellesmere Island) and the meridian 141° west of Greenwich (forming the border between the Yukon Territory and Alaska); these meridians converge (as do all meridians of the Northern Hemisphere) at the North Pole. The theory has not received general acceptance as a legal basis for a claim. The North Pole is also the mythical home of Santa Claus. As a public service, Canada Post and its unions provide mail service to Santa at the North Pole, Canada, H0H 0H0. See also Magnetic Poles. A. Cooke and C. Holland, The Exploration of Northern Canada, 500 to 1920 (1978); M. Zaslow, ed, A Century of Canada's Arctic Islands (1981).
Each spring, huge patches of phytoplankton bloom in the oceans, turning cold, blue waters into teeming green pools of microbial life. This ocean “greening,” which can be seen from space, mirrors the springtime thaw on land. But while spring arrives gradually on land, with a few blades here and some buds there, the oceans bloom seemingly overnight. “If you go and look in the ocean and try to sample in deep winter, there’s little phytoplankton,” says Raffaele Ferrari, the Breene M. Kerr Professor of Oceanography in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s like going into a desert. And then all of a sudden you have this bloom explosion, and it’s like a jungle. There is an ongoing debate as to what triggers the bloom onset.” Ferrari and John Taylor, a former postdoc at MIT and now a lecturer in oceanography at the University of Cambridge in the U.K., have identified where blooms are most likely to start. The team found that phytoplankton grow up along ocean fronts, at the boundaries between cold and warm currents. This explains why the ocean does not turn green everywhere at once, but rather develops green streaks that track fronts. Through numerical simulations, Ferrari and Taylor found that at these fronts, warm water slides over cold, denser water, creating a hospitable environment for microorganisms. The findings, published online last week in Geophysical Research Letters, may help scientists predict where blooms will spring up. Knowing how and where blooms occur may help scientists gauge an ocean’s productivity from year to year. The tiny microorganisms collectively known as phytoplankton are the foundation of the marine food web and account for half of the world’s photosynthetic activity, consuming carbon dioxide and sunlight to produce energy. Eric D’Asaro, a professor of oceanography at the University of Washington, says predicting phytoplankton blooms may help determine the amount of carbon dioxide taken up and stored by the oceans. “Phytoplankton … take carbon dioxide out of the atmosphere, including the extra carbon dioxide that we have put there,” says D’Asaro, who was not involved in the research. “A fraction of this organic carbon then sinks to deeper depths in the ocean, thereby removing it from the atmosphere and reducing the amount of greenhouse warming.” Ferrari and Taylor say their findings suggest that ocean fronts are hotspots for phytoplankton growth and may be “crucial players” in the global carbon cycle. Seeing the light Since phytoplankton depend on sunlight to grow, they need to stay within 10 to 100 meters of the surface, in the euphotic layer where sunlight can easily penetrate. However, in winter, intense cooling by atmospheric storms causes the surface waters to increase in density and sink. The sinking waters suck organisms down deep into the ocean, away from sustaining light. Within this “mixed layer” churned by cooling, organisms die off and eventually sink into the ocean abyss. In the winter, the ocean’s mixed layer runs deep, creating a “desert” with very little signs of life. “Life on earth depends on light,” Ferrari says, “and phytoplankton do not see much light in winter.” Ferrari and Taylor recently published a paper in Limnology and Oceanography where they identified a physical explanation for the onset of biological blooms. The team found that in late winter, when harsh atmospheric cooling gives way to springtime warming, mixing in the ocean subsides. 
Using numerical simulations, the researchers showed that decreased cooling turns the ocean’s mixed layer into a quiet environment, the top of which has sufficient light to host microbial growth. The team then proposed that phytoplankton blooms start at fronts, because fronts substantially reduce mixing in the upper ocean. Hence, they reasoned, phytoplankton find hospitable conditions at fronts — even in winter, when cooling has not yet subsided. They reasoned that the overall warming of the ocean in spring encourages phytoplankton to grow beyond a front’s boundaries, into large, sprawling blooms. D’Asaro says the team’s findings present a compelling mechanism for ocean blooms. However, he adds that to fully understand the causes of ocean greening, one has to consider biology along with physics. “In particular, [blooms depend on] the presence or absence of planktonic animals that could rapidly eat the phytoplankton as they grow,” D’Asaro says. “It’s like a meadow — the grass will not grow tall in the spring if there are a lot of cows in the meadow.” Ferrari plans to test the theory next year off the coast of Ireland. He hopes to deploy gliders, autonomous vehicles that will go up and down a water column for a year, monitoring temperature, salinity, chlorophyll and light penetration. He also plans to deploy a meteorological buoy to track surface fluxes of heat and winds. “We’re going to be able to predict according to this argument where the blooms occur, and the gliders will tell us whether our prediction is right,” Ferrari says. “We think it’s a pretty general principle that must hold.”
The American beech (Fagus grandifolia) lends its striking height and massive spread -- each measuring about 50 to 60 feet at maturity -- to landscapes in U.S. Department of Agriculture plant hardiness zones 3 through 9. In addition to its smooth, light gray bark, beech trees -- which often serve as shade trees -- are known for their distinctive, angular seeds, commonly known as beech nuts. The seeds of the American beech reside in a hard, light-brown, spiny bur known as an involucre. Each of these casings contains two to four seeds, each of which features three sides and an angular shape. American beech tree seeds measure about 1/2 inch to 1 inch in length and are brown in color with a smooth, shiny texture. Although the American beech produces its seeds over the course of a single year, this tree tends to produce its largest crops of viable seed every two to three years, shedding its seeds in the late summer and early fall seasons. These seeds must be collected within two weeks if you wish to use them for planting. American beech seeds have only a short period of viability and should be sown upon ripening in the fall. However, you may stratify seeds, ideally at 41 degrees Fahrenheit for 90 days, for spring planting. Beech trees don't start producing large numbers of seeds until about 40 years into their life, and they commonly produce the most seeds after a moderate summer season. Birds and mammals such as squirrels, foxes, porcupines, raccoons, pheasants, black bears and wild turkeys often eat the seeds of the beech tree, as do people. Beech seeds, which feature a nutty flavor and sometimes serve as a coffee substitute, contain up to 22 percent protein. According to Plants for a Future, raw seeds should not be consumed in large quantities as they are slightly toxic and may lead to enteritis. Typically, the American beech is planted from nursery-bought seedlings rather than seeds when used in the home landscape. In lawns and gardens, the shed seeds of the beech tree do not pose a significant litter problem. In addition to their edible uses, the oil from beech seeds serves as fuel for lamps. - University of Florida Environmental Horticulture Institute of Food and Agricultural Sciences: Fagus Grandifolia, American Beech - Arbor Day Foundation: American Beech, Fagus Grandifolia - Palomar College: Botany 115 Terminology: Fruit Terminology Part 2 - Plants for a Future: Fagus Grandifolia - The University of Texas at Austin Lady Bird Johnson Wildflower Center: Fagus Grandifolia - The Encyclopedia of Fruits and Nuts: Jules Janick, R.E. Paull
OPAL surveys are a fantastic way to take teaching outdoors, learn new skills and contribute to real scientific research. Take part in OPAL surveys: - Find simplified survey materials for young children in our Resources section. - Download easy-to-print Bugs Count recording sheets on the Bugs Count Survey page. - Tree health survey – Ecologist Mike Dilger introduces our guide to examining trees in your garden, school or park. - OPAL Learning Lab – find out how you can get involved in OPAL surveys. Play in our Learning Lab to find out more. - Download easy-to-print recording sheets for schools (black and white). Popular teacher downloads: CREST – OPAL activities. CREST is a science award scheme for schools and colleges run by the British Science Association. Earn your awards by completing science projects and use OPAL resources to help you. Get guidance on adapting OPAL surveys for your lessons: our free lesson plans and curriculum guides take you step-by-step through teaching OPAL surveys to your pupils. From animal tracking to tree maths, learn more about the environment with these curriculum-linked activities. Explore the great outdoors in every season, and take part in follow-up activities when you get back to the classroom. Discover the world through a series of challenges, and earn points as you go. Can you top the leaderboard? Look out for the OPAL missions to earn your Nature Explorer reward: - Natural History Museum – Education: activity ideas, online resources, and museum events for schools. - Met Office – Learning: curriculum-linked activities to support the teaching of weather and climate. - TES – Teaching resources: a huge database to support all levels and subjects, including science. - School Science – Resources: a large selection specifically aimed at teaching science to ages 5-19. - ARKive – Education: teaching resources for biology subjects such as adaptation, natural selection and classification.
"Slide in Parque" by Fotoblog Rare from VLC, SP - Flickr. Licensed under CC BY 2.0 via Wikimedia Commons. Playground slides are found in parks, schools, playgrounds and backyards. The slide may be flat, or half cylindrical or tubular to prevent falls. Slides are usually constructed of plastic or metal and they have a smooth surface that is either straight or wavy. The user, typically a child, climbs to the top of the slide via a ladder or stairs and sits down on the top of the slide and "slides" down the slide. In Australia the playground slide is known as a slide, slippery slide or slippery dip depending on the region. Sliding pond or sliding pon is a term used in the New York City area to denote a playground slide. The slide was invented by Charles Wicksteed, and the first slide, made of planks of wood, was installed in Wicksteed Park in 1922. The discovery of Wicksteed's oldest slide was announced by the company in 2013. A playground slide may be wrapped around a central pole to form a descending spiral forming a simple helter skelter. Playground slides are associated with several types of injury. The most obvious is that when a slide is not enclosed and is elevated above the playground surface, then users may fall off and incur bumps, bruises, sprains, broken bones, or traumatic head injuries. Some materials, such as metal, may become very hot during warm, sunny weather. Some efforts to keep children safe on slides may do more harm than good. Rather than letting young children play on slides by themselves, some parents seat the children on the adult's lap and go down the slide together. If the child's shoe catches on the edge of the slide, however, this arrangement frequently results in the child's leg being broken. If the child had been permitted to use the slide independently, then this injury would not happen, because when the shoe caught, the child would have stopped sliding rather than being propelled down the slide by the adult's weight.
Antarctica in 1993. Some 4,000 scientists and other personnel from two dozen nations continued to do research aimed at understanding the Antarctic and its involvement in global environmental change. They and some 6,500 tourists and adventurers were the only human visitors to the region, which comprises 9% of the Earth's land area and 8% of its oceans. The 40 Antarctic Treaty nations met in Italy in November 1992--the latest of numerous consultative meetings held since the treaty entered into force in 1961. Delegates adopted recommendations about strengthening plans for specially protected areas, increasing Antarctic global change research, and increasing environmental monitoring and international data management. By October 1993 most of the treaty adherents, including all 26 consultative parties, had signed a comprehensive Protocol on Environmental Protection, drafted in Madrid in 1991. One nation, Spain, had ratified the protocol, but several nations were not expected to ratify until 1994. The U.S. Senate approved ratification in October 1992, and implementing legislation was still to be adopted. The protocol strengthened environmental protection measures and banned mining in Antarctica. A U.S. court decision in January applied the National Environmental Policy Act (NEPA) to federal activities in Antarctica. NEPA had earlier applied only domestically, while Executive Order 12114 covered the environmental aspects of U.S. activities overseas. The Department of Justice decided "not to challenge the court's precise holding" but said that "the Administration does not embrace language in the opinion which may be interpreted to extend beyond this"; overseas federal activities in places other than Antarctica were still considered covered by the executive order, not NEPA. Specialists from Argentina and The Netherlands removed the remaining fuel and lubricants from the wrecked Argentine ship Bahía Paraíso. The ship had struck a rock in January 1989 and sunk a kilometre and a half from Palmer Station, a U.S. research facility, resulting in Antarctica's largest oil spill and causing considerable animal and plant mortality. The complex oil-removal project, which involved, among other operations, 167 dives, extracted 148,390 litres (39,200 gal) from the ship's tanks and engines. The hulk, no longer considered a significant environmental threat, was expected to be left where it was. The copious biota that live and breed around Palmer Station and the wreck site had been studied intensively over the past quarter century, and the U.S. National Science Foundation in 1992 declared the area a long-term ecological research site, one of only 18 worldwide. Scientists who examined the area two years after the wreck found some effects remaining from the initial spillage of fuel, but said the volatility of the fluid, the amount spilled (640,000 litres--170,000 gal), and the dynamic weather and current conditions tended to minimize long-term contamination. Among the most widely reported scientific findings from Antarctica was the current status of the ozone hole. In October 1993 several research stations in Antarctica reported the lowest stratospheric ozone levels ever measured anywhere above Earth. Chlorine from chlorofluorocarbons (CFCs), man-made compounds, was considered the major cause of stratospheric ozone depletion, although a laboratory experiment in 1993 indicated that bromine (also from industrial sources) might be responsible for up to 30% of the Antarctic ozone loss.
A natural cause of the ozone hole--chlorine from volcanoes, particularly Mt. Erebus in Antarctica--had been suggested, but most scientists denied that volcanic chlorine could be a cause, because it combines with other elements in the lower atmosphere. Sulfur dioxide injected into the stratosphere by the 1991 eruption of Mt. Pinatubo in the Philippines, however, may have increased the chemical effectiveness in destroying ozone of chlorine and bromine already present, reducing ozone levels worldwide. The ozone hole allows harmfully high levels of ultraviolet rays (UV) from the Sun to reach the Earth’s surface. Ocean biologists working in Antarctica estimated that the increased UV reduces the productivity of marine phytoplankton in the marginal ice zone by about seven million tons of carbon a year, or about 2% of the total. Phytoplankton are tiny plants at the base of the Antarctic Ocean food chain. Scientists did not yet know if the populations of krill and other Antarctic sea life had been affected by the reduced phytoplankton. U.S. researchers at the geographic South Pole announced in June the discovery of evidence of cosmic structures that formed just one million years after the universe began. Using two specially designed radio telescopes and taking advantage of the extremely dry and cold--and therefore clear--air over the Antarctic interior, they detected small temperature fluctuations in microwave radiation left over after the Big Bang. On Vega Island, near the Antarctic Peninsula, Argentine and U.S. paleontologists discovered bird fossils that shed light on how birds were evolving 65 million-70 million years ago. The fossils suggested a creature with the body of a shore bird and the head of a duck. The bird lived at a key time in avian evolution, when primitive birds were being replaced by modern, toothless types. The discovery figured in one of the hottest debates in paleontology: the cause of the mass extinctions at the end of the Cretaceous. "You can’t find this great horizon of death in Antarctica," one geologist said. "The rock record across the Antarctic Cretaceous-Tertiary boundary is among the best in the world--it’s incredibly fossiliferous--but we don’t see an abrupt extinction of life at that time." The bird and other recent fossil finds indicated that the polar regions had a much more important role in evolution than was generally thought. The worldwide search for hard clues to climatic warming produced interesting recent results in and near Antarctica, although most were too localized for extrapolation to the global situation. British scientists reported that South Georgia’s smaller land glaciers had been receding since the 1930s, and its larger valley and tidewater glaciers since the 1970s; the climate in this area had been warming since the 1950s. The Wordie Ice Shelf, on the west coast of the Antarctic Peninsula, had been retreating steadily since the mid-1960s and had had a big breakout in 1988-89; higher mean annual temperatures in the area were the probable cause. New Zealand scientists reported a dramatic increase since 1980 in the number of Adélie penguins in the Ross Sea region, probably a result of a recent warming of the Ross Sea climate. A Russian scientist suggested that monthly changes in the thickness and area of Antarctic sea ice accounted for a possible 3° C (5.4° F) increase in planetary mean air temperature. 
A U.S.-led team analyzed the works of many investigators to come up with a new estimate of Antarctica's "mass balance"--the difference between its receipt of freshwater (as snow and ice) and discharge (as iceberg calving and melting); they found a negative mass balance, or net loss, of 469 trillion tons per year. The new estimate departed from earlier calculations that indicated Antarctica was in mass balance. The net discharge might solve the mystery of an unattributed rise in the global sea level of 0.45 mm (0.02 in) per year. Global questions aside, one of Antarctica's glacial recessions left a poignant postscript to a 1940-41 U.S. expedition that occupied Stonington Island just off the west coast of the Antarctic Peninsula. Then, as reported in the March 1993 National Geographic, a glacier bridged the small strait between the island and the shore, giving the expedition's Curtiss-Wright Condor biplane the only route from the ship to a skiway behind the station. Called East Base and not occupied since 1948, the station had the oldest U.S. structures in Antarctica, and the Antarctic Treaty nations in 1989 declared it a historic site. When crews returned to make a small museum in one of the buildings, the glacial ramp--so critical to the 1940 expedition--was gone, replaced by open water and an ice cliff. This updates the article Antarctica.
On direct examination, a lawyer is not allowed to ask a witness a leading question because the court wants testimony to come directly from the witness, not from a lawyer through his questions. Likewise, in the classroom the goal is for the student to do the thinking. Non-leading questions leave the field completely open and invite student participation in the conversation. They put the responsibility for thinking clearly in the hands of the students. We think that context determines whether a question is non-leading. Our emphasis is on facilitating student thinking, rather than simply extracting information. When John McKinstry's students bring up the concept of circumference, John follows up with what, in this context, is a non-leading question: "Why is circumference important?" More typically, non-leading questions look like these: "What are you thinking?" "Why are you asking that question?" "Can you explain why you did it this way?" "Why does this work?"
The Human Evolution and Development. The Stages in Human Evolution: Sociological and Anthropological Concepts on the Origin of Man. Human evolution is usually connected with the development of the different species of primates, which were able to evolve over around 50 million years. Based on anthropological and archaeological research, there were different stages in the development of the ancestors of modern man. Modern man evolved about 250,000 years ago, identified as Homo sapiens ("Wise Man"). The stages on the timeline of human evolution may be followed in this chronological order: 1) About 40 million years ago. This is the early development of the human species through the evolution of the manlike primates known as hominids. The common relics of hominids were "Ramapithecus" (14 million years ago) and "Australopithecus" (5 million years ago), found in India and Africa. They had small brains, could walk upright, and frightened enemies or predators with the use of stones and sticks. 2) About 1-2 million years ago. In this period the apelike men, believed to be the first manlike creatures, had small brains, stood 4-5 feet tall, walked in an upright position and were able to use stone tools as weapons and for protection against their enemies. The two common relics of these apelike species were "Zinjanthropus" (1.75 million years ago) and "Lake Turkana" (2-3 million years ago), found in Africa. 3) About 500,000 years ago. The manlike creatures of this period, direct ancestors of modern man, lived in Asia (particularly in China and Indonesia), Africa and Europe. Their brains were almost the same as modern man's, and they stood about 5 to 5.2 feet tall. They were able to use weapons for hunting and for protection against enemies, just like other human species. The common human relics of this period include "Pithecanthropus erectus" (700,000 years ago) and "Sinanthropus pekinensis" (500,000-750,000 years ago). 4) About 250,000 years ago. The human species of this period are believed to be the direct ancestors of modern man in Europe and Asia. They were considered primitive or prehistoric men who lived in caves and used stone implements in hunting and fishing, and later agriculture. The common human species in this period were Neanderthal Man (70,000 years ago) and Cro-Magnon Man (35,000 years ago). These are the stages of human evolution from 40 million years ago to 35,000 years ago: 1. Hominid (Manlike Primates) - The development of the different species of primates, which were able to evolve from around 40 million years ago. There have been various relics of hominids which could be described as manlike primates: a) "Ramapithecus" - This hominid is believed to have lived 14 million years ago; its remains were found in the Siwalik Hills of India. It could stand upright and used stones and sticks to frighten its enemies. A specimen of this kind was found by Mrs. Mary Leakey in the volcanic ash of Laetoli, Tanzania, East Africa, in 1975. b) "Lucy" - The American archaeologist Donald C. Johanson discovered a whole skeleton of a teenage girl at Hadar, Addis Ababa, Ethiopia. c) "Australopithecus" - Believed to have lived in Africa about 5 million years ago. It had a small brain but could walk upright and used simple tools. 2. Homo habilis ("Handy Man") - The apelike men who used stone tools as weapons and for protection against their enemies.
a) "Zinjanthropus" - This species was about 4 feet tall, could walk upright, and had a small brain. It used crude stone weapons for protection against predators. It was discovered by Dr. Louis S.B. Leakey (husband of Mrs. Mary Leakey) in Olduvai Gorge, Tanzania, East Africa, in 1959, and is believed to have lived about 1.75 million years ago. b) "Lake Turkana" ("1470 Man") - This species was about 5 feet tall and walked upright. It used more refined stone tools and had a brain double the size of a chimpanzee's brain. The find, which consisted of a shattered skull and leg bones, was excavated at Lake Turkana, Kenya, East Africa, by Dr. Richard Leakey (the son of the famous Dr. and Mrs. Leakey) in 1972. 3. Homo erectus ("Upright Man") - Believed to be the first manlike creature, living about 500,000 years ago in Asia, Africa and Europe. This species could walk upright and had almost the same brain as modern man. It made refined stone tools for hunting and weapons for protection against enemies. (a) "Pithecanthropus erectus" ("Java Man") - This was discovered by Eugene Dubois at Trinil, Java, Indonesia, in 1891, and was then called the "Java Man". The physical characteristics of this Homo erectus were: about 5 feet tall; could walk erect; heavy and chinless jaw; a hairy body like modern man's. (b) "Sinanthropus pekinensis" ("Peking Man") - This Homo erectus species was discovered at Choukoutien village, Beijing, China, in 1929. It was about 5'2" tall, could walk upright, and had a brain almost as large as modern man's; it is believed to have lived 500,000 years ago. 4. Homo sapiens ("Wise Man") - Believed to be the direct ancestor of modern man, appearing about 250,000 years ago. They were physically similar to modern man. They originated as primitive men whose activities were largely dependent on hunting, fishing and agriculture. They buried their dead, used hand tools and had religion. (a) Neanderthal Man - Neanderthal man was discovered in a cave in the Neanderthal Valley near Dusseldorf, Germany, in 1856. It is believed to have appeared in the high temperate zones of Europe and Asia about 70,000 years ago. Neanderthals were heavily built with powerful jaws, brutish in appearance and primitively intelligent. They usually lived in caves and depended on hunting and fishing. They had religious beliefs and were more advanced than Homo erectus. (b) Cro-Magnon Man - This was a stronger Homo sapiens than the Neanderthal, discovered by the French archaeologist Louis Lartet in the Cro-Magnon cave at Les Eyzies in southern France. Cro-Magnons are believed to have lived in Europe, Asia and Africa; their remains have been found in Europe, including Italy, Spain, France and Russia, as well as in areas all over Africa. They were about 5 feet 11 inches tall, with a more developed brain than their predecessors. As prehistoric men, they had stone implements, art objects and consistent hunting skills. The evolution of primitive men is laid out in three historic periods. 1. Paleolithic Period (Old Stone Age: 3 million years to 8,000 B.C.) The common primitive men identified in this period were the Homo erectus, such as Java Man and Peking Man, and the Homo sapiens, such as Neanderthal Man and Cro-Magnon Man. The characteristics of this period were: a) Rough stone implements were used as the main weapons and tools, such as chisels, knives, spears and others. b) They lived by hunting, fishing and gathering whatever fruits were available in the forests.
c) They were able to use fire, which they used to cook their food and to protect themselves from the cold. d) They lived in caves and later learned to build primitive shelters. e) They learned to develop primitive arts, personal ornaments, and other art forms. 2. Neolithic Period (New Stone Age: 8,000-4,000 B.C.) This started with the disappearance of the Cro-Magnon and the appearance of new people who are considered modern man. Its characteristics were: a) The development of refined stone tools and weapons. b) They made their own houses. c) They learned to domesticate animals such as horses, pigs, dogs and cattle. d) They learned to weave cloth to protect their skin. e) They began to cut trees, which were used as boats for transportation and for fishing in the rivers. 3. Age of Metals (4,000 B.C.-1,500 B.C.) The use of metals such as bronze, copper, and iron produced a new historical development in the cradle civilizations of Egypt, Mesopotamia and Persia, as well as India and China, which later spread throughout Asia. Civilization, defined as a more developed social, cultural, political and economic system, spread through the Middle East, Asia and even South America. There were already direct contacts among tribes, kingdoms, empires and later states, whose constant political activities took the form of conquest, wars and trade.
Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" is a grammatically correct sentence in American English, used as an example of how homonyms and homophones can be used to create complicated linguistic constructs. It has been discussed in literature in various forms since 1967, when it appeared in Dmitri Borgmann's Beyond Language: Adventures in Word and Thought. The sentence uses three distinct meanings of the word buffalo: - the city of Buffalo, New York; - the verb (uncommon in usage) to buffalo, meaning "to bully, harass, or intimidate" or "to baffle"; and - the animal, bison (often called buffalo in North America). The sentence can be phrased differently as "Buffalo from Buffalo that are intimidated by buffalo from Buffalo intimidate buffalo from Buffalo." The sentence is unpunctuated and uses three different readings of the word "buffalo". In order of their first use, these are: - a. the city of Buffalo, New York, United States, which is used as a noun adjunct in the sentence and is followed by the animal; - n. the noun buffalo (American bison), an animal, in the plural (equivalent to "buffaloes" or "buffalos"), in order to avoid articles; - v. the verb "buffalo" meaning to outwit, confuse, deceive, intimidate, or baffle. The sentence is syntactically ambiguous; however, one possible parse (marking each "buffalo" with its part of speech as shown above) would be as follows: - Buffalo(a) buffalo(n) Buffalo(a) buffalo(n) buffalo(v) buffalo(v) Buffalo(a) buffalo(n). The sentence uses a restrictive clause, so there are no commas, nor is there the word "which," as in, "Buffalo buffalo, which Buffalo buffalo buffalo, buffalo Buffalo buffalo." This clause is also a reduced relative clause, so the word that, which could appear between the second and third words of the sentence, is omitted. Thus, the parsed sentence reads as a claim that bison who are intimidated or bullied by bison are themselves intimidating or bullying bison (at least in the city of Buffalo – implicitly, Buffalo, NY): - Buffalo buffalo (the animals called "buffalo" from the city of Buffalo) [that] Buffalo buffalo buffalo (that the animals from the city bully) buffalo Buffalo buffalo (are bullying these animals from that city). - [Those] buffalo(es) from Buffalo [that are intimidated by] buffalo(es) from Buffalo intimidate buffalo(es) from Buffalo. - Bison from Buffalo, New York, who are intimidated by other bison in their community, also happen to intimidate other bison in their community. - The buffalo from Buffalo who are buffaloed by buffalo from Buffalo, buffalo (verb) other buffalo from Buffalo. - Buffalo buffalo (main clause subject) [that] Buffalo buffalo (subordinate clause subject) buffalo (subordinate clause verb) buffalo (main clause verb) Buffalo buffalo (main clause direct object). - [Buffalo from Buffalo] that [buffalo from Buffalo] buffalo, also buffalo [buffalo from Buffalo]. Thomas Tymoczko has pointed out that there is nothing special about eight "buffalos"; any sentence consisting solely of the word "buffalo" repeated any number of times is grammatically correct. The shortest is "Buffalo!", which can be taken as a verbal imperative instruction to bully someone ("[You] buffalo!") with the implied subject "you" removed (pp. 99-100, 104), or as a noun exclamation, expressing e.g.
that a buffalo has been sighted, or as an adjectival exclamation, e.g. as a response to the question, "where are you from?" Tymoczko uses the sentence as an example illustrating rewrite rules in linguistics (pp. 104-105); a toy executable grammar in that spirit is sketched after the references below. The idea that one can construct a grammatically correct sentence consisting of nothing but repetitions of "buffalo" was independently discovered several times in the 20th century. The earliest known written example, "Buffalo buffalo buffalo buffalo", appears in the original manuscript for Dmitri Borgmann's 1965 book Language on Vacation, though the chapter containing it was omitted from the published version. Borgmann recycled some of the material from this chapter, including the "buffalo" sentence, in his 1967 book, Beyond Language: Adventures in Word and Thought (p. 290). In 1972, William J. Rapaport, now a professor at the University at Buffalo but then a graduate student at Indiana University, came up with versions containing five and ten instances of "buffalo". He later used both versions in his teaching, and in 1992 posted them to the LINGUIST List. A sentence with eight consecutive "buffalo"s is featured in Steven Pinker's 1994 book The Language Instinct as an example of a sentence that is "seemingly nonsensical" but grammatical. Pinker names his student, Annie Senghas, as the inventor of the sentence (p. 210). Neither Rapaport, Pinker, nor Senghas were initially aware of the earlier coinages. Pinker learned of Rapaport's earlier example only in 1994, and Rapaport was not informed of Borgmann's sentence until 2006. Even Borgmann's example may not be the oldest: computational linguist Robert C. Berwick, who used a five-"buffalo" version in a 1987 book (p. 100), claims he had heard the sentence as a child ("before 1972, to be sure") and had assumed it was part of common parlance. Versions of the linguistic oddity can be constructed with other words which similarly simultaneously serve as collective noun, adjective, and verb, some of which need no capitalization (such as "police"). - List of linguistic example sentences - Colorless green ideas sleep furiously - Eats, Shoots & Leaves - James while John had had had had had had had had had had had a better effect on the teacher - Lion-Eating Poet in the Stone Den - That that is is that that is not is not is that it it is - Semantic satiation - Neko no ko koneko, shishi no ko kojishi - Higgins, Chris (11 March 2008). "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo". Mental Floss. - Thomas Tymoczko; James M. Henle (2000). Sweet reason: a field guide to modern logic (2 ed.). Birkhäuser. ISBN 978-0-387-98930-3. - Eckler, Jr., A. Ross (November 2005). "The Borgmann Apocrypha". Word Ways: The Journal of Recreational Linguistics. 38 (4): 258–260. - Borgmann, Dmitri A. (1967). Beyond Language: Adventures in Word and Thought. New York, NY, USA: Charles Scribner's Sons. OCLC 655067975. - Rapaport, William J. (5 October 2012). "A History of the Sentence 'Buffalo buffalo buffalo Buffalo buffalo.'". University at Buffalo Computer Science and Engineering. Retrieved 7 December 2014. - Rapaport, William J. (19 February 1992). "Message 1: Re: 3.154 Parsing Challenges". LINGUIST List. Retrieved 14 September 2006. - Pinker, Steven (1994). The Language Instinct: How the Mind Creates Language. New York, NY, USA: William Morrow and Company, Inc. - Barton, G. Edward, Jr.; Berwick, Robert C.; Ristad, Eric Sven (1987). Computational Complexity and Natural Language. Cambridge, MA, USA: MIT Press. - Gärtner, Hans-Martin (2002).
- Buffaloing buffalo at Language Log, 20 January 2005
- Easdown, David. "Teaching mathematics: The gulf between semantics (meaning) and syntax (form)" (PDF).
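The reduced relative clause parse given earlier can also be checked mechanically, which is close in spirit to the rewrite-rule illustration attributed to Tymoczko. Below is a minimal sketch assuming the NLTK library is available; the toy grammar and its category names (S, NP, RC, AdjN) are illustrative choices, not taken from any source cited above.

```python
# A toy grammar for the eight-word sentence: 'Buffalo' (the city, as a noun
# adjunct), 'buffalo' (the animal) and 'buffalo' (the verb) are separate
# terminals, and RC is the reduced relative clause with "that" omitted.
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> AdjN | AdjN RC
RC -> NP V
VP -> V NP
AdjN -> Adj N
Adj -> 'Buffalo'
N -> 'buffalo'
V -> 'buffalo'
""")

sentence = "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo".split()
for tree in nltk.ChartParser(grammar).parse(sentence):
    tree.pretty_print()
```

With this deliberately small grammar exactly one tree comes back, matching the bracketing given above; a richer grammar (for example, one that also lets a bare plural noun stand alone as a subject) is what reintroduces the ambiguity the article mentions.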
Interpreting the Graph of a Function
Videos to help Algebra I students learn how to create tables and graphs of functions and interpret key features including intercepts, increasing and decreasing intervals, and positive and negative intervals. New York State Common Core Math Module 3, Algebra I, Lesson 13.
Lesson 13 Exit Ticket Sample Solutions
1. Estimate the time intervals when mean energy use is decreasing on an average summer day. Why would power usage be decreasing during those time intervals?
2. The hot summer day energy use changes from decreasing to increasing and from increasing to decreasing more frequently than it does on an average summer day. Why do you think this occurs?
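As a rough illustration of what "estimating decreasing intervals" means when a graph is read off as sampled points, here is a small Python sketch; the hourly values are made-up placeholders, not the data behind the lesson's graph.

```python
# Hypothetical hourly readings of mean energy use on a summer day.
hours = list(range(24))                       # 0 = midnight ... 23 = 11 p.m.
usage = [5, 4, 4, 3, 3, 4, 6, 8, 9, 9, 10, 11,
         12, 12, 11, 10, 10, 11, 12, 11, 9, 8, 7, 6]   # kW (made-up values)

decreasing = []
for t in range(len(hours) - 1):
    if usage[t + 1] < usage[t]:               # the value drops over this hour
        decreasing.append((hours[t], hours[t + 1]))

print("Approximate decreasing intervals:", decreasing)
```

Consecutive hours in which the value drops can then be merged into the longer decreasing intervals a student would report from the graph.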
Neurogenetics studies the role of genetics in the development and function of the nervous system. It considers neural characteristics as phenotypes (i.e. manifestations, measurable or not, of the genetic make-up of an individual), and is mainly based on the observation that the nervous systems of individuals, even of those belonging to the same species, may not be identical. As the name implies, it draws aspects from both the studies of neuroscience and genetics, focusing in particular on how the genetic code an organism carries affects its expressed traits. Mutations in this genetic sequence can have a wide range of effects on the quality of life of the individual. Neurological diseases, behavior and personality are all studied in the context of neurogenetics. The field of neurogenetics emerged in the mid-to-late twentieth century, with advances closely following improvements in available technology. Neurogenetics is currently at the center of much research utilizing cutting-edge techniques.
The field of neurogenetics emerged from advances made in molecular biology, genetics and a desire to understand the link between genes, behavior, the brain, and neurological disorders and diseases. The field started to expand in the 1960s through the research of Seymour Benzer, considered by some to be the father of neurogenetics. His pioneering work with Drosophila helped to elucidate the link between circadian rhythms and genes, which led to further investigations into other behavior traits. He also started conducting research in neurodegeneration in fruit flies in an attempt to discover ways to suppress neurological diseases in humans. Many of the techniques he used and conclusions he drew would drive the field forward. Early analysis relied on statistical interpretation through processes such as LOD (logarithm of odds) scores of pedigrees and other observational methods such as affected sib-pairs, which looks at phenotype and IBD (identity by descent) configuration. Many of the disorders studied early on, including Alzheimer's, Huntington's and amyotrophic lateral sclerosis (ALS), are still at the center of much research to this day. By the late 1980s new advances in genetics such as recombinant DNA technology and reverse genetics allowed for the broader use of DNA polymorphisms to test for linkage between DNA and gene defects. This process is sometimes referred to as linkage analysis. By the 1990s ever-advancing technology had made genetic analysis more feasible and available. This decade saw a marked increase in identifying the specific role genes played in relation to neurological disorders. Advances were made in (but not limited to) Fragile X syndrome, Alzheimer's, Parkinson's, epilepsy and ALS. While the genetic basis of simple diseases and disorders has been accurately pinpointed, the genetics behind more complex neurological disorders is still a source of ongoing research. New developments such as genome-wide association studies (GWAS) have brought vast new resources within grasp. With this new information, genetic variability within the human population and possibly linked diseases can be more readily discerned.
Neurodegenerative diseases are a more common subset of neurological disorders, with examples being Alzheimer's disease and Parkinson's disease.
Currently no viable treatments exist that actually reverse the progression of neurodegenerative diseases; however, neurogenetics is emerging as one field that might yield a causative connection. The discovery of linkages could then lead to therapeutic drugs, which could reverse brain degeneration. One of the most noticeable results of further research into neurogenetics is a greater knowledge of gene loci that show linkage to neurological diseases. The table below represents a sampling of specific gene locations identified to play a role in selected neurological diseases based on prevalence in the United States.
| Gene Loci | Neurological Disease |
| APOE ε4, PICALM | Alzheimer's Disease |
| DR15, DQ6 | Multiple Sclerosis |
| LRRK2, PARK2, PARK7 | Parkinson's Disease |
Methods of research
Logarithm of odds (LOD) is a statistical technique used to estimate the probability of gene linkage between traits. LOD is often used in conjunction with pedigrees, maps of a family's genetic make-up, in order to yield more accurate estimations. A key benefit of this technique is its ability to give reliable results in both large and small sample sizes, which is a marked advantage in laboratory research.
Quantitative trait loci (QTL) mapping is another statistical method used to determine the chromosomal positions of a set of genes responsible for a given trait. By identifying specific genetic markers for the genes of interest in a recombinant inbred strain, the amount of interaction between these genes and their relation to the observed phenotype can be determined through complex statistical analysis. In a neurogenetics laboratory, the phenotype of a model organism is observed by assessing the morphology of its brain through thin slices. QTL mapping can also be carried out in humans, though brain morphologies are examined using magnetic resonance imaging (MRI) rather than brain slices. Human beings pose a greater challenge for QTL analysis because the genetic population cannot be as carefully controlled as that of an inbred recombinant population, which can result in sources of statistical error.
Recombinant DNA is an important method of research in many fields, including neurogenetics. It is used to make alterations to an organism's genome, usually causing it to over- or under-express a certain gene of interest, or express a mutated form of it. The results of these experiments can provide information on that gene's role in the organism's body, and its importance in survival and fitness. The host organisms are then screened with the aid of a toxic drug to which the selectable marker is resistant. The use of recombinant DNA is an example of reverse genetics, where researchers create a mutant genotype and analyze the resulting phenotype. In forward genetics, an organism with a particular phenotype is identified first, and its genotype is then analyzed.
Model organisms are an important tool in many areas of research, including the field of neurogenetics. By studying creatures with simpler nervous systems and with smaller genomes, scientists can better understand their biological processes and apply them to more complex organisms, such as humans. Because they are low-maintenance and have well-mapped genomes, mice, Drosophila, and C. elegans are very common. Zebrafish and prairie voles have also become more common, especially in the social and behavioral scopes of neurogenetics.
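To make the LOD idea from the "Methods of research" section concrete, here is a minimal Python sketch of the textbook two-point calculation, where r recombinants are observed among n informative meioses; the counts and the tested recombination fraction are illustrative assumptions, not figures from any cited study.

```python
# Minimal two-point LOD sketch: likelihood of the data at recombination
# fraction theta, compared with the no-linkage value theta = 0.5.
import math

def lod_score(r, n, theta):
    """log10 likelihood ratio for linkage at recombination fraction theta."""
    likelihood_linked = (theta ** r) * ((1 - theta) ** (n - r))
    likelihood_unlinked = 0.5 ** n
    return math.log10(likelihood_linked / likelihood_unlinked)

# Illustrative counts: 2 recombinants among 20 informative meioses, theta = 0.1
print(round(lod_score(r=2, n=20, theta=0.1), 2))   # prints 3.2
```

A LOD score above 3, corresponding to odds of roughly 1000:1 in favor of linkage, is the conventional threshold for declaring linkage.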
In addition to examining how genetic mutations affect the actual structure of the brain, researchers in neurogenetics also examine how these mutations affect cognition and behavior. One method of examining this involves purposely engineering model organisms with mutations in certain genes of interest. These animals are then trained, using conditioning paradigms, to perform certain types of tasks, such as pulling a lever in order to gain a reward. The speed of their learning, the retention of the learned behavior, and other factors are then compared to the results of healthy organisms to determine what kind of an effect – if any – the mutation has had on these higher processes. The results of this research can help identify genes that may be associated with conditions involving cognitive and learning deficiencies.
Many research facilities seek out volunteers with certain conditions or illnesses to participate in studies. Model organisms, while important, cannot completely model the complexity of the human body, making volunteers a key part of the progression of research. Along with gathering some basic information about medical history and the extent of their symptoms, samples are taken from the participants, including blood, cerebrospinal fluid, and/or muscle tissue. These tissue samples are then genetically sequenced, and the genomes are added to current database collections. The growth of these databases will eventually allow researchers to better understand the genetic nuances of these conditions and bring therapy treatments closer to reality. Current areas of interest in this field have a wide range, spanning from the maintenance of circadian rhythms and the progression of neurodegenerative disorders to the persistence of periodic disorders and the effects of mitochondrial decay on metabolism. Advances in molecular biology techniques and the species-wide genome project have made it possible to map out an individual's entire genome.
Whether genetic or environmental factors are primarily responsible for an individual's personality has long been a topic of debate. Thanks to the advances being made in the field of neurogenetics, researchers have begun to tackle this question by mapping out genes and correlating them with different personality traits. There is little to no evidence to suggest that the presence of a single gene indicates that an individual will express one style of behavior over another; rather, having a specific gene could make one more predisposed to displaying this type of behavior. It is starting to become clear that most genetically influenced behaviors are due to the effects of multiple genes, in addition to other neurological regulating factors like neurotransmitter levels. Aggression, for example, has been linked to at least 16 different genes, many of which have been shown to have different influences on levels of serotonin and dopamine, neurotransmitter density, and other aspects of brain structure and chemistry. Similar results have been found in studies of impulsivity and alcoholism. Because many behavioral characteristics have been conserved across species for generations, researchers are able to use animal subjects such as mice and rats, but also fruit flies, worms, and zebrafish, to try to determine specific genes that correlate to behavior and attempt to match these with human genes.
Cross-species gene conservation While it is true that variation between species can appear to be pronounced, at their most basic they share many similar behavior traits which are necessary for survival. Such traits include mating, aggression, foraging, social behavior and sleep patterns. This conservation of behavior across species has led biologists to hypothesize that these traits could possibly have similar, if not the same, genetic causes and pathways. Studies conducted on the genomes of a plethora of organisms have revealed that many organisms have homologous genes, meaning that some genetic material has been conserved between species. If these organisms shared a common evolutionary ancestor, then this might imply that aspects of behavior can be inherited from previous generations, lending support to the genetic causes – as opposed to the environmental causes - of behavior. Variations in personalities and behavioral traits seen amongst individuals of the same species could be explained by differing levels of expression of these genes and their corresponding proteins. Impulsivity is the inclination of an individual to initiate behavior without adequate forethought. An individual with high impulsivity will be more likely to act in ways that are not generally beneficial or are outside the normal range of action one would expect to see. Through the use of such techniques as fMRI and PET scans, differences in impulsivity have been seen to be directly influenced by a right lateralized neural circuit. In addition, impulsivity levels have been linked to brain density levels, specifically the density of white and grey matter and levels of myelination. This suggests that there are specific areas of the brain that play a direct role in the regulation of behavior. This indicates a possible genetic correlation since all human brains have the same general anatomical make up. A 2008 study found a significant correlation between gene expression and brain structure in both model organisms and humans. The levels of expression of dopamine and serotonin in particular have been found to be very influential on brain structure. DAT and DRD4 genes, both of which code for proteins that contribute to the density of the prefrontal gray matter, also have been found to be especially significant. Individuals with ADHD, specifically those with a DRD 4/4 genotype, were found to have smaller prefrontal gray matter volume than those without the 4/4 genotype, indicating that their level of impulse control would be lower than normal. There are many other genes that can contribute to either brain density or its composition, and further studies are being conducted to determine the significance of each. Higher cognitive function Similarly to impulsivity, varying levels of cognition have been linked to many different genes, several of which are related to dopamine genes expression in frontostriatal circuitry. These genes have been seen to play a role in higher cognitive functions such as learning and motivation, possibly by acting on the reward system in the dopamine pathway. It has been shown that these factors, along with many others not related to dopamine, such as CHRM2, are highly heritable. While many executive functions can be learned through experience and environmental factors, individuals with these specific genes, particularly those with high expression levels, were shown to possess higher cognitive function than those without them. 
One possible explanation for this is that these genes act as a high motivational factor, making these individuals more likely either to develop better cognitive function naturally or to participate in activities that result in higher cognitive function by means of experience. Much of this motivation may arise from reward-based learning. In this type of learning, a particular outcome is more positive than anticipated, resulting in a higher level of dopamine being released in the brain. Dopamine release was for a long time thought to result in a feeling of pleasure, causing an increase in this behavior. However, recent advances in our understanding of reward prediction and learning are leading researchers to view dopamine simply as a reward-error signal, rather than being responsible for inducing the feeling of pleasure. Over time this reward-seeking behavior will increase synaptic plasticity, resulting in an increase in neuronal connections and faster response times.
There is also research being conducted on how an individual's genes can cause varying levels of aggression and aggression control. Throughout the animal kingdom, varying styles, types and levels of aggression can be observed, leading scientists to believe that there might be a genetic contribution that has conserved this particular behavioral trait. For some species, varying levels of aggression have indeed exhibited a direct correlation with a higher level of Darwinian fitness. The effect that serotonin (5-HT) and its associated genes, proteins and enzymes have on aggression is currently the focus of studies. This pathway has been linked to aggression through its influences on early brain development and morphology, as well as directly regulating an individual's level of impulsive aggression. One enzyme that researchers believe plays a direct role in aggression control is MAO, which is partially responsible for the degradation of serotonin. The genes and proteins for the 5-HT receptor, as well as for the 5-HT transporter SERT, also have a direct effect on the level of aggression seen in test subjects. The upregulation of a specific 5-HT receptor, 5-HT1A, and the downregulation of SERT both contribute to lowering an individual's level of aggression. While studies have been conducted on humans, such as Han Brunner's experiment with a MAO-A deficient Dutch family, which first hinted at the possible linkage between MAO-A and aggression and was later confirmed by Isabelle Seif's mouse experiment, much of the current research is being conducted on zebrafish to identify the underlying genetic and morphological aspects that lead to aggression as well as many other behavioral traits.
The study of alcoholism and the neurogenetic factors that increase one's susceptibility to it is a budding area of research. A multitude of genes associated with the condition have been found which can act as indicators of an individual's predisposition to alcoholism. Polymorphisms in ALDH2 and ADH1B cause these two enzymes to function improperly, making it difficult to digest alcohol. These variants have been found to be strong indicators of alcoholism, along with the presence of GABRA2, a gene which codes for a specific GABA receptor. How GABRA2 leads to alcohol dependence is still unclear, but it is thought to interact negatively with alcohol, altering the behavioral effect and resulting in dependency.
In general, these genes code for receptor or digestive proteins, and while having these particular genes does indicate a predisposition towards alcoholism, it is not a definitive determining factor. Like all behavioral traits, genes alone do not determine an individual's personality or behavior, for the influence of the environment is just as important.
A great deal of research has been done on the effects of genes on the formation of the brain and the central nervous system. Many genes and proteins contribute to the formation and development of the CNS. Of particular importance are those that code for BMPs, BMP inhibitors and SHH. When expressed during early development, BMPs are responsible for the differentiation of epidermal cells from the ventral ectoderm. Inhibitors of BMPs, such as NOG and CHRD, promote differentiation of ectoderm cells into prospective neural tissue on the dorsal side. If any of these genes are improperly regulated, then proper formation and differentiation will not occur. BMP also plays a very important role in the patterning that occurs after the formation of the neural tube. Due to the graded response the cells of the neural tube have to BMP and Shh signaling, these pathways are in competition to determine the fate of preneural cells. BMP promotes dorsal differentiation of pre-neural cells into sensory neurons and Shh promotes ventral differentiation into motor neurons. Many other genes help to determine neural fate and proper development, including RELN, SOX9, WNT, Notch and Delta coding genes, HOX, and various cadherin coding genes like CDH1 and CDH2.
Some recent research has shown that the level of gene expression changes drastically in the brain at different periods throughout the life cycle. For example, during prenatal development the amount of mRNA in the brain (an indicator of gene expression) is exceptionally high, and drops to a significantly lower level not long after birth. The only other point of the life cycle during which expression is this high is the mid- to late-life period, between 50 and 70 years of age. While the increased expression during the prenatal period can be explained by the rapid growth and formation of the brain tissue, the reason behind the surge of late-life expression remains a topic of ongoing research.
Neurogenetics is a field that is rapidly expanding and growing. The current areas of research are very diverse in their focuses. One area deals with molecular processes and the function of certain proteins, often in conjunction with cell signaling and neurotransmitter release, cell development and repair, or neuronal plasticity. Behavioral and cognitive areas of research continue to expand in an effort to pinpoint contributing genetic factors. As a result of the expanding neurogenetics field, a better understanding of specific neurological disorders and phenotypes has arisen with direct correlation to genetic mutations. With severe disorders such as epilepsy, brain malformations, or mental retardation, a single gene or causative condition has been identified 60% of the time; however, the milder the intellectual handicap, the lower the chance that a specific genetic cause has been pinpointed. Autism, for example, is linked to a specific mutated gene only about 15–20% of the time, while the mildest forms of mental handicap are accounted for genetically less than 5% of the time.
Research in neurogenetics has yielded some promising results, though, in that mutations at specific gene loci have been linked to harmful phenotypes and their resulting disorders. For instance a frameshift mutation or a missense mutation at the DCX gene location causes a neuronal migration defect also known as lissencephaly. Another example is the ROBO3 gene where a mutation alters axon length negatively impacting neuronal connections. Horizontal gaze palsy with progressive scoliosis (HGPPS) accompanies a mutation here. These are just a few examples of what current research in the field of neurogenetics has achieved. - Cognitive genomics - Genes, Brain and Behavior - International Behavioural and Neural Genetics Society - Journal of Neurogenetics - "Olympians of Science: A Display of Medals and Awards". California Institute of Technology. Retrieved 5 December 2011. - "Neurogenetics Pioneer Seymour Benzer Dies". California Institute of Technology. Retrieved 5 December 2011. - Gershon, Elliot S.; Lynn R. Goldin (1987). "The outlook for linkage research in psychiatric disorders". J. Psychiat Res 21 (4): 541–550. doi:10.1016/0022-3956(87)90103-8. PMID 3326940. - Tanzi, R.E. (Oct 1991). "Genetic linkage studies of human neurodegenerative disorders". Curr Opin Neurobiol 1 (3): 455–461. doi:10.1016/0959-4388(91)90069-J. PMID 1840379. - Greenstein, P; T.D. Bird (Sep 1994). "Neurogenetics. Triumphs and challenges". West J. Med 161 (3): 242–245. PMC 1011404. PMID 7975561. - Tandon, P.N. (Sep 2000). "The decade of the brain: a brief review". Neurol India 48 (3): 199–207. PMID 11025621. - Simon-Sanchez, J; A. Singleton (2008). "Genome-wide association studies in neurological disorders". Lancet Neurol 7 (11): 1067–1072. doi:10.1016/S1474-4422(08)70241-2. PMC 2824165. PMID 18940696. - Kumar, A; Cookson MR (June 2011). "Role of LRRK2 kinase dysfunction in Parkinson disease". Expert Rev Mol Med 13 (20): e20. doi:10.1017/S146239941100192X. PMID 21676337. - "Parkinson disease". NIH. Retrieved 6 December 2011. - "Alzheimer's Disease Genetics Fact Sheet". NIH. Retrieved 6 December 2011. - "Multiple Sclerosis". NIH. - "Huntington Disease". NIH. Retrieved 6 December 2011. - N E Morton (1996). Logarithm of odds (lods) for linkage on complex inheritance - Helms, Ted (2000) Logarithm of Odds in Advanced Genetics. - R. W. Williams (1998) Neuroscience Meets Quantitative Genetics: Using Morphometric Data to Map Genes that Modulate CNS Architecture. - Bartley, AJ; Jones, DW; Weinberger, DR (1997). "Genetic variability of human brain size and cortical gyral patterns" (PDF). Brain 120 (2): 257–269. doi:10.1093/brain/120.2.257. - Kuure-Kinsey, Matthew; McCooey, Beth (2006). The Basics of Recombinant DNA. - Ambrose, Victor (2011). Reverse Genetics. - Pfeiffer, Barret D, et al. (2008) Tools for neuroanatomy and neurogenetics in Drosophila. - Rand, James B, Duerr, Janet S, Frisby, Dennis L (2000) Neurogenetics of vesicular transporters in C. elegans. - Burgess, Harold A, Granato, Michael (2008) The neurogenetic frontier – lessons from misbehaving zebrafish. - McGraw, Lisa A, Young, Larry J (2009) The prairie vole: and emerging model organism for understanding the social brain. - Neurogenetics and Behavior Center. Johns Hopkins U, 2011. Web. 29 Oct. 2011. - Fu, Ying-Hui, and Louis Ptacek, dirs. "Research Projects." Fu and Ptacek's Laboratories of Neurogenetics. U of California, San Francisco, n.d. Web. 29 Oct. 2011.<http://neugenes.org/index.htm>. - "Testing Services." Medical Neurogenetics. N.p., 2010. Web. 29 Oct. 
2011.<http://www.medicalneurogenetics.com>. - Congdon, Eliza; Canli, Turhan (2008). "A Neurogenetic Approach to Impulsivity". The Journal of Personality (Print) 76 (6): 1447–84. - Kimura, Mitsuru; Higuchi, Susumu (2011). "Genetics of Alcohol Dependence". Psychiatry and clinical neurosciences (Print) 65 (3): 213–25. doi:10.1111/j.1440-1819.2011.02190.x. PMID 21507127. - Popova, Nina K. (2006). "From Genes to Aggressive Behavior: The Role of Serotonergic System". BioEssays 28 (5): 495–503. doi:10.1002/bies.20412. PMID 16615082. - Reaume, Christopher J.; Sokolowski, Marla B. (2011). "Conservation of Gene Function in Behavior". Philosophical Transactions of the Royal Society B 366 (1574): 2100–2110. doi:10.1098/rstb.2011.0028. - Congdon, Eliza (2008). The neurogenetic basis on behavioral inhibition. (Print) 69 (12): 127. - Gosso, MF; Van BElzen, M.; De Geus, E. J. C.; Polderman, J. C.; Heutink, P.; Boomsma, D. I.; Posthuma, D. (2006). "Association between the CHRM2 gene and intelligence in a sample of 304 Dutch families". Genes, Brain and Behavior 5 (8): 577–584. doi:10.1111/j.1601-183X.2006.00211.x. PMID 17081262. - Waelti P, Dickinson A, Schultz W Dopamine responses comply with basic assumptions of formal learning theory. Nature. 2001 Jul 5;412(6842):43-8. - Frank, Michael J.; Fossella, John A. (2011). "Neurogenetics and Pharmacology of Learning, Motivation, and Cognition". Neuropsychopharmacology (Print) 36 (1): 133–52. doi:10.1038/npp.2010.96. PMC 3055524. PMID 20631684. - Oliveira, RF; et al, Joana F.; Simões, José M. (2011). "Fighting Zebrafish: Characterization of Aggressive Behavior and Winner-Loser Effects". Zebrafish (Print) 8 (2): 72–81. doi:10.1089/zeb.2011.0690. - Cases, O.; Seif, I; Grimsby, J; Gaspar, P; Chen, K; Pournin, S; Muller, U; Aguet, M et al. (June 1995). "Aggressive behavior and altered amounts of brain serotonin and norepinephrine in mice lacking MAOA". Science 268 (5218): 1763–6. Bibcode:1995Sci...268.1763C. doi:10.1126/science.7792602. PMC 2844866. PMID 7792602. - Alberts et al. (2008). Molecular Biology of the Cell (5th ed.). Garland Science. pp. 1139–1480. ISBN 0-8153-4105-9. - Laura Sanders (2011). Brain gene activity changes through life - Walsh, C; Engle E (2010). "Allelic diversity in human developmental neurogenetics: insights into biology and disease". Neuron 68 (2): 245–53. doi:10.1016/j.neuron.2010.09.042. PMC 3010396. PMID 20955932. - "This Week In the Journal." The Journal of Neuroscience.
Minidoka was one of ten War Relocation Authority (WRA) camps used to carry out the government's system of exclusion and detention of persons of Japanese descent, mandated by Executive Order 9066. The Order, which eliminated the constitutional protections of due process and violated the Bill of Rights, was issued February 19, 1942, following Japan's attack on Pearl Harbor on December 7, 1941. Two-thirds of the 120,000 persons of Japanese descent incarcerated in American concentration camps were American citizens; the incarceration was the culmination of decades of anti-Japanese violence, discrimination and propaganda. Minidoka opened August 10, 1942, detaining persons of Japanese descent removed from Washington, Oregon and Alaska. With a peak population of 9,397, Minidoka was a camp of typical size. It operated like a city, with all of the pieces and parts necessary for the inhabitants to exist. Minidoka closed its gates on October 28, 1945.
Stereograms Reveal Other Dimensions of Reality
Stereograms are multi-dimensional, computer-generated, graphic images that contain hidden content (images and text). The hidden content can only be seen when viewed from the proper visual and mental perspective. Stereograms contain multiple levels of reality. The surface level usually contains a variety of colors and patterns that make stereograms appear chaotic and disorganized. Once we penetrate into the deeper dimensions of the hidden content, we discover the real meaning of each stereogram. The dimensional content within a stereogram is its essence. Because stereograms exist on multiple levels, we can use them to learn to discern the hidden dimensions of the ordinary world, moving into a Higher Reality in which we have our true being.
The Historical Background of Stereograms
In 1838, Charles Wheatstone, a British inventor, discovered stereo vision (binocular vision), which led him to construct a stereoscope based on a combination of prisms and mirrors to allow a person to see 3D images from two 2D pictures (stereograms). A stereoscope required a special kind of dual photograph or stereo-pair painting. A number of artists, including Salvador Dali, created exceptional stereo-pair paintings for the stereoscope. Around 1849-1850, David Brewster, a Scottish scientist, improved the Wheatstone stereoscope by using lenses instead of mirrors, thus reducing the size of the contraption. Brewster also noticed that staring at repeated patterns in wallpapers could trick the brain into matching pairs of them and thus cause the brain to perceive a virtual plane behind the walls. This is the basis of single-image wallpaper stereograms. In 1959, Dr. Bela Julesz, a vision scientist, psychologist, and MacArthur Fellow, discovered the random-dot stereogram while working at Bell Laboratories on recognizing camouflaged objects from aerial pictures taken by spy planes. Dr. Julesz also invented the autostereogram as a definitive test of binocular depth perception. At the time, many vision scientists still thought that depth perception occurred in the eye itself, whereas now it is known that it is a complex neurological process. Julesz used a computer to create a stereo pair of random-dot images which, when viewed under a stereoscope, caused the brain to see 3D shapes. In 1979, Dr. Christopher Tyler, a student of Julesz and a visual psychophysicist, combined the theories behind single-image wallpaper stereograms and random-dot stereograms to create the first random-dot autostereogram (also known as single-image random-dot stereogram), which allowed the brain to see 3D shapes from a single 2D image without the aid of optical equipment.
Practicing to Gain the Skill of "Seeing"
If you are dominant in either your left (logical) or right (visual) brain hemisphere, you will need to practice to see the dimensional content in a stereogram. By learning to synchronize your two brain hemispheres, you can develop the skill to see the multidimensional content. To find the hidden content in a stereogram, you must focus your eyes through the monitor to the distant background. This causes you to switch from "near point" to "far point" vision and will allow the hidden content to come into view.
- Place your head at a distance a bit closer to the monitor than you ordinarily do and look through the screen.
- Do not scan the image for details as you usually do when looking at objects. Continue to look through the image, not at any particular detail.
- Let your eyes go slightly out of focus.
- Now, very slowly move back from the screen and as you do so, the dimensional content will come into view. The object will snap into dimensionality.
- This may take several attempts, so be patient as you practice to master this skill. Anyone can learn this skill, so don't give up. The ability to see dimensional objects is sufficient reward in itself to spend whatever amount of time it takes to master this skill.
There are several ways to practice viewing stereograms:
- Relax your eyes and blur the screen - the way you gaze into emptiness when daydreaming.
- Look at your own reflection in the screen, and then slowly shift your attention to the image on the screen, but without changing the position of your eyes. Try not to focus on the details of the image, but look for an overall impression.
- Put your face close to the screen (if you find that uncomfortable, try it with a printout on paper) and stare right through the monitor. Then slowly move away from the screen (or paper), still staring ahead. Don't focus on the picture until the 3D structure has popped out.
Stereograms possess several extraordinary features:
- They can be printed on paper and the multidimensional content can be seen within the flat surface.
- Stereograms can be expanded vertically and horizontally and still retain their dimensional content.
The Benefits of Viewing Stereograms
If you frequently use a computer, then you are forced into "near point" vision as you look at the monitor screen. When looking at something that is close to you, your eyes must converge on the object and be held in this position. This often requires the eye muscles to keep the same degree of contraction for long periods of time. The result is eye strain or muscle fatigue which is often accompanied by headaches. In order to see the hidden content in a stereogram, you have to use "far point" vision. To do this you must relax the muscles around the eyes as if looking at something in the distance. This relaxation of the muscles can give relief from eye strain. Looking at a stereogram for a few moments several times a day can keep you fresh for work on your computer.
Seeing Other Dimensions in Reality
Trying to assist a person to "see" the dimensional elements in a stereogram is similar to helping a person learn to see the spiritual dimension within ordinary reality. One suggests that the person "expect dimensionality," "look through the surface," and "allow one's mind (and eyes) to go out of ordinary focus" -- all of which may or may not be of any assistance to the person trying to "see" the extra dimension. The fact that there is dimensionality in a flat image helps us to keep in mind that a "flat-seeming" reality also contains Higher Dimensions.
In viewing the stereograms below, place each image in the center of the screen with no other image showing. This image acts as an animated stereogram and an animated background image at the same time. Try to follow one of the moving figures while "seeing" the image as an animated background stereogram. In this document, the animated image is used as a background gif. The video below may help you to see the 3-D effect of the stereogram as it provides an image with two dots which, by refocusing your eyes, you can bring together and produce a third dot. The video below simulates an ordinary stereogram, so that by viewing the changing image you may be able to learn to see into the 3-D effects of stereograms.
The video below changes an ordinary stereogram (which you must be able to see into its dimensions) into an animated stereogram. Wikipedia Article on Autostereograms
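For readers who want to experiment with the random-dot autostereogram idea described in the history section, here is a minimal Python sketch, assuming NumPy is available; the image size, separations and the hidden square are arbitrary illustrative choices, not taken from any source above.

```python
# Minimal random-dot autostereogram sketch (assumes NumPy). The hidden shape
# is a raised square; sizes and separations are arbitrary illustrative values.
import numpy as np

height, width = 200, 300
max_sep, bump = 60, 20                     # background separation and square "height", in pixels

depth = np.zeros((height, width), dtype=int)
depth[70:130, 120:180] = bump              # the hidden object: a raised square

rng = np.random.default_rng(0)
image = rng.integers(0, 2, size=(height, width))   # random black/white dots

for y in range(height):
    for x in range(width):
        sep = max_sep - depth[y, x]        # nearer points repeat at a smaller separation
        if x >= sep:
            image[y, x] = image[y, x - sep]
# Save `image` with any imaging library; viewed with relaxed "far point" eyes,
# the square appears to float in front of the noisy background.
```

Because each pixel is constrained to equal the pixel a depth-dependent distance to its left, the two eyes can fuse the repeating noise into a 3-D shape, which is the principle behind the single-image stereograms discussed above.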
Water: Marine Debris
National Marine Debris Monitoring Program
The National Marine Debris Monitoring Program (NMDMP), conducted by Ocean Conservancy and funded by EPA, was designed to standardize marine debris data collection in the United States using a scientifically valid protocol to determine marine debris status and trends. The study analyzed marine debris from three specific sources: land-based, ocean-based, and general (marine debris that cannot be distinguished as a land-based or ocean-based source). The study was conducted over a five-year period between September 2001 and September 2006. During the five-year period, approximately 600 volunteers conducted the NMDMP surveys across the country. The surveys were conducted at 28-day intervals and covered a 500-meter stretch of beach at each study site. The volunteers collected and recorded the various marine debris items found on NMDMP data cards. Data cards were collected and processed by Ocean Conservancy and entered into the NMDMP database.
The results of the study indicated that there was no significant change in the total amount of marine debris monitored along the coasts of the United States over the five-year period. However, when the data were analyzed specifically by source, the study did show an increase in general source items. General source items included plastic bags, straps, and plastic bottles. The study indicated that land-based sources of marine debris accounted for 49% of the debris surveyed nationally, in comparison to 18% from ocean-based sources and 33% from general sources. The study also found plastic straws, plastic bottles, plastic bags, metal beverage cans, and balloons to be the most abundant types of marine debris littering our coasts.
EPA believes that monitoring is an important tool to address this pervasive pollution problem. NMDMP represents the first significant assessment of marine debris in the United States. NMDMP provides a scientific basis for developing future marine debris prevention efforts, including directly addressing sources of the debris. For more information on the NMDMP please see the Final Program Report: Data Analysis and Summary (PDF, 74 pp, 5.9 MB). Following the completion of the NMDMP, EPA developed a National Marine Debris Monitoring Program – Lessons Learned White Paper (PDF, 28 pp, 1008 KB). The white paper identifies the strengths and weaknesses of the monitoring protocol, explains the best practices and lessons learned, and provides specific recommendations for developing future marine debris monitoring protocols.
A researcher at Rochester Institute of Technology is unraveling a mystery surrounding Easter Island. William Basener, assistant professor of mathematics, has created the first mathematical formula to accurately model the island's monumental societal collapse.
Between 1200 and 1500 A.D., the small, remote island, 2,000 miles off the coast of Chile, was inhabited by over 10,000 people and had a relatively sophisticated and technologically advanced society. During this time, inhabitants used large boats for fishing and navigation, constructed numerous buildings and built many of the large statues, known as Tiki Gods, for which the island is now best known. However, by the late 18th century, when European explorers first discovered the island, the population had dropped to 2,000 and islanders were living in near primitive conditions, with almost all elements of the previous society completely wiped out.
"The reasons behind the Easter Island population crash are complex but do stem from the fact that the inhabitants eventually ran out of finite resources, including food and building materials, causing a massive famine and the collapse of their society," Basener says. "Unfortunately, none of the current mathematical models used to study population development predict this sort of growth and quick decay in human communities."
Population scientists use differential equation models to mimic the development of a society and predict how that population will change over time. Since incidents like Easter Island do not follow the normal progression of most societies, entirely new equations were needed to model the outcome. Computer simulations using Basener's formula predict values very close to the actual archeological findings on Easter Island. His team's results were recently published in SIAM Journal of Applied Math.
Basener will next use his formula to analyze the collapse of the Mayan and Viking populations. He also hopes to modify his work to predict population changes in modern day societies. "It is my hope this research can be used to create a better understanding of past societies," Basener adds. "It will also eventually help scientists and governments develop better population management skills to avert future famines and population collapses." Basener's research was done in collaboration with David Ross, visiting professor of mathematics at the University of Virginia, mathematicians Bernie Brooks, Mike Radin and Tamas Wiandt and a group of RIT mathematics students.
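The press release does not reproduce Basener's equations, but the general shape of such a model can be sketched. The following is a hedged illustration only: a generic coupled population-and-resources system in which people grow logistically against a resource base they harvest faster than it regenerates; the equations and every constant are assumptions for illustration, not the published formula.

```python
# Illustrative only: P is people, R is the island's resource stock.
# A simple Euler loop integrates the coupled system; all constants are made up.
P, R = 50.0, 70000.0
a, c, K, h = 0.03, 0.01, 70000.0, 0.25   # growth, regrowth, capacity, harvest per person
dt, years = 0.1, 600

peak = (0.0, 0.0)
snapshots = []
for step in range(int(years / dt)):
    dP = a * P * (1 - P / max(R, 1.0))    # growth limited by remaining resources
    dR = c * R * (1 - R / K) - h * P      # slow regrowth minus harvesting
    P = max(P + dP * dt, 0.0)
    R = max(R + dR * dt, 0.0)
    if P > peak[1]:
        peak = (step * dt, P)
    if step % int(100 / dt) == 0:
        snapshots.append((round(step * dt), round(P), round(R)))

print("peak population of about", round(peak[1]), "around year", round(peak[0]))
print(snapshots)   # the population booms, overshoots the resource base, then crashes
```

Under these made-up parameters the simulated population grows for a long stretch and then collapses once the resource stock is exhausted, which is the qualitative "growth and quick decay" pattern the article says standard single-equation models fail to capture.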
University of Phoenix Online
October 01, 2010
Behavioral and Social Learning Approaches to Personality
Psychologists have created a variety of theories to help explain and understand why people act and behave the way they do. Among the most famous of these psychologists are B.F. Skinner and Ivan Pavlov. The two are best known for their work on conditioned reflexes and other aspects of behaviorism. The social learning theory looks at how a person acts when controlled by their environment rather than being influenced by innate forces and conditioned reflexes.
Compare and contrast the behavioral and social learning approaches to personality
The thought behind the behavioral approach is that the environment we are in causes us to react differently, explaining behavior through observation. The behavioral approach is contradicted by the social approach, which believes in learning through the observation of others. The observation of behavioral responses of an individual is often influenced by certain stimuli. A positive stimulus prompts the repetition of the behavior that leads to a favorable outcome. For example, a student who studies hard for a test and receives 100% on the test the first time is likely to repeat the same process in preparation for the next test in hope of receiving another 100% (Friedman & Schustack, 2009). Behaviorists also hypothesize that we are born as "a blank slate," concluding that our behavior is determined by environmental factors rather than genetic or biological predispositions. Some social learning theorists claim that the way that people think, plan, perceive and believe is an important part of learning. These social learning theorists argue that we learn through imitation, modeling, and observation of other people's behavior. If we observe a...
If you've ever watched a whirlpool form in your bathtub or sink while draining the water, then you've witnessed the fundamentals of a tornado at work. A drain's whirlpool, also known as a vortex, forms because of the downdraft that the drain creates in the body of water. The downward flow of the water into the drain begins to rotate, and as the rotation speeds up, a vortex forms.
Why does the water start rotating? There are many explanations, but here's one way to think about it. Imagine yourself as a particle in the water, suddenly pulled toward the suction that the drain creates. At first, you'd find yourself accelerating toward the drain. Then, quite literally, there's a twist. Because of your previous momentum and the number of other particles rushing toward the drain at the same time, chances are that you're going to be pushed off to one side of the point of suction when you arrive. That deflection sets you on a spiraling path into the point of suction, like a moth spiraling in toward a light. Once the spiral has started in one direction, it tends to influence all the other particles as they arrive. A very strong spiraling tendency is created. Eventually, there's enough spiraling energy to create a vortex.
Vortices are obviously a common phenomenon. After all, you see them in tubs and sinks all the time. Small dust devils sometimes form when winds flow over hot deserts, and wildfires have been known to produce climbing vortices of flame and ash called fire whirls. Scientists have even observed dust devils on Mars and spotted solar tornadoes whipping out from the sun.
In a tornado, the same sort of thing happens as with our bathtub example, except with air instead of water. A great deal of the Earth's wind patterns are dictated by low-pressure centers, which draw in cooler, high-pressure air from the surrounding area. This airflow pushes the low-pressure air up to higher altitudes, but then the air heats up and is pushed upward as well by all the air behind it. The air pressure inside a tornado is as much as 10 percent lower than that of the surrounding air, causing the surrounding air to rush in even faster.
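The "pulled inward, deflected to one side, then spiraling" picture above can be played with numerically. Here is a minimal Python sketch of a particle carried by an idealized two-dimensional sink-plus-swirl flow; the strengths, starting point and step size are arbitrary illustrative values, not measurements of any drain or tornado.

```python
# Idealized 2-D flow: a sink at the origin (the drain) plus a swirl term.
# The particle's velocity is inward plus a sideways component, so the path
# it traces is a spiral into the center rather than a straight line.
q, gamma = 1.0, 2.0        # sink strength and swirl strength (made-up units)
x, y, dt = 5.0, 0.0, 0.01  # start 5 units out, to the side of the drain

path = [(x, y)]
for _ in range(2000):
    r2 = x * x + y * y
    if r2 < 0.04:          # close enough to the center; stop before steps get too coarse
        break
    vx = (-q * x - gamma * y) / r2
    vy = (-q * y + gamma * x) / r2
    x, y = x + vx * dt, y + vy * dt
    path.append((x, y))

print(f"{len(path)} steps, ending near ({x:.2f}, {y:.2f}) after roughly one full turn of spiral")
```

Strengthening the swirl term relative to the inward pull tightens the spiral, which loosely mirrors how a stronger initial rotation produces a more pronounced vortex.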
Figure 1: Shale sample in water - methane gas bubbles
The existence of oil and natural gas in shale formations across the world has long been known by experts in the oil and gas industry. Shale is one of the earth's most common sedimentary rocks. It is a fine-grain rock composed mainly of clay flakes and tiny fragments of other minerals. Shale can be a gas reservoir, but only formations with certain characteristics are viable for exploration. Thermogenic (from the Greek word meaning 'formed by heat') gas forms when organic matter in shale is broken down at high temperatures, often created by burial deep underground. The gas is then reabsorbed by organic material, which traps the gas within the shale. Geologists have understood for decades that shale formations are the source of the oil and natural gas extracted in "conventional" production from sand and carbonate rock formations. However, these energy resources were considered technically impossible to recover from the shale itself because shale formations lack the permeability (interconnected spaces between the rocks) that would allow the oil or natural gas to flow to a well.
How is Shale Gas Recovered?
Horizontal drilling involves drilling a well from the surface downward to a point where the borehole is turned and the well is drilled along a horizontal plane. The figure below illustrates a cross-sectional view of a horizontal well, showing how the well is drilled downward to a point just above the target formation and then drilled horizontally into the shale. A horizontally drilled well exposes a greater area of the shale reservoir, which allows a greater volume of oil or natural gas to migrate into the wellbore. The greater exposure to the shale reservoir provided by a horizontal wellbore is necessary because of the low permeability of a shale formation. To recover this volume of gas in the past, many vertical wells would have been drilled from the surface, requiring the use of more surface land. In order to access the natural gas trapped within the shale, the formation is hydraulically fractured (see our hydraulic stimulation section).
Figure 3: Schematic of horizontal well undergoing hydraulic stimulation
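For a rough sense of scale on the exposure point above, here is a trivial comparison; the formation thickness and lateral length are assumed round numbers, not data from any particular play.

```python
# Illustrative only: wellbore contact with the shale scales with the length of
# hole drilled through the formation, so one long lateral can replace many
# vertical wells. Both lengths below are assumptions.
formation_thickness_m = 50      # a vertical well crosses the shale once (assumed thickness)
lateral_length_m = 1500         # horizontal leg drilled along the shale (assumed)

wells_equivalent = lateral_length_m / formation_thickness_m
print(f"One {lateral_length_m} m lateral contacts as much shale as roughly "
      f"{wells_equivalent:.0f} vertical wells")
```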
The main objective of the research was the template-free creation of silicon nanotubes grown through vapor-liquid-solid (VLS) or solid-liquid-solid (SLS) methods. These methods had been used in the past for the synthesis of silicon nanowires, but the growth of porous and hollow nanostructures had not been possible with them. Here, growth was achieved by using a combination of gold and nickel as the growth catalyst, and the silicon nanotubes were formed through a highly repeatable and relatively simple method. Carbon nanotubes have been produced for many years, but it had not been possible to synthesize silicon nanotubes, even though such nanostructures have a promising future due to the unique properties of silicon. One application of these materials could be the straightforward production of field-effect transistors. Moreover, the nanotubes can be unzipped by the electric current used in electron microscopy, making it simple to produce silicon nanoribbons. Results of the research have direct applications in electronics and bioelectronics. If tubes with very small diameters and appropriate lengths can be produced, it is possible to make field-effect transistors with a floating gate that can be controlled from the inside. In that case, the transistor current changes when a very small charged object, such as a DNA molecule, passes by, and this change can be measured. Given the very small size of such a device, DNA strands could be arranged in parallel to complete sequencing data. Results of the research were published in the ISI-indexed journal Nano Letters, vol. 13, issue 3, 2013, pp. 889-897.
Frogs go through various developmental stages, some of which will determine the sex of the frog. When frogs first hatch, they are tadpoles that eventually develop into the frogs we see hopping around rivers and lakes.
Fertilization of the Egg
A frog egg is much larger than a typical frog cell. The fertilized egg divides into the millions of cells that make up the tadpole, but the overall volume of the organism does not change during this early development. When the sperm first fertilizes the egg, the sperm and egg become fused together. This creates a diploid zygote nucleus.
Cleavage occurs when the egg starts to develop. It starts where the sperm fused with the egg, and a furrow is created that wraps around the egg. It then divides into two cells. This will all happen in a matter of a day. At the end of it all, a blastula will exist: a hollow ball of cells surrounding a cavity that is filled with fluid. As soon as there are at least 4,000 cells in the blastula, the zygote's own genes will begin to be expressed.
Patterning includes another process called gastrulation. During these phases, the tadpole body will begin to form. First the head and tail will take shape, then the back and stomach, and finally the sides will take shape. In gastrulation the zygote's genes dictate what the frog looks like.
During differentiation, the frog embryo develops skin, muscles, blood, tissues, organs and everything else needed to survive. After the organs and body systems have differentiated themselves, the embryo will become a tadpole. Eventually, the tadpole will become a frog and will develop into either a male or a female. This is determined after the frog is in the tadpole stage, not the embryonic stage.
December 10: World Human Rights Day
The World Human Rights Day (WHRD) is observed every year on December 10 to commemorate the adoption of the Universal Declaration of Human Rights. Observance of the day seeks to encourage, support and amplify measures taken by everyone to defend human rights. The theme for this year is 'Let's stand up for equality, justice and human dignity'. This year, Human Rights Day kicks off a year-long campaign to mark the upcoming 70th anniversary of the Universal Declaration of Human Rights. In the run-up to the celebration of Human Rights Day, the National Human Rights Commission (NHRC) organised a series of events in India.
On this day in 1948, the United Nations General Assembly (UNGA) adopted and proclaimed the Universal Declaration of Human Rights at the Palais de Chaillot in Paris. It was adopted as a shared standard yardstick to protect human rights across the globe. It recognizes the inherent dignity and the equal and inalienable rights of mankind as the foundation of justice, freedom and peace in the world. The Human Rights Day was formally established at the 317th Plenary Meeting of the UNGA on 4 December 1950.
Last year, we covered a radically different approach to robotics. Instead of the hard, mechanical skeletons that are features of most robots, a team was inspired by squid, and built a soft, flexible robot that ran on air. By pumping different segments of their robot full of air using a set of pre-programmed commands, the rubbery creation could flex its legs and stride across surfaces, slipping neatly under barriers when needed.
But if the researchers were inspired by cephalopods like the octopus and cuttlefish, then they seemed to also have been a bit jealous of one of these creatures' other abilities: rapidly changing color to match their surroundings or make a warning display. So the team is back with a modified version of their previous robot—one that can change color on demand.
The method for doing this was a straightforward variation on the technique used to propel the robot: a compressor was used to pump material into the robot from an external reservoir. Instead of air, however, the material was a fluid that contained a variety of dyes or fluorescent molecules that gave the robot some color. The fluid went into a separate set of channels from those that propelled the robot, giving the team a great deal of flexibility. This allowed them to create patterns like the stripes shown above, which are probably closer to a zebra's than anything that would typically show up on a cephalopod. But the team also crafted some patterns that were more like a mottled patchwork, which could be more useful for camouflage (as they demonstrated on a backdrop consisting of small rocks).
Although their robot is shaped like a squashed X, it can easily move while carrying an irregular sheet of tubes on top. This let the authors build more elaborate camouflage patterns, such as the one shown below. The whole process is reversible, too, so the robot could be restored to its translucent, colorless form, or have one set of colors replace another.
Although these images use some pretty garish dyes, the system can use pretty much anything that will work in liquid form. It would even work with gasses. For different environments (rocks, leaves, patterned flooring), the robots were fed liquids with appropriate colors. The authors also tested fluorescent dyes, as well as materials that absorbed or reflected infrared light. All of it worked about as well as you'd expect, given that there was nothing more elaborate involved than pumping liquid through a series of tubes.
There are a few obvious limitations to this approach, but they're pretty big ones. The system can only work with whatever dyes it has prepared in advance, so it can't do the huge range of arbitrary colors that many animals can generate (although it could be possible to blend a set of dyes and get a pretty broad range). And, although a potentially infinite set of patterns of dye tubes could be crafted, any robot is stuck with whatever it was made with—you can't just go from a patchwork to stripes when desired.
It's also very slow. Fluid can only move through the tubes at a set speed, and many of them wind across the surface of the robot. Completely filling out a pattern can take dozens of seconds. All of these limitations mean that this is mostly a proof-of-principle, albeit one with a very high coolness factor. The authors suggest that it could provide a useful tool for others—they specifically mention people who study animal camouflage as one group who might find uses for it, as well as seeing some potential in medical simulators.
Still, it's hard to get too cynical about the robot's future prospects. With two major publications in under a year, it's clear the people behind it are busy thinking up new things to do with their platform.
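To get a feel for why filling a pattern takes "dozens of seconds", here is a rough back-of-the-envelope sketch using the Hagen-Poiseuille relation for laminar flow in a narrow tube. Every number in it (channel radius, length, driving pressure, dye viscosity) is an assumption chosen for illustration, not a value reported by the researchers.

```python
# Back-of-the-envelope fill-time estimate using Hagen-Poiseuille flow.
# Every value below is an assumption for illustration, not a reported figure.
import math

radius = 0.25e-3     # channel radius: 0.25 mm (assumed)
length = 2.0         # total winding channel length: 2 m (assumed)
delta_p = 50e3       # driving pressure from the external pump: 50 kPa (assumed)
viscosity = 5e-3     # dye solution a few times more viscous than water (assumed)

flow_rate = math.pi * radius**4 * delta_p / (8 * viscosity * length)   # m^3/s
channel_volume = math.pi * radius**2 * length                          # m^3

fill_time = channel_volume / flow_rate
print(f"fill time ~ {fill_time:.0f} s for ~{channel_volume*1e6:.2f} mL of dye")
```

With these made-up numbers the fill time comes out in the tens of seconds and scales with the square of the channel length, which is one reason long, winding patterns fill so slowly.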
Beamed Energy Propulsion
Beamed Energy Propulsion (BEP) is a revolutionary technology for future space transportation. BEP vehicles are driven by power that can be beamed from a remote, reusable, and long-range source. While the majority of modern BEP techniques are based on lasers, the scope of ISBEP also covers other forms of directed energy, such as microwave and x-ray radiation. In combination with a multitude of propulsive mechanisms such as blast waves, ablation, photon pressure, vaporization, photodesorption, and the like, BEP sits at the forefront of modern physics and engineering. BEP systems provide unique propulsive characteristics which would be impossible to achieve by means of traditional, combustion-based engines. Vehicles driven by BEP will be smaller, lighter, faster, and more efficient than any currently existing means of space transportation. In addition, BEP offers new and often unique technical solutions which are unattainable by traditional means of propulsion. Potential application areas of BEP are found in astronautics, from space-borne micropropulsion via terrestrial launches and laser-based debris removal to interstellar missions, but also in aeronautics and micronautics, i.e., motion in microspace.
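As a small worked example of the photon-pressure mechanism listed above: the thrust from a perfectly reflected beam is F = 2P/c, so even large beam powers yield tiny forces. The 1 MW figure below is arbitrary and purely illustrative.

```python
# Photon pressure on a sail: F = 2P/c for a perfectly reflective surface
# (P/c if the beam is absorbed). The 1 MW figure is purely illustrative.
C = 299_792_458.0   # speed of light, m/s

def photon_thrust(beam_power_w, reflective=True):
    """Force in newtons imparted by a beam of the given power."""
    return (2.0 if reflective else 1.0) * beam_power_w / C

print(f"{photon_thrust(1e6) * 1e3:.1f} mN of thrust from a 1 MW beam")   # ~6.7 mN
```

A few millinewtons per megawatt is why pure photon pressure suits very light sails or long continuous thrust arcs, while mechanisms such as ablation trade that simplicity for far higher thrust per watt of beam power.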
In the wake of another tragedy in the form of two bombs that exploded along the route of the Boston Marathon, parents and education experts are left wondering how to help children understand what happened and how to help them grieve in age-appropriate ways. As they did after the Sandy Hook massacre in Newtown, Connecticut, the National Association of School Psychologists (NASP) released a tip sheet providing guidance for parents on how to talk to their kids about terrorism. Says the NASP:

Acts of violence that hurt innocent people are frightening and upsetting. Children and youth will look to adults for information and guidance on how to react. Parents and school personnel can help children cope first and foremost by establishing a sense of safety and security. As horrible as these events are, children need to know that acts of terrorism are extremely rare in the United States. As information becomes available, adults can continue to help children work through their emotions and help them learn how to cope with other life challenges.

It’s important to remember that whatever parents and teachers say, it is their actions that communicate the most. Therefore, it’s vital to model outward calm and control for children even if you’re not feeling that calm and control yourself. Children’s emotional responses will mirror those of their parents and teachers, so keeping a lid on anxiety is vital. It is also important to reassure children that whatever happened before, they are currently safe, as are the many adults and loved ones in their lives. Focus should also be placed on the bravery of volunteers and emergency response workers who are working to treat the injured and reunite families as quickly as possible.

Remind them that trustworthy people are in charge. Explain that emergency workers, police, firefighters, doctors, and the government are helping people who are hurt and are working to ensure that no further tragedies like this occur.

Let children know that it is okay to feel upset. Explain that all feelings are okay when a tragedy like this occurs. While you do not want to force children to do so, let children talk about their feelings and help put them into perspective. Even anger is okay, but children may need help and patience from adults to assist them in expressing these feelings appropriately.

Tell children the truth. Don’t try to pretend the event has not occurred or that it is not serious. Children are smart. They will be more worried if they think you are too afraid to tell them what is happening. At the same time, however, don’t offer unasked-for details. Let children’s questions be your guide as to how much information to provide.

The NASP advises adults in charge to stick to the facts when explaining the situation to the kids – but that is advice that could just as easily apply to the media. While most outlets took into account the chaotic conditions surrounding the incident, they mainly stuck with information that was confirmed either by eyewitnesses or law enforcement officials. Not every outlet followed the same set of standards. The most egregious example came from the New York Post, which claimed that the number of dead was a dozen or more, even hours after law enforcement officials said that only three people had died as a result of the bombs going off.

Be careful not to stereotype people who might be associated with the violence. Children can easily generalize negative statements and develop prejudice. Talk about tolerance and justice versus vengeance. Stop any bullying or teasing immediately.
Life Cycle of a Chicken (Understanding the World link)
* We used an interactive non-fiction book and video clips to learn about the changes that take place in the life of a chicken.
* We shared our learning in our independent caption writing.
* We hope you like our illustration too!
EASTER EGG TREASURE HUNT
We combined our literacy and numeracy skills in this Easter treasure hunt activity.
- Used positional language to talk about places to hide the plastic eggs: in, on, on top of, under, underneath, between, behind, in front of, next to.
- Wrote clues for our friends to follow to locate the eggs (independent writing skills).
The U.S. government’s policies towards Native Americans in the second half of the nineteenth century were influenced by the desire to expand westward into territories occupied by these Native American tribes. By the 1850s nearly all Native American tribes, roughly 360,000 people, lived to the west of the Mississippi River. These American Indians, some from the Northwestern and Southeastern territories, were confined to Indian Territory, located in present-day Oklahoma, while the Kiowa and Comanche tribes shared the land of the Southern Plains. The Sioux, Crows and Blackfeet dominated the Northern Plains. These Native American groups encountered adversity as the steady flow of European immigrants into northeastern American cities pushed a stream of migrants into the western lands already occupied by these diverse groups of Indians.

The early nineteenth century in the United States was marked by steady expansion to the Mississippi River, but America’s expansion did not end there: the Gadsden Purchase led to U.S. control of the borderlands of southern New Mexico and Arizona, in addition to authority over Oregon Country, Texas and California. Between 1830 and 1860 the United States nearly doubled the amount of territory under its control. These territorial gains coincided with the arrival of waves of European and Asian immigrants who wished to join the surge of American settlers heading west. This, together with the California gold rush of 1849, presented attractive opportunities for those willing to make the long journey westward. Consequently, with the military’s protection and the U.S. government’s assistance, many settlers began building their homesteads in the Great Plains and other parts of the West inhabited by Native American tribes.

Native American policy can be defined as the laws and operations developed and adapted in the United States to outline the relationship between Native American tribes and the federal government. When the United States first became an independent nation, it adopted the European policies towards these native peoples, but over the course of two centuries the U.S. adapted its own widely varying policies regarding the changing perspectives and necessities of Native American supervision. In 1824, in order to administer the U.S. government’s Native American policies, Congress created a new agency within the War Department called the Bureau of Indian Affairs, which worked closely with the U.S. Army to enforce its policies. At times the federal government recognized the Indians as self-governing, independent political communities with varying cultural identities; however, at other times the government attempted to force the Native American tribes to abandon their cultural identity, give up their land and assimilate into American culture.

With the steady flow of settlers into Indian-controlled land, Eastern newspapers published sensationalized stories of cruel native tribes committing massacres of hundreds of white travelers.
Although some settlers lost their lives to American Indian attacks, this was not the norm; in fact, Native American tribes often helped settlers cross the Plains. Not only did the American Indians sell wild game and other supplies to travelers, but they acted as guides and messengers between wagon trains as well. Despite the friendly natures of the American Indians, settlers still feared the possibility of an attack.

To calm these fears, in 1851 the U.S. government held a conference with several local Indian tribes and established the Treaty of Fort Laramie. Under this treaty, each Native American tribe accepted a bounded territory, allowed the government to construct roads and forts in this territory and pledged not to attack settlers; in return the federal government agreed to honor the boundaries of each tribe’s territory and make annual payments to the Indians. The Native American tribes responded peacefully to the treaty; in fact the Cheyenne, Sioux, Crow, Arapaho, Assiniboine, Mandan, Gros Ventre and Arikara tribes who signed the treaty even agreed to end the hostilities amongst their tribes in order to accept its terms.

This peaceful accord between the U.S. government and the Native American tribes did not last long. After hearing tales of fertile land and great mineral wealth in the West, the government soon broke the promises established in the Treaty of Fort Laramie by allowing thousands of non-Indians to flood into the area. With so many newcomers moving west, the federal government established a policy of restricting Native Americans to reservations, small areas of land within a group’s territory reserved exclusively for its use, in order to provide more land for the non-Indian settlers. In a series of new treaties the U.S. government forced Native Americans to give up their land and move to reservations in exchange for protection from attacks by white settlers. In addition, the Indians were given a yearly payment that would include money as well as food, livestock, household goods and farming tools. These reservations were created in an attempt to clear the way for increased U.S. expansion and involvement in the West, as well as to keep the Native Americans separate from the whites in order to reduce the potential for conflict.

These agreements had many problems. Most importantly, many of the native peoples did not completely understand the document they were signing or the conditions within it; moreover, the treaties did not consider the cultural practices of the Native Americans. In addition, the government agencies responsible for administering these policies were plagued by poor management and corruption; in fact, many treaty provisions were never carried out. The U.S. government rarely completed its side of the agreements even when the Native Americans moved quietly to their reservations. Dishonest bureau agents often sold the supplies that were intended for the Indians on reservations to non-Indians. Moreover, as settlers demanded more land in the West, the federal government continually reduced the size of the reservations. By this time, many of the Native American peoples were dissatisfied with the treaties and angered by the settlers’ constant demands for land.
Angered by the government’s dishonest and unfair policies, several Native American groups, including groups of Cheyennes, Arapahos, Comanches and Sioux, fought back. As they fought to protect their lands and their tribes’ survival, more than one thousand skirmishes and battles broke out in the West between 1861 and 1891. In an attempt to force Native Americans onto the reservations and to end the violence, the U.S. government responded to these hostilities with costly military campaigns. Clearly the U.S. government’s Indian policies were in need of a change.

Native American policy changed drastically after the Civil War. Reformers felt that the policy of forcing Native Americans onto reservations was too harsh, while industrialists, who were concerned about land and resources, viewed assimilation, the cultural absorption of the American Indians into “white America,” as the sole long-term method of ensuring Native American survival. In 1871 the federal government passed a pivotal law stating that the United States would no longer treat Native American groups as independent nations. This legislation signaled a drastic shift in the government’s relationship with the native peoples: Congress now deemed the Native Americans not as nations outside of its jurisdictional control, but as wards of the government. By making Native Americans wards of the U.S. government, Congress believed that it would be easier to make the policy of assimilation a widely accepted part of the cultural mainstream of America.

Many U.S. government officials viewed assimilation as the most effective solution to what they deemed “the Indian problem,” and the only long-term method of ensuring U.S. interests in the West and the survival of the American Indians. In order to accomplish this, the government urged Native Americans to move out of their traditional dwellings, move into wooden houses and become farmers. The federal government passed laws that forced Native Americans to abandon their traditional appearance and way of life. Some laws outlawed traditional religious practices while others ordered Indian men to cut their long hair. Agents on more than two-thirds of American Indian reservations established courts to enforce federal regulations that often prohibited traditional cultural and religious practices. To speed the assimilation process, the government established Indian schools that attempted to quickly and forcefully Americanize Indian children. According to the founder of the Carlisle Indian School in Pennsylvania, the schools were created to “kill the Indian and save the man.” In order to accomplish this goal, the schools forced students to speak only English, wear proper American clothing and replace their Indian names with more “American” ones. These new policies brought Native Americans closer to the end of their traditional tribal identity and the beginning of their existence as citizens under the complete control of the U.S. government.

In 1887, Congress passed the General Allotment Act, the most important component of the U.S. government’s assimilation program, which was created to “civilize” American Indians by teaching them to be farmers. In order to accomplish this, Congress wanted to establish private ownership of Indian land by dividing reservations, which were collectively owned, and giving each family its own plot of land.
In addition, by forcing the Native Americans onto small plots of land, western developers and settlers could purchase the remaining land. The General Allotment Act, also known as the Dawes Act, required that the Indian lands be surveyed and each family be given an allotment of between 80 and 160 acres, while unmarried adults received between 40 and 80 acres; the remaining land was to be sold. Congress hoped that the Dawes Act would break up Indian tribes and encourage individual enterprise, while reducing the cost of Indian administration and providing prime land to be sold to white settlers.

The Dawes Act proved to be disastrous for the American Indians; over the next decades they lived under policies that outlawed their traditional way of life but failed to provide the necessary resources to support their businesses and families. Dividing the reservations into smaller parcels of land led to a significant reduction of Indian-owned land. Within thirty years, the tribes had lost over two-thirds of the territory that they had controlled before the Dawes Act was passed in 1887; the majority of the remaining land was sold to white settlers. Frequently, Native Americans were cheated out of their allotments or were forced to sell their land in order to pay bills and feed their families. As a result, the Indians were not “Americanized” and were often unable to become self-supporting farmers and ranchers, as the makers of the policy had wished. It also produced resentment among Indians toward the U.S. government, as the allotment process often destroyed land that was the spiritual and cultural center of their lives.

Between 1850 and 1900, life for Native Americans changed drastically. Through U.S. government policies, American Indians were forced from their homes as their native lands were parceled out. The Plains, which they had previously roamed alone, were now filled with white settlers.

Over these years the Indians had been cheated out of their land, food and way of life, as the federal government’s Indian policies forced them onto reservations and attempted to “Americanize” them. Many American Indian groups did not survive relocation, assimilation and military defeat; by 1890 the Native American population was reduced to fewer than 250,000 people. Due to decades of discriminatory and corrupt policies instituted by the United States government between 1850 and 1900, life for the American Indians was changed forever.
American populism has its origins in the broad-based and fissiparous movement that emerged from the 1850s onwards. It reached its high point in the 1890s with the formation of the People's Party, which challenged the duopoly of the Republicans and Democrats but declined rapidly as a formal movement thereafter. Yet, like an event of nuclear fission, its half-life continues to be felt long after its moment of greatest energy.

The vital centre of the Populist movement was the mid-West, with particular concentrations of activity in Texas, Kansas, and Oklahoma. Though primarily an agrarian phenomenon, its political impact came through forging a farmer-labor alliance. Michael Kazin, in his book The Populist Persuasion, identifies four themes that shaped the original Populist movement and all on-going forms of populism, of which the Tea Party is the latest iteration:

- the first is Americanism (identified as an emphasis on understanding and obeying the will of the people);
- the second, producerism (the conviction that, in contrast to classical and aristocratic conceptions, those who toiled were morally superior to those who lived off the toil of others, and that only those who created wealth in tangible material ways could be trusted to guard the nation's piety and liberties);
- the third, the need to oppose the dominance of privileged elites who are seen to subvert the principles of self-rule and personal liberty through centralizing power and imposing abstract plans on the ways people lived (elites were variously identified as government bureaucrats, intellectuals, high financiers, industrialists or a combination of all four); and
- the fourth, the notion of a movement or crusade engaged in a battle to save the nation and protect the welfare of the common people.

Like many social movements, treatments of populism tend to be refracted through the concerns and sympathies of the historian's own time. For example, Anna Rochester, writing during World War II and herself a Marxist, envisaged the populists as proto-Communists opposing monopoly capitalism but as insufficiently radical - a view shared, incidentally, by Engels in his Letters to Americans. This contrasts with Richard Hofstadter's Age of Reform. Writing in the 1950s in reaction against McCarthyism, Hofstadter argued that the Populists were nostalgic, backward-looking petit bourgeois businessmen who were insecure about their declining status in an industrializing America. Hofstadter claimed that they were provincial, conspiracy-minded, and tended to scapegoat others, a tendency that manifested itself in nativism and anti-Semitism. His analysis is directly echoed in most treatments of the contemporary Tea Party, whose members are often viewed as anxious middle-class activists worried about their status in a changing world order and given to nativism and racism.

While some historians have followed Hofstadter's lead, the consensus seems to be that Populism was neither predominantly socialist nor capitalist but generated a broadly republican critique of the over-concentration of "money power." The critique of monopolistic forms of power was combined with the language of the Methodist camp meetings and Baptist revivals in order to generate a powerful rhetoric with which to challenge the status quo.
This synthesis of economic critique and Christian theology came to be exemplified in the 1896 "Cross of Gold" speech of William Jennings Bryan, the three-time Democratic presidential candidate, with the crescendo that gave the speech its name: "Having behind us the producing masses of this nation and the world, supported by the commercial interests, the laboring interests and the toilers everywhere, we will answer their demand for a gold standard by saying to them: You shall not press down upon the brow of labor this crown of thorns, you shall not crucify mankind upon a cross of gold."

At the same time, consistent with their Jeffersonian vision, the nineteenth-century populists developed the rudiments of a "cooperative commonwealth" consisting of a huge range of autonomous institutions, educational initiatives and mutual associations such as cooperatives in order to address their needs without being dependent on the banks or the state.

Part of what makes the Tea Party so confusing to elite commentators is that populism can be democratic or authoritarian and often combines elements of both: Huey Long, the populist Governor of Louisiana from 1928-1932, is an example of the integration of democratic and authoritarian elements. However, I would suggest that a better way to frame the different currents found in populist movements is as "political" and "anti-political."

Political populism embodies a conception of politics that works to re-instate plurality and inhibit totalizing monopolies (whether of the state or market) through common action and deliberation premised on personal participation in and responsibility for tending public life. Saul Alinsky and broad-based community organizing, as exemplified in the work of the Industrial Areas Foundation and CitizensUK, are paradigmatic forms of this kind of populism. Within American history, Alinsky framed his approach to politics as being both Revolutionary and Tory. He was very critical both of concentrations of economic power and of what he called "welfare colonialism" that kept the poor poor. At the same time he emphasised the importance of working with existing traditions and institutions. Such an approach can be distinguished from liberalism, socialism, communism and the majority of modern political theories, which view tradition with suspicion and as a hindrance to an emancipatory politics, not the basis of it.

By contrast, anti-political populism seeks to simplify rather than complexify the political space. It advocates direct forms of democracy in order to circumvent the need for deliberative processes and the representation of multiple interests in the formation of political judgments. The leader rules by direct consent without the hindrance of democratic checks and balances or the representation of different interests. In anti-political populism, the throwing off of established authority structures is the prelude to the giving over of authority to the one and the giving up of responsibility for the many. The goal of anti-political populism is personal withdrawal from public life so as to be free to pursue private self-interests rather than public mutual interests (this seems a particular characteristic of the contemporary Tea Party movement). Personal responsibility is for improvement of the self, one's immediate family or community, disconnected from the interdependence of any such project with the care of the public institutions, liberties, rule of law, physical infrastructure and natural resources that make up the commonwealth on which all depend.
In short, while political populism seeks to generate a politics of the common good, anti-political populism pursues a politics dominated by the interests of the one, the few or the many. Michael Kazin wrestles with the problem of why populism underwent a "conservative capture" in America from the 1940s. Yet the narrative of decline he tells does not help us make sense of the Tea Party movement. Populism always contains political and anti-political elements, and sometimes these elements receive a greater or lesser emphasis within particular expressions of populism. We can contrast the various expressions of primarily anti-political populism, such as the Ku Klux Klan, Father Coughlin and the Coughlinites of the late 1930s, McCarthyism, Ross Perot, and latterly the Tea Party activists, with the primarily political populism of Alinsky's Industrial Areas Foundation and other broad-based community organizations such as PICO and National People's Action. This, I think, has implications for the Big Society reform agenda in the UK. England had its own peculiar form of political populism. One of the foremost scholars of populism, Margaret Canovan, sees G. K. Chesterton as a populist. Chesterton developed an account of political economy that was neither socialist nor capitalist and highly critical of both statism and what we now call neo-liberalism. In the contemporary context one could plausibly interpret Philip Blond's "Red Tory" and Maurice Glasman's "Blue Labour" visions as attempts to construct different versions of a distinctly English political populism. Within Conservatism, the Big Society vision, with its emphasis on localism, democratization and civil society, creates a space for both Blond's political populism but also for the more Tea Party-like anti-political populism of the Tax Payers Alliance. On the left, Ed Miliband has appointed Glasman to the House of Lords to develop the Labour response to the Big Society, while his brother David is committed to developing the Movement for Change, which builds on community organizing, as a way to renew Labour as a social movement. While it is unlikely that anything like the Tea Party will develop in the UK, the irony is that populist themes could point the way for the electoral renewal of Labour: for Americanism insert Englishness; for producerism, insert labour; for small government, insert a critique of the dominance of privileged elites; and for the sense of a moral crusade insert the need to protect the common life, common land, common institutions and the customary practices of ordinary working people. Luke Bretherton is senior lecturer in Theology and Politics, and convenor of the Faith and Public Policy Forum at King's College, London. His most recent book is Christianity and Contemporary Politics: The Conditions and Possibilities of Faithful Witness (Wiley-Blackwell, 2010), and he is currently writing a book on community organizing and democratic citizenship. A different version of this article will appear in The Tablet.
The common name of an insect will likely depend on where you live—or where you grew up! Red Velvet Ants (or the common name of your choice) are a typical example. To add to the confusion, this insect is a wasp, despite what its common name implies! Regardless of which continent you live on or what language you speak, if you include the scientific name Dasymutilla occidentalis (Linnaeus) for the insect pictured to the left, then everyone (scientists and home gardeners alike) would understand which insect you are referencing.

For example, every so often television, newspapers and other media blast alerts about outbreaks of “E. coli” or “Salmonella” in contaminated meats or vegetables in the national food delivery system. Most individuals understand the significance of these two terms and why they should be aware of the potential impact that “E. coli” or “Salmonella” can have on human health. These terms refer to two important bacterial pathogens (Escherichia coli and Salmonella spp.). As you can now see, you are likely already on your way to comprehending scientific names.

If you are familiar with rhododendrons, then their scientific name (Rhododendron) will likewise be familiar. Most gardeners are familiar with asters, and thus they would be at ease with the genus that many—but not all—asters are classified within (Aster). There are many, many examples of common names being identical or very similar to the genus name or species name. The point at hand is that scientific names serve a valuable function and they should not instill negative perceptions.

Origin & Purpose of Scientific Names
Every recognized species on earth (at least in theory) is given a two-part scientific name. This system is called "binomial nomenclature." These names are important because they allow people throughout the world to communicate unambiguously about animal species and plant species. This naming system works because there are sets of international rules about how to name animals and plants. Biologists try to avoid naming the same thing more than once, though this does sometimes happen. These naming rules mean that every scientific name is unique. The same name is used all over the world, by all scientists and in all languages, to avoid difficulties of translation.

Binomial nomenclature is also referred to as the 'Binomial Classification System'. This naming system is used by scientists throughout the world. It was established by the great Swedish botanist and physician Carolus Linnaeus (1707–1778). He attempted to describe the entire known natural world and gave each distinct animal and plant known at that time a two-part name.

The Genus and Species Concept
If the spelling of genus and species terms sounds like Greek to you . . . then you're on track in many cases. Every species can be unambiguously identified with just two words. The genus name and species name may come from any source whatsoever. Often they are Latin words, but they may also come from Ancient Greek, from a place, from a person, a name from a local language, etc. In fact, taxonomists come up with specific descriptors from a variety of sources, including inside jokes and puns. Scientific names sometimes bear the names of people who were instrumental in discovering or describing the species. Finally, some scientific names reflect the common names given by people living in the region. Scientific names are treated grammatically as if they were a Latin phrase.
For this reason the name of a species is sometimes called its "Latin name," although this terminology is frowned upon by biologists, who generally prefer the phrase "scientific name." The genus name must be unique inside each kingdom (i.e., the Animal Kingdom or the Plant Kingdom). However, species names are commonly reused, and are usually an adjectival modifier to the genus name, which is a noun. Family names are often derived from a common genus within the family.

The Value of Scientific Names
Unlike scientific names, common names are not unique. Many common names may be easier to remember (and pronounce) than scientific names, but common names are not as precise. The common name of a particular insect (or other animal or plant) might apply to several very different insects. Conversely, a single species can oftentimes be known by an array of very different common names! As a result, common name usage can lead to confusion about what animal is being referred to and what its relationships are to other animals.

Some Basic Guidelines for Using Scientific Names
• Scientific names are usually printed in italics, such as Homo sapiens (which refers to humans); when handwritten, they should be underlined. Example: Chrysoperla carnea (italicized in print or underlined by hand).
• The first term (genus name) is always capitalized, while the second term (species name) never is, even when derived from a proper name.
• When used with a common name, the scientific name usually follows in parentheses. Example: Green lacewing (Chrysoperla carnea)
• A scientific name should generally be written in full when first cited or used. Example: Escherichia coli
• After a scientific name is written in full in an article, it is acceptable (and customary) to abbreviate the genus name by using just the first initial followed by a period. Example: E. coli (NOTE #1: On rare occasion, an abbreviated form has taken on general use in everyday conversation—as in the case of the bacterium Escherichia coli, which is often referred to as just E. coli, as indicated earlier. NOTE #2: We elected to state the full scientific name of insects referenced in this web page, as doing otherwise may cause confusion to readers.)
• Some species have come to be known by multiple scientific names. In such cases, one name is chosen for the species and the other names are referred to as "synonyms" of the species name.
• What do the "sp." and "spp." designations refer to? The "sp." is an abbreviation for "species." It is used when the actual species name cannot be, need not be, or is not specified. The plural form of this abbreviation is "spp." and indicates "several species." Example: Chrysoperla sp. (when referring to a single species) and Chrysoperla spp. (when referring to several species within the genus).

Do not be intimidated by scientific names or discouraged about the difficulty of trying to pronounce them. From asters (Aster spp.) to zinnias (Zinnia spp.) and from chrysanthemums (Chrysanthemum spp.) to camellias (Camellia spp.), numerous plants around the home landscape or garden that gardeners may be familiar with are also known by their scientific "first" names (genus). Remember the case of E. coli and you will better appreciate that scientific names are not an impossible study.

This web site is maintained by Master Gardener Laura Bellmore, under the direction of William M. Johnson, Ph.D., County Extension Agent-Horticulture & Master Gardener Program Coordinator.
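Because the conventions above are mechanical (capitalize the genus, lowercase the species epithet, abbreviate the genus after first mention), they are easy to apply automatically. The short Python sketch below is purely illustrative; the function name and example inputs are my own and are not part of any standard taxonomy library.

```python
# Minimal sketch of the naming conventions described above.
# Hypothetical helper, not from any existing library.

def format_scientific_name(genus: str, species: str, first_mention: bool = True) -> str:
    """Return a binomial name: genus capitalized, species epithet lowercase,
    genus abbreviated to its first initial after the first mention."""
    genus = genus.strip().capitalize()   # genus name is always capitalized
    species = species.strip().lower()    # species epithet is never capitalized
    if first_mention:
        return f"{genus} {species}"      # e.g. "Escherichia coli"
    return f"{genus[0]}. {species}"      # e.g. "E. coli"


if __name__ == "__main__":
    print(format_scientific_name("escherichia", "Coli"))                       # Escherichia coli
    print(format_scientific_name("escherichia", "coli", first_mention=False))  # E. coli
    print(format_scientific_name("Chrysoperla", "carnea"))                     # Chrysoperla carnea
```

In running text the name would additionally be italicized, which is a matter of typesetting rather than of the name itself.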
How are these used in conjunction with the Fifth and Fourteenth Amendments in protecting individual rights?

I would say that the Fifth and Fourteenth Amendments are critical in speaking for those who lack a consensus-based voice. The framers were rather brilliant in ensuring that the rights of the accused were included in the original Bill of Rights. They understood quite perceptively that the rights of all individuals are fairly worthless if the worst of society are not afforded the same entitlements. When one group's rights are sacrificed, the rest are not far behind. Slippery-slope arguments notwithstanding, the Fifth Amendment's protection of the accused served as a benchmark of fulfilling the promise of the new nation. Such an establishment of due process was extended in the Fourteenth Amendment's protection for people of color, individuals who had been excluded from the Constitution at the time of framing. In recognition of these individuals' rights, and of the overall notion of speaking for those who were not originally spoken for, the Fourteenth Amendment's guarantee continues the fulfillment of the promise of a nation.

I think that you are probably talking about the other amendments in the Bill of Rights when you say "these." The 5th and 14th Amendments prevent Congress and the states, respectively, from infringing on people's rights to life, liberty, and property without the "due process of law." What that has been taken to mean is that the government cannot take your liberty without convicting you of a crime. The courts have, for about 100 years, been using the 14th Amendment to say that the states must also protect the rights mentioned in the Bill of Rights. They have said that those rights are part of a person's "liberty."
Monoprinting is a form of printmaking that has lines or images that can only be made once, unlike most printmaking, where there are multiple originals. There are many techniques of monoprinting. Examples of standard printmaking techniques which can be used to make monoprints include lithography, woodcut, and etching. Types of monoprints A monoprint is a single impression of an image made from a reprintable block. Materials such as metal plates, litho stones or wood blocks are used for etching upon. Rather than printing multiple copies of a single image, only one impression may be produced, either by painting or making a collage on the block. Etching plates may also be inked in a way that is expressive and unique in the strict sense, in that the image cannot be reproduced exactly. Monoprints may also involve elements that change, where the artist reworks the image in between impressions or after printing so that no two prints are absolutely identical. Monoprints may include collage, hand-painted additions, and a form of tracing by which thick ink is laid down on a table, paper is placed on top and is then drawn on, transferring the ink onto the paper. Monoprints can also be made by altering the type, color, and pressure of the ink used to create different prints. When you create a monoprint, it is possible to copy work from separate pieces of artwork onto one monoprint. Monoprints are known as the most painterly method among the printmaking techniques; it is essentially a printed painting. The characteristic of this method is that no two prints are alike. The beauty of this medium is also in its spontaneity and its combination of printmaking, painting and drawing media. Monoprinting and monotyping are very similar. The difference between monoprinting and monotype printing is that monoprinting has a matrix that can be reused, but not to produce an identical result. With monotyping there are no permanent marks on the matrix, and at most two impressions (copies) can be obtained. Both involve the transfer of ink from a plate to the paper, canvas, or other surface that will ultimately hold the work of art. In the case of monotyping the plate is a featureless plate. It contains no features that will impart any definition to successive prints. The most common feature would be the etched or engraved line on a metal plate. In the absence of any permanent features on the surface of the plate, all articulation of imagery is dependent on one unique inking, resulting in one unique print. Monoprints, on the other hand, are the results of plates that have permanent features on them. Monoprints can be thought of as variations on a theme, with the theme resulting from some permanent features being found on the plate – lines, textures – that persist from print to print. Variations are confined to those resulting from how the plate is inked prior to each print. The variations are endless, but certain permanent features on the plate will tend to persist from one print to the next. Monoprinting has been used by many artists, among them Georg Baselitz. Some old master prints, like etchings by Rembrandt with individual manipulation of ink as "surface tone", or hand-painted etchings by Degas (usually called monotypes) might be classifiable as monoprints, but they are rarely so described.
The queen bee is the only fertile female in the bee colony. Her task is to lay eggs, thus enabling the survival of the colony, and to hold the colony together by secreting pheromones. Females (workers and queens) develop from fertilized eggs, while males (drones) develop from unfertilized eggs. One bee colony can have only one queen bee; if more than one queen bee hatches, the colony will divide by natural swarming. The queen bee is fertilized by drones. During her life cycle the queen bee leaves the beehive only once, when mating, and exceptionally if swarming occurs.

The fertilized queen lays eggs into honeycomb cells. Workers hatch from the majority of eggs, drones hatch from a small number, and queens hatch from several specially built queen cups. The queen bee hatches from a fertilized egg, and whether the larva develops into a queen depends on how it is fed. The queen bee can lay from 2,500 to 5,000 eggs a day.

The queen differs from the worker bee in her appearance too. Her body is much longer and her abdomen is of a lighter (bronze) colour. Her legs are longer and her back is hairless. She does not have pollen baskets on her back legs for collecting pollen (as worker bees do), and the shape of her sting is different from that of the worker bee. The queen bee moves through the beehive in the direction of the Sun: in the morning she is in the eastern part of the beehive, at noon she is between the middle frames, and in the evening she is on the western side.
Occasionally, curious individuals want to know the origins of composting. It is difficult to attribute the birth of composting to a specific individual or even one society. The ancient Akkadian Empire in the Mesopotamian Valley referred to the use of manure in agriculture on clay tablets 1,000 years before Moses was born. There is evidence that the Romans, the Greeks and the Tribes of Israel knew about compost. The Bible and Talmud both contain numerous references to the use of rotted manure and straw, and references to compost are contained in tenth- and twelfth-century Arab writings, in medieval Church texts, and in Renaissance literature. Notable writers such as William Shakespeare, Sir Francis Bacon and Sir Walter Raleigh all mentioned the use of compost.

On the North American continent, the benefits of compost were enjoyed by both Native Americans and early European settlers. Many New England farmers made compost using a recipe of 10 parts muck to 1 part fish, periodically turning their compost heaps until the fish disintegrated (except the bones). One Connecticut farm, Stephen Hoyt and Sons, used 220,000 fish in one season of compost production. Other famous individuals who produced and promoted the use of compost include George Washington, Thomas Jefferson, James Madison, and George Washington Carver.

The early 20th century saw the development of a new "scientific" method of farming. Work done in 1840 by the well-known German scientist Justus von Liebig proved that plants obtained nourishment from certain chemicals in solution. Liebig dismissed the significance of humus because it was insoluble in water. After that discovery, agricultural practices became increasingly chemical in nature. Combinations of manure and dead fish did not look very effective beside a bag of fertilizer. For farmers in many areas of the world, chemical fertilizers replaced compost.

Sir Albert Howard, a British agronomist, went to India in 1905 and spent almost 30 years experimenting with organic gardening and farming. He found that the best compost consisted of three times as much plant matter as manure, with materials initially layered in sandwich fashion and then turned during decomposition (known as the Indore method). In 1940, Howard published a book, An Agricultural Testament, based on his work. The book renewed interest in organic methods of agriculture and earned him recognition as the modern-day father of organic farming and gardening. J.I. Rodale carried Howard's work further and introduced American gardeners to the value of composting for improving soil quality. He established a farming research center in Pennsylvania and the monthly Organic Gardening magazine. Now, organic methods in gardening and farming are becoming increasingly popular, and a growing number of farmers and gardeners who once relied on chemical fertilizers are realizing the value of compost for plant growth and for restoring depleted soil.
Barium takes its name from the Greek word barys, meaning "heavy." Barium was first identified in 1774 by Carl Scheele, but it was not isolated as a pure metal until 1808, when Sir Humphry Davy electrolyzed molten barium salts. Barium is a soft, silvery-white metal with a melting point of about 1000 K. Because of its reactivity with air, barium is not found in nature in its pure form, but it can be extracted from the mineral barite.
- Atomic number = 56
- Molar mass = 137.3 g mol-1
- Electron configuration = [Xe]6s2
- Density = 3.51 g cm-3
Like the lighter members of its family, barium reacts vigorously with water to produce hydrogen gas, and so it is commonly stored in oil.
Abundance and Extraction
The metal does not occur free in nature but chiefly as the sulfate and the carbonate. The sulfate is used in X-ray diagnostics as a contrast medium (e.g., for imaging soft tissue such as the digestive tract). There are seven stable isotopes of naturally occurring barium: 130Ba, 132Ba, 134Ba, 135Ba, 136Ba, 137Ba, and 138Ba. In total, twenty-two isotopes are known to exist, but most of them are highly radioactive and have relatively short half-lives. Barium sulfate (BaSO4), or barite, is the most common barium-bearing mineral. It has a density of 4.5 g/cm3 and is extremely insoluble in water. Uses of barium sulfate include serving as a radiocontrast agent for X-ray imaging of the digestive system. Barium carbonate (BaCO3) is also commonly used as a rat poison. Barium compounds (which are toxic) are also useful in pyrotechnic devices, where they impart a characteristic green color.
Stephen R. Marsden (ChemTopics)
below the resonant frequency point. The two points are designated the upper frequency cutoff and the lower frequency cutoff, or simply f2 and f1. The range of frequencies between these two points comprises the bandwidth. Views (A) and (B) of figure 1-12 illustrate the bandwidths for low- and high-Q resonant circuits. The bandwidth may be determined by use of the following formulas:

BW = f2 - f1    and    BW = fr / Q

where fr is the resonant frequency and Q is the quality factor of the circuit. By applying these formulas, we can determine the bandwidth for the curve shown in figure 1-12, view (A). If the Q of the circuit represented by the curve in figure 1-12, view (B), is 45.5, what would be the bandwidth? If Q equals 7.95 for the low-Q circuit as in view (A) of figure 1-12, we can check our original calculation of the bandwidth by dividing the same resonant frequency by 7.95.
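A quick way to see how these relations behave is to compute them for the two Q values mentioned above. The Python sketch below assumes an arbitrary resonant frequency of 100 kHz for illustration only; it is not the value plotted in figure 1-12, and the symmetric placement of f1 and f2 about fr is the usual approximation for reasonably high Q.

```python
# Illustrative sketch of the bandwidth relations above: BW = fr / Q,
# with f1 and f2 taken symmetrically about fr (an approximation).
# The resonant frequency below is assumed, not taken from figure 1-12.

def bandwidth(resonant_freq_hz: float, q_factor: float) -> tuple[float, float, float]:
    """Return (bandwidth, lower cutoff f1, upper cutoff f2) for a resonant circuit."""
    bw = resonant_freq_hz / q_factor      # BW = fr / Q
    f1 = resonant_freq_hz - bw / 2.0      # lower half-power point (approx.)
    f2 = resonant_freq_hz + bw / 2.0      # upper half-power point (approx.)
    return bw, f1, f2

if __name__ == "__main__":
    fr = 100_000.0                        # assumed 100 kHz resonant frequency
    for q in (7.95, 45.5):                # the low-Q and high-Q values cited above
        bw, f1, f2 = bandwidth(fr, q)
        print(f"Q = {q}: BW = {bw:.0f} Hz, f1 = {f1:.0f} Hz, f2 = {f2:.0f} Hz")
```

Running this shows the expected behavior: the higher-Q circuit has the narrower bandwidth, since BW shrinks in direct proportion as Q grows.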
According to the American Academy of Orthopaedic Surgeons, about 4 million people in the United States seek medical care each year for shoulder sprain, strain, dislocation, or other problems. Each year, shoulder problems account for about 1.5 million visits to orthopaedic surgeons--doctors who treat disorders of the bones, muscles, and related structures. What Are the Structures of the Shoulder and How Does the Shoulder Function The shoulder joint is composed of three bones: the clavicle (collarbone), the scapula (shoulder blade), and the humerus (upper arm bone) (see diagram). Two joints facilitate shoulder movement. The acromioclavicular (AC) joint is located between the acromion (part of the scapula that forms the highest point of the shoulder) and the clavicle. The glenohumeral joint, commonly called the shoulder joint, is a ball-and-socket type joint that helps move the shoulder forward and backward and allows the arm to rotate in a circular fashion or hinge out and up away from the body. (The "ball" is the top, rounded portion of the upper arm bone or humerus; the "socket," or glenoid, is a dish-shaped part of the outer edge of the scapula into which the ball fits.) The capsule is a soft tissue envelope that encircles the glenohumeral joint. It is lined by a thin, smooth synovial membrane. The bones of the shoulder are held in place by muscles, tendons, and ligaments. Tendons are tough cords of tissue that attach the shoulder muscles to bone and assist the muscles in moving the shoulder. Ligaments attach shoulder bones to each other, providing stability. For example, the front of the joint capsule is anchored by three glenohumeral ligaments. The rotator cuff is a structure composed of tendons that, with associated muscles, holds the ball at the top of the humerus in the glenoid socket and provides mobility and strength to the shoulder joint. Two filmy sac-like structures called bursae permit smooth gliding between bone, muscle, and tendon. They cushion and protect the rotator cuff from the bony arch of the acromion. The shoulder is the most movable joint in the body. However, it is an unstable joint because of the range of motion allowed. It is easily subject to injury because the ball of the upper arm is larger than the shoulder socket that holds it. To remain stable, the shoulder must be anchored by its muscles, tendons, and ligaments. Some shoulder problems arise from the disruption of these soft tissues as a result of injury or from overuse or underuse of the shoulder. Other problems arise from a degenerative process in which tissues break down and no longer function well. Shoulder pain may be localized or may be referred to areas around the shoulder or down the arm. Disease within the body (such as gallbladder, liver, or heart disease, or disease of the cervical spine of the neck) also may generate pain that travels along nerves to the shoulder. Following are some of the ways doctors diagnose shoulder problems: - Medical history (the patient tells the doctor about an injury or other condition that might be causing the pain). - Physical examination to feel for injury and discover the limits of movement, location of pain, and extent of joint instability. - Tests to confirm the diagnosis of certain conditions. Some of these tests include: - x ray - arthrogram--Diagnostic record that can be seen on an x ray after injection of a contrast fluid into the shoulder joint to outline structures such as the rotator cuff. 
In disease or injury, this contrast fluid may either leak into an area where it does not belong, indicating a tear or opening, or be blocked from entering an area where there normally is an opening.
- MRI (magnetic resonance imaging)--A non-invasive procedure in which a machine produces a series of cross-sectional images of the shoulder.
- Other diagnostic tests, such as injection of an anesthetic into and around the shoulder joint, are discussed in specific sections of this booklet.
Source: The National Institute of Arthritis and Musculoskeletal and Skin Diseases
Last reviewed: May 2001